Ask the Directory Services Team

AD FS 2.0 and AD FS 1.x Interoperability


Hi, it’s Adam Conkle again. I am excited about our recent release of AD FS 2.0 on May 5, and I wanted to post about AD FS 2.0 and AD FS 1.x interoperability as soon as possible, since I think it will be a common scenario for our customers.

AD FS 2.0 and AD FS 1.x interoperability was a priority for this release, and full interoperability with AD FS 1.x federation servers (security token services, or STSs) and the Claims-aware Web Agent is supported in AD FS 2.0.

Note: AD FS 2.0 does not support the AD FS 1.x Windows Token Web Agent.

To federate between the two versions of AD FS, there are some requirements we need to address manually.

AD FS 1.x includes three claim types:

  1. Identity – claims that uniquely identify the user
  2. Group – claims that indicate security group membership
  3. Custom – any other attribute you need to extract and send (e.g., given name, surname, office, phone)

Below, I will describe methods to send claims from AD FS 2.0 which satisfy the AD FS 1.x claim requirements.

We will be extracting several claims from an Attribute Store, and, before we begin, we need to understand where the extractions should take place. You can extract from the Attribute Store either on the Claims Provider (CP) Trust or on the Relying Party (RP) Trust. Both will work; the difference is this:

Claims Pipeline

When you extract using the CP Trust rules, the claims are injected into the policy processing pipeline early in the process. Then, on the RP Trust, we execute the Issuance Authorization Rules and can make RP authorization decisions based on claims that we already have in the pipeline.

When you extract using the RP Trust rules, the claims are injected into the pipeline later in the process, and any Issuance Authorization Rules you have configured for the RP Trust will not apply to claims which are issued here since the authorization rules have already executed.

Name Identifier (Name ID) claim in the SAML subject

In AD FS 1.x, we require at least one Identity Claim (UPN, Email, or Common Name). The Identity Claim is sent as the subject of the SAML 1.1 assertion as a claim called Name Identifier (Name ID). Name Identifier also has a format property which equals the URI of the primary Identity Claim sent. For example, if userPrincipalName (UPN) is sent as the primary Identity Claim, then the Name Identifier claim is specified with the format of UPN with the claim value equal to the UPN of the authenticating user.

Here is a snippet of AD FS 1.x SAML assertion showing the Name Identifier claim:

AD FS 1.x SAML Assertion
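The screenshot is not reproduced here, but the relevant portion of the assertion looks roughly like this (the user name is an example value):

```xml
<saml:Subject>
  <!-- Format identifies UPN as the primary Identity Claim -->
  <saml:NameIdentifier Format="http://schemas.xmlsoap.org/claims/UPN">
    jdoe@fabrikam.com
  </saml:NameIdentifier>
</saml:Subject>
```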

AD FS 2.0 has no concept of Identity Claims, and it does not automatically send a Name ID claim. This must be configured manually so that AD FS 2.0 sends the Name ID claim to the AD FS 1.x server in the format the AD FS 1.x server expects. Simply issuing a UPN claim from AD FS 2.0 to AD FS 1.x does not satisfy the requirement; a Claim Rule is needed.

Let’s go to the AD FS 2.0 STS, and let’s also assume that AD FS 2.0 is the identity provider (IdP) for this federation scenario. We will configure an Acceptance Transform Rule for the Active Directory CP Trust which extracts userPrincipalName from Active Directory as the UPN claim type (URI). Creating this rule on the CP Trust will place the UPN claim into the pipeline prior to any RP Trust rules firing.

When you create this rule, use the Send LDAP Attributes as Claims template in the rule editor.

Figure 1 – Selecting a claim rule template to extract from AD

Figure 2 and Figure 3 show a rule named Extract UPN from AD which has been added to the claim rules for the Active Directory CP Trust.

Figure 2 – Configuring the Claim Rule to extract UPN from AD

Figure 3 – New rule has been created to extract UPN from AD
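For reference, behind the rule editor the Send LDAP Attributes as Claims template generates a rule in the AD FS claim rule language. A rule equivalent to Extract UPN from AD looks roughly like this (the claim type URIs shown are the AD FS 2.0 defaults):

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory",
          types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"),
          query = ";userPrincipalName;{0}",
          param = c.Value);
```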

Now that we have a UPN claim in the pipeline, the next step is to transform this claim into the format that AD FS 1.x requires: Name ID. We perform the transformation with an Issuance Transform Rule on the AD FS 1.x RP Trust.

To create a transformation rule, use the Transform an Incoming Claim template in the rule editor.

Figure 4 – Selecting a claim rule template to transform a claim

Configure the rule to transform from the Incoming claim type: UPN to the Outgoing claim type: Name ID. Once you select Name ID as the outgoing claim type, the Outgoing name ID format drop-down box becomes available so that we can select the UPN format for Name ID. We want to pass the value of the user’s UPN to the new claim, so select Pass through all claim values.

Figure 5 – Configuring the transformation rule for Name ID
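Viewed in the claim rule language, the resulting transform rule looks something like this — the Properties entry is what carries the Name ID format that AD FS 1.x expects:

```
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
          Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer,
          Value = c.Value, ValueType = c.ValueType,
          Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"]
            = "http://schemas.xmlsoap.org/claims/UPN");
```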

When a user authenticates to the AD FS 2.0 STS, the UPN will be extracted from AD during execution of the AD CP Trust rules and injected into the pipeline. Next, the processing rules are executed for the AD FS 1.x RP Trust, and UPN will be transformed to Name ID with the UPN format. The value of the user’s UPN is maintained for the outgoing claim to the RP (AD FS 1.x).

AD FS 1.x Identity Claims

Now that we have the Name ID claim handled, we still need to send the appropriate Identity Claim(s) to AD FS 1.x. For that purpose, AD FS 2.0 includes Claim Descriptions for AD FS 1.x UPN and AD FS 1.x E-Mail Address.

Figure 6 – AD FS 1.x Claim Descriptions shown in the Claim Descriptions node of AD FS 2.0

We simply need to extract them from our Attribute Store (AD, in my case), and send them to the AD FS 1.x RP.

Figure 7 shows how to configure the claim rule to extract UPN and E-Mail as AD FS 1.x UPN and AD FS 1.x E-Mail Address. I have chosen to create this rule on the CP Trust.

Figure 7 – Configuring a rule to extract AD FS 1.x claims from AD

In Figure 8, I have selected the Pass Through or Filter an Incoming Claim template so I can pass the AD FS 1.x claim types to the AD FS 1.x RP. This rule is created on the RP Trust so that the claims accepted from the AD CP Trust can be passed to the RP.

Figure 8 – Selecting the claim rule template to pass a claim through to the RP

Finally, in Figure 9, I have configured the rule to Pass through all claim values for the Incoming claim type: AD FS 1.x UPN. You will need to create another set of extraction and pass through rules to handle the AD FS 1.x E-Mail Address or Common Name claim types if you wish to send them.

Figure 9 - Configuring a claim rule to pass through the value of the AD FS 1.x UPN claim type
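A pass-through rule like the one in Figure 9 is a one-liner in the claim rule language (the URI is the built-in AD FS 1.x UPN claim type):

```
c:[Type == "http://schemas.xmlsoap.org/claims/UPN"]
 => issue(claim = c);
```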

AD FS 1.x Group Claims

Next, you may need to send Group claims to AD FS 1.x. AD FS 2.0 comes with a built-in Claim Description named Group which has the URI that AD FS 1.x expects for Group Claims. This can be seen on the Claim Descriptions node in the AD FS 2.0 MMC console.

Figure 10 – Group Claim Description on the Claim Descriptions node of AD FS 2.0

AD FS 2.0 also has a Claim Rule template named Send Group Membership as a Claim which allows you to select a security group, and send a claim based on membership of that group. I have chosen to create this rule on the RP Trust.

Figure 11 – Selecting the claim rule template to send a claim based on group membership

Now, all we need is the AD FS 1.x Incoming Group Claim Mapping name the AD FS 1.x administrator is expecting on the resource federation server. For our example, let’s call it SharePointUsersMapping. Our rule looks like this:

Figure 12 – Configuring a claim rule to send a Group claim based on group membership

Since I created my rule on the RP Trust this time, there is no need to create a pass-through rule in order for the claim to be sent to the RP.
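Under the hood, the Send Group Membership as a Claim template generates a rule along these lines — the group SID shown is a placeholder for the SID of the security group you selected:

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
   Value == "S-1-5-21-0-0-0-513", Issuer == "AD AUTHORITY"]
 => issue(Type = "http://schemas.xmlsoap.org/claims/Group",
          Value = "SharePointUsersMapping",
          Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, ValueType = c.ValueType);
```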

AD FS 1.x Custom Claims

We’re on the home stretch now! Finally, we may need to send additional claims to AD FS 1.x which are neither Identity Claims nor Group Claims; they are AD FS 1.x Custom Claims. As an example, I have extended my AD schema to include a user attribute named costCenter. I want to send the users’ cost center as a claim when they authenticate to a resource hosted by my AD FS 1.x partner.

I need to create a new Claim Description to handle this on the AD FS 2.0 STS, but I need to do a bit of background work before I create the new Claim Description. If I take a look at a SAML assertion from AD FS 1.x which contains a Custom Claim, it looks like this:

AD FS 1.x SAML Assertion containing a Custom Claim
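In XML form, such a Custom Claim is carried as a SAML attribute roughly like this (the claim name matches the worked example below; the value is made up):

```xml
<saml:Attribute AttributeName="FirstNameMapping"
                AttributeNamespace="http://schemas.xmlsoap.org/claims">
  <saml:AttributeValue>John</saml:AttributeValue>
</saml:Attribute>
```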

The way the SAML assertion is constructed here is as follows:

1. The full URI of the claim type is passed in

2. Everything after the last “/” in the URI is stripped off of the URI and is used as the AttributeName property

3. Everything before the last “/” in the URI is used as the AttributeNamespace property

Consider the following full URI:

http://schemas.xmlsoap.org/claims/FirstNameMapping

Now, run that through the steps above:

1. http://schemas.xmlsoap.org/claims/FirstNameMapping is passed in

2. FirstNameMapping is stripped off of the URI and is used as the AttributeName property

3. http://schemas.xmlsoap.org/claims is used as the AttributeNamespace property

I’m ready to create my Claim Description for costCenter, which is accomplished on the Claim Descriptions node of the AD FS 2.0 MMC console:

Figure 13 – Configuring a new Claim Description for Cost Center

The AD FS 1.x administrator will need to create an Incoming Custom Claim Mapping named costCenter so that the incoming claim is mapped.

The last thing we need to do on the AD FS 2.0 federation server is create a rule using the Send LDAP Attributes as Claims template to extract the costCenter attribute from AD and send it to AD FS 1.x as our new claim type. I have chosen to do this on the RP Trust which, again, negates the need for an additional pass-through rule to the RP.

Figure 14 – Configuring a rule template to extract costCenter from AD and send as Cost Center
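Assuming the new Claim Description was created with the URI http://schemas.xmlsoap.org/claims/costCenter (so that AD FS 1.x parses the AttributeName as costCenter, per the steps above), the generated extraction rule looks roughly like this:

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory",
          types = ("http://schemas.xmlsoap.org/claims/costCenter"),
          query = ";costCenter;{0}",
          param = c.Value);
```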

We’re done! This blog post covers AD FS 2.0 as the Claims Provider (Identity Provider) and AD FS 1.x as the Relying Party (Resource Partner) because it is AD FS 1.x that has the special claims requirements. You can certainly configure AD FS 1.x as the Claims Provider and AD FS 2.0 as the Relying Party, but that deployment should be more straightforward since AD FS 2.0 simply needs to be configured to accept the incoming claims from AD FS 1.x. Thanks for reading, and please let me know if there are any questions or points needing clarification.

Thanks!

Adam “So He Claims” Conkle


Enabling CEP and CES for enrolling non-domain joined computers for certificates


Hey all, Rob here again. I thought I would expand upon my last blog describing Certificate Enrollment Web Services by covering some of the different configurations that are possible.

As a refresher, the Certificate Enrollment Policy (CEP) and Certificate Enrollment Service (CES) web services abstract certificate policy and certificate enrollment from a specific Active Directory forest, allowing clients in a different forest -- or in no forest at all -- to request and obtain certificates.

So here is a simple network diagram of what I am setting up in this blog post.

NetworkDiagram

A non-domain joined computer on the Internet needs to be able to enroll for certificates from a Microsoft Enterprise Certification Authority. We are configuring the CEP/CES web services to interact with the Internet-based computer and this computer has no network connectivity to domain controllers or certification authorities behind the firewall. You could also further isolate the domain controllers and certification authorities by placing the CEP/CES server(s) in a perimeter network with another firewall between CEP/CES and the internal network.

Installing CEP and CES Role Services

First you need to install the CEP and CES roles on the member server Win2K8R2-MEM1.

  1. Launch Server Manager.
  2. Click on Roles in the tree view.
  3. In the right hand pane click on “Add Roles”.
  4. Click the Next button.
  5. Check the box “Active Directory Certificate Services”.
  6. Click Next button twice.
  7. Uncheck “Certification Authority”.
  8. Check “Certificate Enrollment Web Service”.
  9. When you check the role, another dialog box will come up as shown below. Click the “Add Required Role Services” button.

    Figure 1 - Additional required role services 
  10. Check “Certificate Enrollment Policy Web Service”.

    Figure 2 - Adding CEP and CES Role Services 
  11. Click the Next button.
  12. Click the Browse button and select the CA to which this CES server will send certificate requests.

    Figure 3 - Select the certification authority 
  13. Click the Next button.
  14. Select “Username and password” as the Authentication Type. We are choosing this method because non-domain-joined computers do not have a domain account to pass to the web service.

    Figure 4 - Select Authentication Type 
  15. Click the Next button.
  16. Select “Use the built-in application pool identity”.

    NOTE: The default setting is “Specify service account (recommended)”. If you want to specify an account, see the advanced configuration section at the end of this post.

    Figure 5 - CES Application pool identity 
  17. Click the Next button twice.
  18. You will be given a screen listing IIS components. Do not make any changes; just click the Next button.
  19. Click the Install button.

Configuring SSL certificate for the websites

If you are well versed in getting certificates issued through IIS and in setting up websites to require SSL, you can skip this section.

  1. Open Internet Information Services (IIS) Manager.
  2. Select the server node in the tree view.
  3. In the right hand pane double click on Server Certificates.

    Figure 6 - Server certificates 
  4. Click on Create Domain Certificate.

    Figure 7 - Create domain certificate 
  5. The Create Certificate dialog box will be presented. Fill out the fields; the “Common name” field MUST be the DNS name that clients will use to connect to the CEP/CES services on the Internet. In Figure 8, I am using “cert-enroll.fabrikam.com” as the Internet DNS name. If you need a certificate with multiple DNS names, create it with a custom certificate request in the Certificates snap-in.

    Figure 8 - Create Certificate wizard 
  6. Once you have filled in the fields, click the Next button.
  7. Select the online certification authority.

    NOTE: If an enterprise certification authority is not listed, make sure that the Root CA certificate is in the Trusted Root Certification Authorities machine store. Also, the Create Certificate Wizard will attempt to enroll for a certificate based on the default Web Server template; if that template is not available, enrollment will fail.
  8. Type in a friendly name for the certificate.
  9. Click the Finish button.
  10. Back in the IIS Manager tree view, expand and select the Default Web Site node.
  11. In the right hand pane select SSL Settings.
  12. Click on Bindings for the action.

    Figure 9 - SSL Bindings
  13. Select the HTTPS binding, and click the Edit button.
  14. Select the certificate created in step 9 for the SSL certificate field.
  15. Click the OK button.
  16. Click the Close button.
  17. Select each web service virtual directory one at a time and do the following.
    1. Double click on SSL Settings.
    2. Verify Require SSL is checked.
  18. Lastly, give a friendly name to the CEP service. This friendly name shows up on client computers when users manually request certificates.
    1. Expand Default Web Site.
    2. Select the virtual directory ADPolicyProvider_CEP_UsernamePassword.
    3. Double click on Application Settings.
    4. Double click on the setting named FriendlyName.
    5. In the value field, enter a name that uniquely identifies your organization. See Figure 10 for an example.

      Figure 10 - Adding FriendlyName value

Modification of msPKI-Enrollment-Servers attribute

Now that you have the services installed and the IIS configuration completed, you need to focus on the URI handed out to client computers, which tells them where to send their enrollment requests.

  1. Run ADSIEdit.msc.
  2. Expand ADSIEdit to: CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=yourforestrootdomain,DC=com.
  3. In the right hand pane select the CA for which you are configuring CEP/CES.
  4. Right click and select Properties.
  5. Double click on the attribute msPKI-Enrollment-Servers. See Figure 11.

    Figure 11 - CA Enrollment Services object properties
  6. You need to modify the URL here to match the Internet-based URL that the client computers will be using. Use the Remove button, modify the URI, and then click the Add button to write the changed URI back to the attribute. Leave the leading numeric flags intact and change only the host name portion.
  7. So in my setup, here is what I changed:

    Original URI: 140https://win2k8r2-mem1.fabrikam.com/fabrikam%20Root%20CA1_CES_UsernamePassword/service.svc/CES

    Changed URI: 140https://cert-enroll.fabrikam.com/fabrikam%20Root%20CA1_CES_UsernamePassword/service.svc/CES
  8. Click the OK button.
  9. Exit ADSIEdit.msc.

Configuring the client computers

Alright, you are almost done with the setup now. The last thing you have to do is configure the clients to use the CEP/CES services.

Before clients will enroll for certificates against a certification authority hierarchy, they must have the public Root CA certificate in the computer’s trusted root store. This part of the configuration is probably the most difficult since these are not domain computers and you will have to rely on the user to follow the steps. There are two ways to do this – through the snap-in, or with a command-line that you could give to the user as a batch script.

Adding the Root certificate to the Trusted Root Store
  1. Log on with a local administrator account.
  2. You can add the Root CA certificate to the computer’s Trusted Root Certification Authorities store via the MMC:
    1. Open the Run command and type MMC.
      1. Select File then Add/Remove Snap-in
      2. Select Certificates, and click the Add > button.
      3. Select Computer Account, and click the Next button.
      4. Click the Finish button.
      5. Click OK
    2. Expand Certificates (Local Computer).
    3. Expand Trusted Root Certification Authorities.
    4. Right click on Certificates, and select All Tasks, and then select Import
      1. Certificate Import Wizard comes up.
      2. Click the Next button.
      3. Click the Browse… button and navigate to the CER file.
      4. Click the Next button.
      5. Leave the defaults, and click the Next button.
      6. Click the Finish button.
  3. Or you can run an elevated command prompt (Run as Administrator) and type the below command:

    CertUtil -AddStore Root <Root CA Public Certificate file name>

    For example:
    CertUtil -AddStore ROOT c:\fab-root-ca1.cer
Configuring the CEP web address in the client

Before I go into the steps, it is important to understand that this configuration is tied to a security context: there is one CEP configuration for the user and a separate configuration for the computer. Depending on which certificates you plan to issue (user or computer), you may only need to configure one of them.

Configuring user certificate enrollment

  1. Run CertMgr.msc.
  2. Expand Certificates, then Current User.
  3. Expand Personal.
  4. Right click on Personal, and select All Tasks, then Advanced Operations, then Manage Enrollment Policies
  5. On the Manage Enrollment Policies dialog click the Add… button. See Figure 12

    Figure 12 - Enrollment Policies dialog box
  6. Type in the URI for the CEP service in the field. This will be in the format of:

    https://<Internet FQDN>/ADPolicyProvider_CEP_UsernamePassword/service.svc/CEP

    In my example this would be:

    https://cert-enroll.fabrikam.com/ADPolicyProvider_CEP_UsernamePassword/service.svc/CEP

    NOTE: the only thing that will be unique to your environment is the Internet FQDN of the URI.
  7. In the Authentication type drop-down, select Username/password.
  8. Click the Validate button.
  9. Once the Validate button is pressed, you will be prompted for a domain user name and password. Supply these credentials.
  10. If everything goes correctly, you should see that the validation test passed in the lower section of the dialog box; see Figure 13.

    Figure 13 - Validation of Enrollment Policy Server configuration

    NOTE: You can see in Figure 13 that the only difference is the DNS portion of this URI. If you scroll down further in the validation output, you will see the friendly name you added under the website configuration being displayed also.
  11. Click the Add button.
  12. Uncheck Enable for automatic enrollment and renewal.

    NOTE: Failure to do so could cause users to be prompted for a user name and password each time they log on. Windows Autoenrollment runs immediately after logon; if the enrollment policy is configured for automatic enrollment and renewal, Autoenrollment will contact the configured CEP server to determine whether new certificates have been assigned, prompting the user for credentials at every logon.
  13. Click the OK button.

    NOTE: Follow the same procedures to configure the Enrollment Policy server for the computer personal store if you need to enroll for computer certificates.

Testing the configuration

With the user’s certificate store MMC still open, do the following:

  1. Right click on Personal, select All Tasks, and select Request New Certificate…
  2. Certificate Enrollment wizard is displayed.
    1. Click Next.
    2. You will see that our new CEP configuration is being displayed.

      Figure 14 - Certificate Enrollment CEP server 
    3. Click the Next button.
    4. You will be prompted for a domain user name and password. Type in a valid domain user name and password combination. Keep in mind that the account used here determines which certificate templates are displayed next. You can check the box “Remember my credentials”; however, if the account password changes, you will have to update the stored credentials in the credential vault (Credential Manager).
    5. Select a certificate template to issue. I always like to use Basic EFS during testing.
    6. Click the Enroll button.
    7. You will be prompted for User name and Password. Type in a valid domain user name and password combination.
    8. You have now successfully enrolled for a certificate against the certification authority.

      NOTE: The same procedures can be used to test computer certificates. The templates that are available must be configured with “Supply in the request” on the template’s Subject Name tab. Also, the user account used to authenticate to the CEP and CES services must have Enroll permissions on the template’s Security tab.

How to deploy the Certificate Enrollment Policy URL to clients:

Now you may be asking “how can I get this configuration deployed to users?” And you may be thinking “these steps are way too complicated for them to follow.” Well, you can export the configuration from the registry and have the users import the settings.

Computer CEP configuration is located here:
HKEY_LOCAL_MACHINE\Software\Microsoft\Cryptography\PolicyServers\<GUID>

User CEP configuration is located here:
HKEY_CURRENT_USER\Software\Microsoft\Cryptography\PolicyServers\<GUID>
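For example, a sketch of that export/import round trip (the .reg file name is arbitrary; the <GUID> subkey comes along automatically when you export the parent key):

```
reg export "HKCU\Software\Microsoft\Cryptography\PolicyServers" cep-user.reg

:: then, on the user's computer:
reg import cep-user.reg
```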

Advanced Configurations

Alright, so you got your non-domain joined client computers requesting and getting certificates over the Internet. Great job! There are two different topics to be discussed for advanced configurations:

1. You have multiple CES servers or multiple authentication methods (Kerberos, Username Password, or Certificate) in your environment.

2. You want to use the CEP and CES services on the same computer, and use a domain or managed service account for the web application pool.

Multiple CES servers or multiple authentication methods:

If you have multiple CES servers that host different authentication methods or one CES server that hosts multiple authentication type web services, you need to be concerned with the priority of the authentication methods you specify. If you recall from the section “Modification of msPKI-Enrollment-Servers attribute”, those changes assume that this is the only CES server and authentication method you are implementing in your environment. If you need to support multiple authentication methods then the CES URI needs to be added in a different way to assign different priorities to each authentication method.

The first thing to understand is that the lower the priority number, the more preferred the CES URI: a priority of 100 is more preferred than a priority of 200. To find out which CES authentication types are assigned to a given CA, and at what priority, run the following CertUtil command:

CertUtil -config "<CA Computer Name>\<CA Name>" -enrollmentServerURL

for example:

CertUtil -config "fab-rt-ca1.fabrikam.com\Fabrikam Root CA1" -enrollmentServerURL

Output:

Figure 15 - EnrollmentServerURL output

As you can see from the output in Figure 15 there are two authentication methods assigned to the Fabrikam Root CA1. You have Kerberos at a Priority of 100, and UsernamePassword at a priority of 200. You can also see that the URL addresses are different. External clients would not resolve the win2k8r2.fabrikam.com DNS name, and internal clients would prefer Kerberos authentication over the UserNamePassword method because the priority for Kerberos is lower. Failure to set the priorities correctly could cause domain joined client computers to prefer UsernamePassword authentication method over Kerberos, and you will get a lot of calls to the help desk asking why the computer is constantly asking for credentials.

Domain account running the application pool

Alright, if you are reading this section, you must be really serious about security and about using domain-based service accounts to run application pools. As stated earlier, if the CEP and CES web services run on the same server, the application pools for both services must use the same account.

  1. Open Internet Information Services (IIS) Manager snapin.
  2. Highlight the Application Pools tree node.
  3. On the right hand pane, you will see some application pool accounts. You are interested in WSEnrollmentPolicyServer and WSEnrollmentServer.
  4. Do the following for each application pool.
    1. Right click on the application pool, and select Advanced Settings.
    2. Select Identity under the Process Model node, and click the ellipsis (…) button. See figure 16.

      Figure 16 - Application Pool Identity
    3. Select the Custom account radio button.
    4. Click on the Set… button.
    5. Type in the Application Pool account in the form of domain\user name.
    6. Type in the password twice.
    7. Click the OK button three times.
  5. Open Computer Management (Compmgmt.msc).
  6. Add the service account to the local IIS_IUSRS group.
  7. Open an elevated command prompt, and type IISRESET.

So in a nutshell, that’s pretty much how you can configure CEP/CES to allow users on non-domain joined clients to enroll for certificates against an internal Enterprise CA. Stay tuned in the future when we’ll cover some other scenarios featuring CEP/CES in Windows Server 2008 R2.

I hope that you have enjoyed learning how to use CEP and CES to extend certificate issuance to your users and customers.

Rob “Unjoined” Greene

FRS to DFSR Migration Tool Released


Heya, Ned here again. I am out of my barrel and on the road again this week, coming to you live from Las Colinas, Texas. As you may have noticed, I recently wrote a TechNet whitepaper on how to migrate your old custom FRS data to DFSR. Hopefully that's been useful.

In the meantime though, Mike Stephens and I also created a free migration tool. Because I am lazy very busy I am simply going to repost from our download page at Code Gallery. I hope you find this useful.

FRS2DFSR.EXE

http://code.msdn.microsoft.com/frs2dfsr

FRS2DFSR is a utility that assists Windows admins in moving from the legacy File Replication Service (FRS) to the newer Distributed File System Replication (DFSR) service. Because FRS is no longer supported except for SYSVOL starting in Windows Server 2008 R2, and because Windows 2000 support ends on July 13, 2010, this utility can help unblock admins from migrating to newer operating systems. The tool is written in C# .NET and can be run on x64 and x86 Windows Server 2003/2008/2008 R2 and Windows Vista/7 with the DFSR RSAT installed. It requires domain admin rights and is command-line only. FRS2DFSR.EXE exports an existing File Replication Service replica set, deletes the replica set in Active Directory, and creates a DFS Replication group with the same servers, folders, connections, and settings. See the release notes for utility limitations. This tool is not used for SYSVOL migrations; see below for steps on using DFSRMIG.EXE in that scenario.

Important support information:

This tool is provided as-is, without warranty, and is not supported by Microsoft Commercial Technical Support (aka CTS, PSS, CSS, EPS). No official support cases may be opened against this tool. It is intended only as a fully functional sample. Use the Discussions and Issue Tracker tabs to report issues.

For a supported set of steps for migrating from FRS to DFSR for custom sets, see:

DFS Operations Guide: Migrating from FRS to DFS Replication
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=a27008a8-4b28-49cc-80b5-05b867440af9

To migrate SYSVOL, use the DFSRMIG.EXE tool included in Windows and reference:

SYSVOL Replication Migration Guide: FRS to DFS Replication
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=df8e5e84-c6c6-4cef-9dab-304c92299804

- Ned "and Mike Stephens" Pyle

Designing and Implementing a PKI: Part III Certificate Templates


Chris here again. In this segment I will be covering setting up certificate templates on the newly created CA hierarchy. Enterprise Certification Authorities (CAs), as well as clients, utilize what are called certificate templates. Certificate templates contain properties that would be common to all certificates issued by the CA based on that template. Windows includes several predefined templates, but Administrators also have the ability to create their own templates specific for their enterprise. When requesting a certificate, a client can just specify the template name in the request and the CA will build the certificate based upon the requestor’s information in Active Directory and the properties defined in the template.

Certificate templates are also used to define the enrollment policy on the CA. First, an Enterprise CA can only issue certificates based upon the templates it is configured to use. For example, if the CorpUserEmail template is not available on the CA, then the CA cannot issue certificates based on that template. Second, permissions set on the certificate template’s Active Directory object determine whether or not a user or computer is permitted to request a certificate based on that template. If a user does not have Enroll permission on a particular template, the CA will deny any request submitted by the user for a certificate based on that template.
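To make those two enrollment gates concrete, here is a minimal sketch. The names and data structures are hypothetical; a real CA evaluates full Active Directory ACLs on the template object, not simple sets.

```python
# Toy model of the two enrollment gates an Enterprise CA applies.
# Hypothetical names and data structures: a real CA evaluates the ACL on the
# certificate template's Active Directory object, not Python sets.

def can_issue(ca_templates, template_acls, template, requester):
    """Return (allowed, reason) for a certificate request."""
    # Gate 1: the CA must be configured to issue this template.
    if template not in ca_templates:
        return False, "template not configured on this CA"
    # Gate 2: the requester must hold Enroll permission on the template.
    if requester not in template_acls.get(template, set()):
        return False, "requester lacks Enroll permission"
    return True, "request accepted for processing"

ca_templates = {"DomainControllerAuthentication", "FabrikamWebServer"}
template_acls = {"FabrikamWebServer": {"Fabrikam Web Servers"}}

print(can_issue(ca_templates, template_acls, "CorpUserEmail", "jdoe"))
print(can_issue(ca_templates, template_acls, "FabrikamWebServer", "Fabrikam Web Servers"))
```

Both checks must pass before the CA even begins building the certificate from the template properties and the requestor's Active Directory information.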

As the Windows Server operating system has evolved over the last ten years, so has the concept of the certificate template. Currently, there are three versions of templates:

Version 1 templates were introduced in Windows 2000, and can be used by Windows 2000, Windows Server 2003 (R2), and Windows Server 2008 (R2) Enterprise CAs. The Active Directory objects for version 1 templates are created the first time an Enterprise CA is installed in the forest. These templates were designed to reflect the most common scenarios for digital certificates in the Enterprise. Unfortunately, if you don’t like the settings we selected you’re pretty much out of luck. Creating new v1 templates, or editing the existing templates, is not supported. The only supported customization is to the permissions on the template.

Version 2 templates were introduced in Windows Server 2003 and are a vast improvement over v1 templates. First and foremost, v2 templates can be modified by an Enterprise Admin. In addition, the Admin can duplicate an existing v1 or v2 template to create a new v2 template, and then customize the result. Finally, v2 templates expose a larger number of properties that can be configured, and also expose some controls to take advantage of some other new features introduced in Windows Server 2003. One of these features, for example, is key archival. Version 2 templates can be used by Windows Server 2003 and Windows Server 2008 Enterprise or Datacenter Editions. On Windows Server 2008 R2, v2 templates can be used by a CA installed on Standard, Enterprise, Datacenter, Foundation and Server Core Editions.

Version 3 templates were introduced in Windows Server 2008. Version 3 templates have all the features of a version 2 template with two major additions. First, v3 templates support the use of Crypto Next Generation (CNG) providers, which means that the certificates support Suite B algorithms based on Elliptical Curve Cryptography (ECC). Second, v3 templates have a setting that instructs Windows to grant the Network Service account access to the private key created on the requesting computer. This is great for those certificates that will be used by applications or services that run as Network Service rather than Local System. Version 3 templates are supported by CAs installed on Windows Server 2008 Enterprise and Datacenter Editions. They are also supported by CAs installed on Windows Server 2008 R2 Standard, Enterprise, Datacenter, Foundation and Server Core Editions.

For a complete table of Windows Server SKU and the features supported by it, check out this blog post by the Windows PKI development team.

Deploying Certificate Templates

For the purpose of example, I am going to use a fictional company called Fabrikam. The diligent IT staff at Fabrikam have done their research, performed some testing, consulted the auguries, and they’ve determined what types of certificates they need to issue to meet the specified business needs. The next step is to look at what templates are available that they can use out of the box and which ones they need to modify to suit their purposes.

Here’s a quick overview of what Fabrikam determined:

CA Issuance: Domain Controller Authentication, Web Server, and User certificates
Key Archival: The private keys for User certificates should be archived
Domain Controller Authentication template: No additional requirements
Web Server template:

  • A version 2 template must be created from the default Web Server template.
  • The security group Fabrikam Web Servers should have Enroll permissions.
  • The Subject name must contain the DNS name of the web server, and should be provided automatically.

User Certificate Template:

  • A version 2 template must be created from the default User template
  • Key Archival must be implemented for the template

Certificate Templates Setup

Fabrikam has decided that they need to deploy the following certificate templates: Domain Controller Authentication, Web Server, and User. In addition, the fact that Key Archival is to be enabled for the User template means that the CA should also be configured to issue certificates based on the Key Recovery Agent template. (Actually, this is not a requirement if there is another Windows Enterprise CA in the environment that is configured to issue Key Recovery Agent certificates and is trusted to do so.)

Let’s assume that the PKI hierarchy has been set up and is functional. The next step is to configure the certificate templates. Let’s check the configuration of the templates before deploying them.

To manage the certificate templates, you use the Certificate Templates MMC snap-in. In the Certification Authority MMC snap-in, right-click on the Certificate Templates folder and select Manage from the context menu.

clip_image002

In the view pane of the Certificate Templates snap-in you’ll see all the certificate templates available in Active Directory. If you locate the Domain Controller Authentication template and double-click on it, you’ll see the properties available for that template. Our fictional IT staff has already reviewed the settings and determined that no changes need to be made, so we’ll just click Cancel, here.

clip_image004

Next, locate the Web Server template. The default Web Server template already meets the current requirements that arose from an analysis of business needs. However, to allow for future changes Fabrikam has decided that they need to duplicate this default template and create a v2 template.

clip_image006

To duplicate the existing Web Server template and create Version 2 template:

1. Right-click on the Web Server template and select Duplicate Template from the context menu.
clip_image008
Fabrikam still has a lot of Windows Server 2003 servers and Windows XP workstations (but they are steadily upgrading. No, really! They are!! Trust me! Sigh.). This means that we can’t use the latest and greatest v3 templates available on our Windows Server 2008 CA. We’ll have to specify that we’re creating a template for Windows 2003 Server, Enterprise Edition, which will create a v2 certificate template.
clip_image010

2. We’ll give a new name to the template: Fabrikam WebServer.
clip_image012

3. Clients within Fabrikam will connect to the web servers via the server’s DNS name. This means that the requesting server’s fully qualified DNS name must be in the Subject of the certificate it receives. To meet this requirement, click on the Subject Name tab and select Build from this Active Directory information. For the Subject Name Format, select DNS Name. Finally, deselect all of the check boxes under Include this information in the alternate Subject name.
clip_image014

Now that the new template is configured per the specified requirements, we need to set the security. The computer account for a particular web server will be the principal enrolling for the Fabrikam WebServer template, so we have to make sure that all the web server computer accounts have Enroll permission on the new template. Fabrikam, luckily, has a Security Group containing all of their web servers called, oddly enough, Fabrikam Web Servers. We can simply grant the necessary permissions to that group.

 

  1. In the template properties, select the Security tab, and click Add…
  2. Enter the group name (Fabrikam Web Servers) and click the Check Names button.
  3. After the name of the security group is resolved, click OK.
  4. Grant the group Enroll permission.

    The permissions in the Security tab should look like this when these changes are complete.

    clip_image016

    Once all the necessary changes have been made, click OK to commit the new template and save it to Active Directory. The Fabrikam WebServer template is now ready to be added to the CA.

User Certificate Template

We’ll use essentially the same process to duplicate the default User template and modify the resulting v2 template to suit Fabrikam’s requirements.

Just as with the default Web Server template, we’ll duplicate the existing User template to create the custom v2 template. We need to do this because the default User template is a v1 template, so its properties cannot be modified. One of our requirements is to enable Key Archival, which requires configuring a setting in the template, so a v2 template is required.

To create and configure our new User template:

  1. Select the User template, right click on it, and select Duplicate Template from the context menu.

    clip_image018
  2. Select Windows 2003 Server, Enterprise Edition to create a v2 template.

    clip_image020
  3. Change the Template Display name to Fabrikam User.

    clip_image022
  4. Navigate to the Request Handling Tab, and select Archive subject’s encryption private key to enable key archival for this template.

    clip_image024
  5. Next, set permissions on the new template. Domain Users will already have Enroll permission, but since this certificate will be deployed via user Autoenrollment, Domain Users will also require Autoenroll permission. The permissions, when set properly, should look like this:

    clip_image026

    Once all the necessary changes have been made, click OK to commit the new template and save it to Active Directory. The Fabrikam User template is now ready to be added to the CA.

Key Recovery Agent Certificate Template

Although this template was not mentioned as one of Fabrikam’s requirements, at least one Key Recovery Agent certificate must be issued to support Key Archival. This step is only necessary if there is not another Windows Enterprise CA configured to issue certificates based on the Key Recovery Agent template.

For this example, however, let’s assume that there is not and go ahead and configure the Key Recovery Agent template. The only setting that requires modification is the permissions. We’ll assign enroll permissions to the Fabrikam KRA security group so that members of that group can enroll for a Key Recovery Agent certificate.

  1. Open the Key Recovery Agent certificate template by double-clicking on it, select the Security tab, and click Add…
  2. Enter the name Fabrikam KRA and click the Check Names button.
  3. After the name of the security group is resolved, click OK.
  4. Check the Enroll permission.

Configuring the CA to issue certificates

To configure the CA to issue the desired certificate templates, right-click on the Certificate Templates folder, select New, then select Certificate Template to Issue from the context menu.

clip_image028

Then select the certificate templates you wish to issue (hold down the Ctrl key to select multiple templates) and click OK.

clip_image030

This CA can now issue certificates based on the selected certificate templates.

clip_image032

Conclusion

That’s really all there is to it. While in this segment we only modified a few properties of our templates, in the vast majority of cases there should be no need to make extreme changes. The default templates should be sufficient for most implementations, and the changes we made were more to ease certificate deployment than to create truly custom templates. Perhaps in a later blog post we’ll cover some of the more esoteric settings. However, this shouldn’t stop you from exploring on your own using the online help.

In Part IV of this series we’ll cover implementing Web Enrollment and Key Archival.

New Directory Services Content 5/16-5/22


One Stop Shop for Windows Time Information


Hi folks, Ned here again. After much noodling and work here with our TechNet writer team, there is a new, consolidated set of info for Windows Time (w32time) in all of our operating systems, including Windows 7 and Win2008 R2. All of it can be found here:

Windows Time Service Technical Reference

This includes updated info on:

  • Where to Find Windows Time Service Configuration Information
  • What is the Windows Time Service?
  • Importance of Time Protocols
  • How the Windows Time Service Works
  • Windows Time Service Tools and Settings

I think you'll find this useful; make sure to give it a look. A huge thanks to Bob Drake, Kurt, and Jarrett for making this happen.

PS: The phrase "one stop shop" is the pet peeve of David Fisher. If you ever find yourself talking to him, make sure you use it often.

- Ned "tick tock you don't stop" Pyle

New Directory Services Content 5/23-5/29


KB

983456 – SMTP configuration options are reset in Windows Server 2008 R2, Windows Server 2008 Service Pack 1 and Service Pack 2, after you install the MS10-024 update (976323)

981482 – In Windows Server 2008 or Windows Server 2008 R2 environment, if the network environment is set to enable Delay ACK and storage is connected with iSCSI, an iScsiPrt error is output to the System Event Log when a general operation is executed

Blogs

Designing and Implementing a PKI: Part III Certificate Templates

FRS to DFSR Migration Tool Released

Enabling CEP and CES for enrolling non-domain joined computers for certificates

Hey, Scripting Guy! Weekend Scripter: Using the Get-ACL Cmdlet to Show Inherited Permissions on Registry Keys

Offline Folders and Folder Redirection with Anjli

Interview on Identity and the Cloud

Group Policy Setting of the week 26 – Do not allow Windows Messenger to be Run

Windows Server 2008 R2 Netsh Technical Ref – now available for download

Inside the new PowerShell 2.0 commands for Active Directory

Federation Trust Partner Certificates

Kim Cameron on Identity, Federation and the Cloud

How to apply a Group Policy Object to individual users or computer

Transitioning your Active Directory to Windows Server 2008 R2

What's New in Roaming User Profiles in Windows 7

Information Card Issuance CTP

Managing Windows Server 2008 R2 using PowerShell

Work Remotely with Windows PowerShell without using Remoting or WinRM

TechNet Wiki Pick of the Week: DirectAccess and Teredo Adaptor Behavior

Issuing Information Cards with ADFS 2.0

PowerShell Modules versus Snapins

FAQ: Microsoft Hyper-V Server 2008 R2

Deployment guides for Remote Desktop Services in Windows Server 2008 R2 and for Terminal Services in Windows Server 2008 are now available.

Two Minute Drill: The Eventcreate command

Should you install Microsoft Hyper-V on Server Core?

Windows XP SP2 retirement looms, puts users in tough spot

Delete certificate from smartcard with Base Smart Card provider

ADFS V2.0 Lingo

System Center Configuration Manager v.Next Beta 1 - now available

VHD Getting Started Guide – now available

Friday Mail Sack: Walking Tall Edition


Hello folks, Ned here again. After a week in Las Colinas, Texas, the blog migration, and Jonathan’s attempted coup, we are still standing. Since I’m sure your whole day has been designed around this post, I won’t keep you waiting.

Question

I am testing RODCs in a WAN scenario, where the RODC is in a branch site. When the WAN is taken offline, some users cannot log on even though I have cached their passwords. Other users can log on but cannot access other resources using Kerberos authorization, like file shares and what not.

Answer

Make sure that the computers in that branch site are allowed to cache their passwords also. This means that those computers need to be added into the Password Replication Policy allow list via DSA.MSC. For example:

image

image

If a user tries to log on to a computer that cannot itself create a secure channel and log on to a DC, that user will receive the error “The trust relationship between this workstation and the primary domain failed”.

If users can log on to their local computers, but then try to access other resources requiring a Kerberos ticket-granting service (TGS) ticket for those computers, and those computers are not able to log on to the domain, users will see something like:

image

The error “The system detected a possible attempt to compromise security” is the key, the dialog may change – in this case I was trying to connect to a share.

You will also see “KDC_ERR_SVC_UNAVAILABLE” errors in your network captures from the RODC. Here I am using a workstation called 7-04-x86-u to try to browse the shares on a file server called 2008r2-06-fn (which is IP address 10.70.0.106). My RODC 2008r2-04-f has a KDC that keeps getting TGS requests that it cannot fulfill, since that 06 server cannot log on. So now you see all the SMB (i.e. CIFS) related TGS issues below:

image
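The behavior above can be sketched as a toy model. The helper and its messages are hypothetical; a real branch logon involves secure channels, Kerberos ticket exchanges, and the Password Replication Policy evaluated on the RODC. The point it illustrates is from the answer: with the WAN down, both the user's and the workstation's secrets must be cached.

```python
# Simplified model of a branch-site logon while the WAN is offline.
# Hypothetical helper and messages; see the caveats in the lead-in above.

def offline_logon(cached_secrets, user, workstation):
    """Succeed only if BOTH the user and the workstation account secrets
    are cached on the branch RODC."""
    if workstation not in cached_secrets:
        # The workstation cannot build its secure channel via the RODC.
        return "error: trust relationship with the primary domain failed"
    if user not in cached_secrets:
        return "error: user password not cached on the RODC"
    return "logon succeeds"

cached = {"alice", "BRANCH-PC-01$"}  # accounts allowed by the PRP and prepopulated
print(offline_logon(cached, "alice", "BRANCH-PC-01$"))  # both cached
print(offline_logon(cached, "alice", "BRANCH-PC-02$"))  # computer not cached
```

This is why adding only user accounts to the Password Replication Policy allow list is not enough: the branch workstations (and any branch servers users will reach over Kerberos) need to be cached too.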

Question

Does DFSR talk to the PDC Emulator like DFS Namespace root servers do?

Answer

Nope, it locates DCs just like your computer does when you log on – through the DC Locator process. So if everything is working correctly, any DCs in the same site are the primary candidates for LDAP communication.

Question

I understand that DFSR uses encrypted RPC to communicate, but the details are kind of lacking. Especially around what specific cipher suite is used. Can you explain a bit more?

Answer

DFSR uses RPC_C_AUTHN_GSS_NEGOTIATE with Kerberos required, with mutual authentication required, and with impersonation blocked. The actual encryption algorithm depends on the algorithms supported by the operating system’s Kerberos implementation. On Windows Server 2003 that would be RC4-HMAC (AES etypes were not added to Kerberos until Windows Vista/2008; DES exists but would never be used normally). On Win2008 and Win2008 R2 it would be AES 256. DFSR doesn’t really care what the encryption is, he just trusts Kerberos to take care of it all within RPC (and this means that you can replace “DFSR” here with “pretty much any Windows RPC application, as long as it uses Negotiate with Kerberos”). Both AES 128 and AES 256 are very strong block ciphers that meet FIPS compliance, and no one is close to breaking them in the foreseeable future.

Proof!

Question

Not really an AD thing, but is Windows 7 able to use the Novell IPX network protocol?

Answer

Nope. Windows XP/2003 were the last Microsoft operating systems to include IPX support. Novell stopped including IPX when they released their client for Vista/2008:

http://www.novell.com/documentation/vista_client/pdfdoc/vista_client_admin10/vista_client_admin10.pdf

Novell Client for Windows XP/2003 Features Not Included in the Novell Client for Windows Vista/2008

  • IPX/SPXTM protocols and API libraries.

Question

What settings should I configure for Windows Security Auditing? What’s recommended?

Answer

That’s a biiiiig question and it doesn’t have a simple answer. The most important thing to consider when configuring auditing – and the one that hardly anyone ever asks – is “what are you trying to accomplish?” Just turning on a bunch of auditing is wrong. Just turning on one set of auditing you find on the internet, a government website, or through some supposed “security auditing” company is also wrong – there is no one-size-fits-all answer, and anyone who says there is can be discarded.

  • Decide what type of information you want to gain by collecting audit events – what are you going to do with this audit data.
  • Consider the resources that you have available for collecting and reviewing an audit log – not just cost of deployment, but reviewing, acting upon it, etc. Operational costs.
  • Collect and archive the logs using something like ACS. The forensic trail is very short in the event log alone.

Don’t just turn on auditing without having a plan for those three points. Start by reviewing our auditing best practices guide. Then review Eric Fitzgerald’s excellent blog post “Keeping the noise down in your security log.” It has one of the best points ever written about auditing:

“5. Don't enable "failure" auditing, unless you have a plan on what to do when you see one (that doesn't involve emailing me ;-) and you are actually spending time on a regular basis following up on these events.

You might or might not realize, that auditing in general is a potential denial-of-service attack on the system.  Auditing consumes system resources (CPU & disk i/o and disk space) to record system and user activity.  Success auditing records activity of authenticated users performing actions which they've been authorized to perform.  This somewhat limits the attack, since you know who they are, and you've allowed them to do whatever it is that you're auditing.  If they try to abuse the system by opening the audited file a million times, you can go fire them.

Failure auditing allows unauthenticated or unauthorized users to consume resources.  In the worst case, a logon failure event, a remote user with no credentials can cause consumption of system resources.”

Make sure you are not impacting performance with your auditing – another good Eric read here. Understand exactly what it is your auditing will tell you by reviewing:

Finally, for some general sample template security settings, take a look at the Security Compliance Manager tool.

There must have been something in the water this week, as I got asked this by a dozen different customers, askds readers, and MS internal folks. Weird.

Question

When running AD PowerShell cmdlet get-adcomputer -properties * it always returns:

Get-ADComputer : One or more properties are invalid.
Parameter name: msDS-HostServiceAccount
At line:1 char:15
+ Get-ADComputer <<<<  srv1 -Properties *
    + CategoryInfo          : InvalidArgument: (srv1:ADComputer) [Get-ADComputer], ArgumentException
    + FullyQualifiedErrorId :
One or more properties are invalid.
Parameter name: msDS-HostServiceAccount,Microsoft.ActiveDirectory.Management.Commands.GetADComputer

Not using –Properties *, or using other cmdlets, worked fine.

Answer

Rats! Well, this is not by design or desirable. If you are seeing this issue then you are probably using the add-on "AD Management Gateway" PowerShell service on your Win2003 and Win2008 DCs, and have not yet deployed any Windows Server 2008 R2 DCs. You don’t have to roll out Win2008 R2, but you do need to update the AD schema to version 47 – i.e. Windows Server 2008 R2. Steps here, and as always, test your forest schema upgrade in your lab environment first.
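As a rough sketch of why the schema version matters: the forest schema version is the objectVersion attribute on the schema partition, and the values below are the documented ones for each release. The helper name is made up for illustration; the key fact is that msDS-HostServiceAccount arrives with the version 47 (Windows Server 2008 R2) schema.

```python
# Documented objectVersion values for the AD schema partition, mapped to the
# Windows Server release that introduced them. The helper is hypothetical.
SCHEMA_VERSIONS = {
    13: "Windows 2000 Server",
    30: "Windows Server 2003",
    31: "Windows Server 2003 R2",
    44: "Windows Server 2008",
    47: "Windows Server 2008 R2",
}

def supports_get_adcomputer_all_properties(object_version):
    """'-Properties *' requests msDS-HostServiceAccount, added at version 47."""
    return object_version >= 47

for version in (44, 47):
    print(SCHEMA_VERSIONS[version],
          "->", supports_get_adcomputer_all_properties(version))
```

If the check comes back below 47, run the documented schema upgrade (adprep /forestprep from the Win2008 R2 media) in a lab first, then in production.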

Have a nice weekend.

- Ned “not actually walking tall, per se” Pyle


Son of SPA: AD Data Collector Sets in Win2008 and beyond


Hello, David Everett here again. This time I’m going to cover configuration and management of Active Directory Diagnostics Data Collector Sets. Data Collector Sets are the next generation of a utility called Server Performance Advisor (SPA).

Prior to Windows Server 2008, troubleshooting Active Directory performance issues often required the installation of SPA. SPA is helpful because its Active Directory data set collects performance data and generates XML-based diagnostic reports that make analyzing AD performance issues easier, identifying the IP addresses of the highest-volume callers and the type of network traffic that is placing the most load on the CPU. A screen shot of SPA is shown here with the Active Directory data set selected.

image

Those who came to rely upon this tool will be happy to know its functionality has been built into Windows Server 2008 and Windows Server 2008 R2.

This performance feature is located in the Server Manager snap-in under the Diagnostics node. When the Active Directory Domain Services role is installed, the Active Directory Diagnostics data collector set is automatically created under System, as shown here. It can also be accessed by running Perfmon from the Run dialog.

image

Like SPA, the Active Directory Diagnostics data collector set runs for a default of 5 minutes. This duration cannot be modified for the built-in collector; however, the collection can be stopped manually by clicking the Stop button or from the command line. If you need a shorter or longer run time, and manually stopping the collection is not desirable, see How to Create a User Defined Data Collection Set below. Like SPA, the data is stored under %systemdrive%\perflogs, only now under the \ADDS folder. Each collection run creates a new subfolder named YYYYMMDD-####, where YYYY is the year, MM the month, DD the day, and #### starts at 0001.
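The folder-naming convention can be sketched like this. The helper is hypothetical, and it assumes the #### counter restarts each day, which matches the YYYYMMDD-#### pattern described above.

```python
import re
from datetime import date

def next_adds_folder(existing, today):
    """Compute the next PerfLogs\\ADDS report folder name (YYYYMMDD-####)."""
    prefix = today.strftime("%Y%m%d")
    # Collect the #### values already used for today's date.
    used = [int(m.group(1)) for name in existing
            if (m := re.fullmatch(prefix + r"-(\d{4})", name))]
    return "%s-%04d" % (prefix, max(used, default=0) + 1)

print(next_adds_folder([], date(2010, 6, 4)))                 # first run of the day
print(next_adds_folder(["20100604-0001"], date(2010, 6, 4)))  # second run
```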

Once the data collection completes the report is generated on the fly and is ready for review under the Reports node.

Just as SPA could be managed from the command line with spacmd.exe, data collector sets can also be managed from the command line.

How to gather Active Directory Diagnostics from the command line

  • To START a collection of data from the command line issue this command from an elevated command prompt:

logman start “system\Active Directory Diagnostics” -ets

  • To STOP the collection of data before the default 5 minutes, issue this command:

logman stop “system\Active Directory Diagnostics” -ets

NOTE: To gather data from remote systems just add “-s servername” to the commands above like this:

logman -s servername start “system\Active Directory Diagnostics” -ets

logman -s servername stop “system\Active Directory Diagnostics” -ets

This command will also work if the target is Server Core. If you cannot connect using Server Manager, you can view the report by connecting from another computer to the C$ admin share and opening the report.html file under \\servername\C$\PerfLogs\ADDS\YYYYMMDD-000#.

In the event you need a data collection set to run for a shorter or longer period of time, or if some other default setting is not to your liking, you can create a User Defined Data Collector Set using the Active Directory Diagnostics collector set as a template.

NOTE: Increasing the duration that a data collection set runs will require more time for the data to be converted and could increase load on CPU, memory and disk.

Once your customized Data Collector Set is defined to your liking you can export the information to an XML file and import it to any server you wish using Server Manager or logman.exe

How to Create a User Defined Data Collection Set

 

  1. Open Server Manager on a Full version of Windows Server 2008 or later.
  2. Expand Diagnostics > Reliability and Performance > Data Collector Sets .
  3. Right-click User Defined and select New > Data Collector Set.
  4. Type in a name like Active Directory Diagnostics and leave the default selection of Create from a template (Recommended) selected and click Next.
  5. Select Active Directory Diagnostics from the list of templates and click Next and follow the Wizard prompts making any changes you think are necessary.
  6. Right-click the new User Defined data collector set and view the Properties.
  7. To change the run time, modify the Overall Duration settings in the Stop Condition tab and click OK to apply the changes.

Once the settings have been configured to your liking you can run this directly from Server Manager or you can export this and deploy it to specific DCs.

Deploying a User Defined Data Collection Set

  • In Server Manager on a Full version of Windows Server 2008 or later:
    1. Expand Diagnostics > Reliability and Performance > Data Collector Sets > User Defined.
    2. Right-click the newly created data collector set and select Save Template…
  • From the command line

1. Enumerate all User Defined data collector sets

logman query

NOTE: If running this from a remote computer, add “-s servername” to the command to target the remote server

logman -s servername query

2. Export the desired collection set

logman export -n “Active Directory Diagnostics” -xml addiag.xml

3. Import the collection set to the target server.

logman import -n “Active Directory Diagnostics” -xml addiag.xml

NOTE: If you get the error below, then there’s an SDDL string in the XML file between the <Security></Security> tags that is not correct. This can happen if you export the Active Directory Diagnostics collector set under System. To correct this, remove everything between the <Security></Security> tags in the XML file.

Error:

This security ID may not be assigned as the owner of this object.
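If you would rather automate the fix in the note above than edit the XML by hand, a minimal sketch using Python's standard library follows. The sample XML is hypothetical; a real logman export contains many more elements, which this approach leaves untouched.

```python
import xml.etree.ElementTree as ET

def strip_security(xml_text):
    """Remove every <Security> element from a logman-exported data collector
    set definition so it can be imported on another server."""
    root = ET.fromstring(xml_text)
    # Walk each parent and detach its <Security> children in place.
    for parent in root.iter():
        for child in list(parent):
            if child.tag == "Security":
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

sample = ("<DataCollectorSet><Name>AD Diagnostics</Name>"
          "<Security>O:BAG:S-1-5-...</Security></DataCollectorSet>")
print(strip_security(sample))
```

Run it over addiag.xml before `logman import`, and the "security ID may not be assigned" error goes away because no SDDL string is imported at all.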

4. Verify the collector set is installed

 logman query

5. Now that the data collector set is imported you’re ready to gather data. See How to gather Active Directory Diagnostics from the command line above to do this from the command line.

Once you’ve gathered your data, you will have these interesting and useful reports to aid in your troubleshooting and server performance trending:

image

image

In short, all the goodness of SPA is now integrated into the operating system, not requiring an install or reboot. Follow the steps above, and you'll be on your way to gathering and analyzing lots of performance goo.

David “highly excitable” Everett

How to Virtualize Active Directory Domain Controllers (Part 1)


Hello everyone, this is Shravan from the Active Directory team and Jason from the System Center VMM team here at Microsoft. We will be discussing a scenario that comes up often: how to migrate Active Directory domain controllers to virtual machines.

Why Now?

Reduce cost! Reduce cost! Reduce cost! It’s an old adage. When this conclusion reaches the folks who work within large data centers, it means a big push to consolidate the space, cost, and energy consumed by the big beefy servers. Virtualization serves as a good method to optimize the use of server resources, but data center administrators need to be cautious as they proceed. So let’s discuss some of the common concerns regarding virtualized domain controllers: when, where, and how to move those resources to virtual hardware.

How to plan?

When introducing virtualized DCs, one needs to think about virtual DCs the same way one thinks about scalability planning with physical DCs, with the extra dimension of the virtualization platform. Conventional wisdom says not to put all your eggs in one basket and to avoid single points of failure as much as possible. Some examples of these single points of failure for physical DCs are as follows:

  • All DC's in the same data center
  • All DC’s on the same network switch
  • All DC’s on the same power grid
  • All DC’s same make/model of hardware etc.

Administrators have learned to avoid these pitfalls by adequately planning the resources. Taking this to the next level, the same applies to the virtualized DC’s as well. Here are some examples of single points of failure specifically to the virtualized DC’s:

  • Multiple DC's on a common host virtual server
  • Multiple DC’s using the same hard disk spindle
  • Multiple DC’s using the same network adaptor on a virtualized host
  • Multiple DC’s hosted on different hosts but using single UPS for power failures

One of the most obvious single points of failure is when the machine on which all the virtualized guests run fails, or when the virtualization solution itself fails. This event takes every virtual machine hosted by that machine offline. That might sound scary, but this risk is actually relatively easy to handle: redundant capacity and regular backups of the virtualized operating systems (together with the virtualized applications) guard against data loss and downtime from this single point of failure.

Another question is in what order to virtualize the DCs in the hub and branch sites. The same considerations that went into placing the physical DCs in each site need to be revisited. There may be specific cases that call for a specific plan. Our general recommendation is to start by optimizing the number of DCs needed in the branch office sites, constantly testing the load-bearing capacity at each step, and then virtualize the DCs in the hub site. Performing the steps in this bottom-up fashion ensures you don’t starve the branch sites while virtualizing your hub DCs. As always, nothing beats comprehensive testing in your own environment, as one size may not fit all.

Pardon the geek-speak while we review some performance considerations: the peak and steady-state load generated by a collection of VM guests should not exceed the capabilities of the virtualization host and the network infrastructure. Specifically, the collection of VM guests should not exceed the capabilities of the CPU, disk subsystem, memory, and network bandwidth on a common host computer. Some load scenarios can exceed what a DC on a single physical computer can service, so multiple physical or virtual computers may be required. For instance, suppose we have one virtualization host running individual virtual machines in the following roles:

  • Domain Controller (DC)
  • Exchange front-end server
  • Exchange back-end server
  • SQL server

The peak load on the DC guest is not merely dependent on the authentication traffic coming to the DC; the cumulative load on the virtualization host can also affect the capacity of the DC. Therefore, take the total load on the virtualization host into account.

While we have not seen any specific issues with any roles (FSMO, GC, DNS, RODC, etc.) running on virtual servers, please take load and criticality into consideration before you make the switch to virtual or decide to keep a DC on physical hardware.

Regardless of the virtual host software product that you are using, here are some rules on what not to do when hosting virtualized DC guests on VM hosts. These rules include, but are not limited to, the following:

  • Do not stop or pause domain controllers.
  • Do not restore snapshots of domain controller role computers. This action causes an update sequence number (USN) rollback that can result in permanent inconsistencies between domain controller databases. USN rollback is discussed further in this blog.
  • Do not perform ONLINE physical-to-virtual (P2V) conversions. All P2V conversions for domain controller role computers should be done in OFFLINE mode. System Center Virtual Machine Manager enforces this for Hyper-V. Please read further to understand the difference between ONLINE and OFFLINE modes for P2V. For information about other virtualization software, see the vendor documentation. The exception is tools such as disk2vhd, which convert the DC while the source stays online, because the resulting virtual DC is never turned on while connected to the production network.
  • Configure virtualized domain controllers to synchronize with a time source in accordance with the recommendations for your hosting software. For Microsoft Virtual Server or Hyper-V server, turn off host time synchronization from the properties of the VM.
  • If you do not have uninterruptible power supplies (UPS) for your VM hosts or for the storage disk where the Active Directory database resides, ensure write-caching is disabled on the virtual machine’s host computer. Please refer to this link for additional guidance. Conversely, if write caching needs to stay enabled on the VM host that hosts the DC, install a UPS to avoid damage to the DC(s).
  • Virtual DCs are subject to the same backup requirements as physical DCs. Please refer to this TechNet article for details.
  • Be careful when adding the virtualization host as a member of the same domain as the guest DCs it hosts: you may run into a chicken-and-egg problem if no DC is available at boot time for the host.

For more considerations about running domain controllers in virtual machines, see Microsoft Knowledge Base article 888794. Also, see the following TechNet article for additional information:

Deployment Considerations for Virtualized Domain Controllers
http://technet.microsoft.com/en-us/library/dd348449(WS.10).aspx

Two methods to DC virtualization

With all that behind us let’s dig deeper into the two methods on how to introduce virtualized domain controllers into an environment.

1. DCPromo

Stand up a member server in the virtual environment and run dcpromo, configuring it as an additional domain controller that replicates data from another DC in the same domain. If you want to reuse the name of one of the physical DCs, you must first demote the physical DC, rename the virtual server while it is still a member server, and then promote it. If you choose to reuse the name of an existing DC, ensure that end-to-end AD replication of the demotion completes before running dcpromo on the virtualized guest.

2. Physical-to-Virtual (P2V)

As per the VMM 2008 glossary, physical-to-virtual machine (P2V) conversion is “the process of creating a virtual machine by copying the configuration of a functioning physical computer.” In simple terms, we convert a physical domain controller to a virtual domain controller guest using a P2V tool.

Today SCVMM (System Center Virtual Machine Manager) is available from Microsoft, as are similar 3rd-party P2V tools, where you run the tool against a physical server to convert it to a virtual server. Conceptually, it performs a backup of the physical server and restores the machine onto virtual hardware. The end result is a converted virtual domain controller that looks and acts like the original. You then turn off the converted physical DC, connect the virtual DC to the network, and clients see no difference in authentication functionality.

Since most of us are familiar with the dcpromo promote/demote process, we will focus on the second method, the P2V tool. If the P2V conversion goes as expected and there are no problems afterward, there is no service outage other than the window in which the P2V tool performs the backup/restore. A USN rollback will occur if, for some reason, you decide to move back to the physical DC after you have already performed the P2V process and the new virtualized DC has replicated with other DCs. So don’t ever do it.

What’s USN ROLLBACK?

Back to the geek-speak: Active Directory Domain Services (AD DS) uses update sequence numbers (USNs) to keep track of replication of data between domain controllers. Each time that a change is made to data in the directory, the USN is incremented to indicate that a change has been made. For each directory partition that a destination domain controller stores, USNs are used to track the latest originating update that a domain controller has received from each source replication partner, as well as the status of every other domain controller that stores a replica of the directory partition. When a domain controller is restored after a failure, it queries its replication partners for changes with USNs that are greater than the USN of the last change it recorded. USN rollback occurs when the normal updating of the USNs is circumvented and a domain controller tries to use a USN that is lower than its latest update.

If you are still wondering why we are talking about USN rollback with our P2V tool, remember that it performs a backup of the physical DC and restores it to the virtual DC. If the virtual DC has replicated with the rest of the DCs and we try to reinstate the physical DC and bring it online, it will detect that the highest USN it has for itself is lower than what the others have recorded for it. When this happens, the physical DC detects that it is in a USN rollback state, stops replication, and pauses the Netlogon service at machine startup. A USN rollback can also occur on the virtual DC if the physical DC isn’t turned off immediately after the P2V finishes taking its backup of the original.
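Stripped to its essentials, the detection works like a per-partner high-water mark. The following toy model (illustrative Python only, not the actual AD DS implementation; the class and field names are invented) shows how a partner notices that a restored DC’s USN has gone backward:

```python
# Simplified model of USN rollback detection. Illustrative only; the real
# AD DS logic uses per-partition up-to-dateness vectors and invocation IDs.

class DomainController:
    def __init__(self, name):
        self.name = name
        self.usn = 0                 # highest USN committed locally
        self.partner_usns = {}       # partner name -> highest USN seen from it
        self.replication_paused = False

    def originate_change(self):
        """Every local directory write bumps the local USN."""
        self.usn += 1

    def replicate_from(self, source):
        """Pull changes from a partner and remember its highest USN."""
        self.partner_usns[source.name] = max(
            self.partner_usns.get(source.name, 0), source.usn)

    def check_partner(self, source):
        """USN rollback: the source claims a USN *lower* than what we have
        already recorded for it, e.g. it was restored from an old image."""
        if source.usn < self.partner_usns.get(source.name, 0):
            source.replication_paused = True   # replication/Netlogon halt
            return "USN rollback detected"
        return "ok"
```

In the P2V scenario, the virtual clone keeps originating changes from the snapshot point and replicates them out; when the old physical image later comes online with the same identity but a lower USN, its partners flag the rollback.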

Please refer to the following TechNet link for a detailed understanding of USN rollback: http://technet.microsoft.com/en-us/library/dd348479(WS.10).aspx

NOTE: In Windows Server 2003 (SP1) and later, USN rollback will be detected and replication will be stopped before divergence in the forest is created, in most cases. For Windows 2000 Server, the updates in Microsoft Knowledge Base article 885875 must be installed to enable this detection. Remember that Win2000 support ends on July 13, 2010 though, so your real answer here is to not be running it at all!

The supported recovery options when in USN Rollback state are pretty limited - you have to forcibly demote the DC, perform a metadata cleanup and re-promote the domain controller.

How to P2V Domain Controllers

During the course of writing this blog, we ran a number of tests with different combinations of hardware, FSMO roles, GCs, domains, etc., and we will share our takeaways from those experiments. For those unfamiliar with SCVMM as a product and how P2V works, the detailed steps of the SCVMM P2V process are thoroughly documented in the following links:

P2V: How to Perform a Conversion
http://technet.microsoft.com/en-us/library/cc917882.aspx

P2V: Converting Physical Computers to Virtual Machines in VMM
http://technet.microsoft.com/en-us/library/cc764232.aspx

One of our customers shared the following link with us, which outlines VMware’s P2V method using online migration: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006996

Please note that ONLINE mode keeps the source and target running at the same time and is not recommended. When using this unrecommended method, it is up to the administrator to keep the network cable disconnected on the respective machines to keep them isolated. Many of our customers find that keeping the new target virtual DC completely isolated from the source physical DC is easier said than done, and there is a big risk of USN rollback if the machines are not isolated, as identified by VMware. We have seen a number of customers attempt an ONLINE P2V and end up in a USN rollback state, leading to forced demotion of the problem DCs.

This is a good place to mention our disclaimer for any 3rd-party virtualization software:

897615 Support policy for Microsoft software running in non-Microsoft hardware virtualization software
http://support.microsoft.com/default.aspx?scid=kb;EN-US;897615

By now, you should be able to identify some of the benefits and pitfalls of going virtual with your domain controllers. Next time we will go into the details of how to perform the offline P2V migration of domain controllers using SCVMM, the requirements on source machines and destination servers, and identifying suitable candidates that can be moved over to the virtual world.

More on this topic in Part 2.

- Shravan Kumar and Jason Alanis.

Friday Mail Sack: Shut Up Laura Edition


Hello again folks, Ned here for another grab bag of questions we’ve gotten this week. This late posting thing is turning into a bad habit, but I’ve been an epileptic octopus here this week with all the stuff going on. Too many DFSR questions though, you guys need to ask other stuff!

Let’s crank.

Question

Is it possible to set up a DFSR topology between branch servers and hub servers, where the branches belong to an affiliate company that is not a member of our AD forest?

Answer

Nope, the boundary of DFSR replication is the AD forest. Computers in another forest or in a workgroup cannot participate. They can be members of different domains in the same forest. In that scenario, you might explore scripting something like:

robocopy.exe /mot /mir <etc>

Question

I was examining KB 822158 – with the elegant title of “Virus scanning recommendations for Enterprise computers that are running currently supported versions of Windows” - and wanted to make sure these recommendations are correct for potential anti-virus exclusions in DFSR.

Answer

They better be, I wrote the DFSR section! :-)

Question

Is there any way to tell that a user’s password was reset, either by the user or by an admin, when running Win2008 domains?

Answer

Yes – once you have rolled out Win2008 or R2 AD and have access to granular auditing, these become two easy events to track once you enable the User Account Management subcategory:

ID      Message
4723    An attempt was made to change an account's password.
4724    An attempt was made to reset an account's password.
 

Once that is turned on, the 4724 event tells you who changed whose password:

clip_image002

And if you care, the 4738 confirms that it did change:

image 

If a user changes their own password, you get the same events but the Subject Security ID and Account Name change to that user.
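If you collect these events programmatically, the classification boils down to the event ID plus whether the subject and target accounts match. Here is a rough sketch (the dictionary keys are assumptions standing in for the real event fields, not an actual event log API):

```python
def classify_password_event(event):
    """Classify a security event given as a dict with keys 'id', 'subject'
    (who performed the action), and 'target' (whose account it was).
    4723 = password change attempt (old password required),
    4724 = password reset attempt (admin-style, no old password)."""
    if event["id"] == 4723:
        kind = "change"
    elif event["id"] == 4724:
        kind = "reset"
    else:
        return None  # not a password-related event we care about
    actor = "self" if event["subject"] == event["target"] else event["subject"]
    return f"{kind} of {event['target']} by {actor}"
```

As the post notes, a self-service change shows the same account in both the Subject and Target fields, which is exactly what the `"self"` branch captures.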

Question

Any recommendations (especially books) around how to program for the AD Web Service/AD Management Gateway service?

Answer

Things are a little thin here so far for specifics, but if you examine the ADWS Protocol specification and start boning up on the Windows Communication Foundation you will get rolling.

Windows Communication Foundation
http://msdn.microsoft.com/en-us/library/dd456779(v=VS.100).aspx

WCF Books - http://www.amazon.com/s/ref=pd_lpo_k2_dp_sr_sq_top?ie=UTF8&cloe_id=05ebc737-d598-45a3-9aec-b37cc04e3946&attrMsgId=LPWidget-A1&keywords=windows%20communication%20foundation&index=blended&pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0672329484&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1NQD69FBHSA2RM8PR97K)

[MS-ADCAP]: Active Directory Web Services: Custom Action Protocol Specification
http://msdn.microsoft.com/en-us/library/dd303965(v=PROT.10).aspx

Remember that we don’t do developer support here on AskDS so you should direct your questions over to the AD PowerShell devs if you get stuck in code specifics.

Question

Is there any guidance around using DFSR over satellite link connections?

Answer

Satellite connections add a unique twist to network connectivity – they often have relatively wide bandwidth compared to low-end WAN circuits, but also comparatively high latency and error rates. When transmitting a packet through a geosynchronous-orbit hop, you hit the limitation of the speed of light – how fast you can send a packet 22,000 miles up and down, then get a reply packet up and down again. And when talking about a TCP conversation using RPC, round-trip time is always part of the equation. You will be lucky to average 1400-millisecond response times with satellite, compared to a frame relay circuit that might be under 50 ms. This also does not account for the higher packet loss and error rates typically seen with satellite ISPs. Not to mention what happens when it, you know, rains :-). In a few years you can think about using medium- and low-earth-orbit satellites to cut down latency, but those are not commercially viable yet; the ones in place have very little bandwidth.

When it comes to DFSR, we have no specific guidance except to use Win2008 R2 (or if you must, Win2008) and not Win2003 R2. That first version of DFSR uses synchronous RPC for most communications and will not reliably work over satellite’s high latency and higher error rates – Win2008 R2 uses asynchronous RPC. Even Win2008 R2 may perform poorly on the lower bandwidth ranges. Make sure you pre-seed data and do not turn off RDC on those connections.
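To see why the round-trip time matters so much more than raw bandwidth here, consider the classic single-connection TCP ceiling of one receive window per round trip. This is a back-of-the-envelope model (real throughput will be lower still once packet loss and retransmits kick in):

```python
def max_tcp_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound for one TCP connection: at most one full receive
    window can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

# Classic 64 KB receive window (pre-window-autotuning stacks),
# comparing the satellite and frame relay RTTs quoted above:
satellite = max_tcp_throughput_bps(65536, 1.4)     # roughly 374 kbps
frame_relay = max_tcp_throughput_bps(65536, 0.05)  # roughly 10.5 Mbps
```

So even on a satellite circuit with plenty of raw bandwidth, a single synchronous RPC conversation crawls, which is why the asynchronous RPC in Win2008/Win2008 R2 DFSR fares so much better than the Win2003 R2 version.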

===

Totally unrelated, I found this slick MCP business card thing we’re doing now since we stopped handing out the laminates. It’s probably been around for a while now, but hey, new to me. :) If you go to https://www.mcpvirtualbusinesscard.com and provide your MCP ID # and Live ID you can get virtual business cards that link to your transcript.

Then you can have static cards: 

Or get fancy stuff like this javascript version. Mouse over the right side to see what I mean:


Oh yeah, did you know my name is really Edward? They have a bunch of patterns and other linking options if you don't want graphics; give it a look. 

 

Finally, I want to welcome the infamous Laura E. Hunter to the MSFT borg collective. Author of and contributor to TechNet Magazine, the AD Cookbook, the AD Field Guide, Microsoft Certified Masters, and a considerable body of AD FS documents, Laura is most famously known for her www.ShutUpLaura.com blog. And now she’s gone blue – welcome to Microsoft, Laura! Now get to work.

Have a nice weekend folks,

- Ned “what does the S stand for Bobby?” Pyle

How to Virtualize Active Directory Domain Controllers (Part 2)


Hello everyone, this is Shravan from the Active Directory team and Jason from the System Center VMM team here at Microsoft. This is part 2 of the blog series where we discuss how to migrate Active Directory domain controllers to a virtualized system. Last time we discussed how to plan for moving physical domain controllers to virtual servers, identified the concerns with USN rollback, and covered the methods of performing P2V migrations. Here we will identify the relevant features of SCVMM, the system requirements for identifying source machines and target servers, and some tricks with the tool that we have learned working with other customers.

Why is SC VMM better?

Basically, SCVMM has two modes of P2V operation: ONLINE and OFFLINE. In ONLINE mode, the source and destination are kept turned on during the migration process, whereas in OFFLINE mode, the source machine is turned off before the restore process completes on the destination (virtual) DC. OFFLINE mode is the recommended P2V method for DCs.

First and foremost, here’s the gotcha: the default selection for SCVMM is ONLINE mode. The option to change it is hidden under the “Conversion Options” expandable menu, as shown below.

image

This is easily overlooked – I and some of my customers have done it – and it results in the warning message below stating “Online physical to virtual conversion of domain controller is not recommended”, but the wizard lets you proceed anyway. Unless you read the warning message and stop to find where the switch to OFFLINE P2V conversion is, you may run into the USN rollback problem we discussed earlier.

image

Going back to the previous screen, we expand the conversion options and choose OFFLINE conversion. It is also recommended to select the checkbox “Turn off source computer after conversion” to avoid the potential for a USN rollback.

image

Additionally, when you choose OFFLINE conversion mode, you are presented with the following UI, which lets you select how to handle IP assignment on the virtual DC.

image

Below I have pasted some important excerpts verbatim from the following article:

P2V: Requirements for Physical Source Computers
http://technet.microsoft.com/en-us/library/cc917954.aspx

Requirements on the Source Machine

To perform a P2V conversion, your source computer:

  • Must have at least 512 MB of RAM.
  • Cannot have any volumes larger than 2040 GB.
  • Must have an Advanced Configuration and Power Interface (ACPI) BIOS; the Vista WinPE image will not install on a non-ACPI BIOS.
  • Must be accessible by VMM and by the host computer.
  • Cannot be in a perimeter network. A perimeter network, which is also known as a screened subnet, is a collection of devices and subnets placed between an intranet and the Internet to help protect the intranet from unauthorized Internet users. The source computer for a P2V conversion can be in any other network topology in which the VMM server can connect to the source machine to temporarily install an agent and can make Windows Management Instrumentation (WMI) calls to the source computer.

The following restrictions apply to P2V operation system support:

  • VMM does not support P2V conversion for computers with Itanium architecture based operating systems.
  • VMM does not support P2V on source computers running Windows NT Server 4.0. However, you can use the Microsoft Virtual Server 2005 Migration Toolkit (VSMT) or third-party solutions for converting computers running Windows NT Server 4.0.
  • VMM 2008 R2 does not support converting a physical computer running Windows Server 2003 SP1 to a virtual machine that is managed by Hyper-V. Hyper-V does not support Integration Components on computers running Windows Server 2003 SP1. As a result, there is no mouse control when you use Remote Desktop Protocol (RDP) to connect to the virtual machine. To avoid this issue, update the operating system to Windows Server 2003 SP2 before you convert the physical computer. As an alternative, you can convert the computer by using VMM 2008 and then deploy the virtual machine in VMM 2008 R2.
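Taken together, the source-machine requirements above amount to a simple pre-flight checklist. A sketch of what such a check might look like follows (the dictionary keys are invented for illustration; this is not a real VMM API):

```python
def p2v_source_issues(source):
    """Return a list of blocking issues for a P2V source machine, per the
    requirements above. 'source' is a plain dict describing the machine;
    its keys are illustrative, not pulled from any real inventory tool."""
    issues = []
    if source["ram_mb"] < 512:
        issues.append("less than 512 MB of RAM")
    if any(v > 2040 for v in source["volume_sizes_gb"]):
        issues.append("volume larger than 2040 GB")
    if not source["acpi_bios"]:
        issues.append("non-ACPI BIOS (WinPE will not install)")
    if source["in_perimeter_network"]:
        issues.append("source is in a perimeter network")
    if source["os"] == "Windows NT Server 4.0":
        issues.append("NT 4.0 is not supported (use VSMT or 3rd-party tools)")
    return issues
```

An empty list means the machine passes these particular checks; anything else should be remediated before you point SCVMM at it.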

Requirements for the Destination Host Server

In VMM, a host is a physical computer on which you can deploy one or more virtual machines. To run P2V, you need a host on which to place the image of the source computer.

Requirements for the host server include:

  • The destination host for a P2V conversion can be running Windows Server 2008 with Hyper-V, Windows Server 2008 R2 with Hyper-V, or Virtual Server 2005 R2 SP1 (or later).
  • The destination host cannot be in a perimeter network.
  • As in any virtual machine creation or migration, the destination host for a P2V conversion must have sufficient memory for the virtual machine in addition to the memory reserved for the host operating system. By default, the amount of memory reserved for the host operating system is 256 MB in VMM 2008 or 512 MB in VMM 2008 R2. If the host does not have enough memory for the virtual machine in addition to the memory reserved for the host, you will get a placement error in the Convert Physical Server Wizard.

Deciding Which Computers to Convert

To successfully perform P2V, you must be able to identify appropriate physical workloads for consolidation into the virtualized environment. This section will help you identify which computers are good candidates for conversion.

Identifying Virtualization Candidates

If you have deployed Microsoft System Center Operations Manager 2007, VMM can help you identify the right physical servers for consolidation based on direct analysis of the performance counters of the target machine or historical performance data stored in the Operations Manager database.

The Virtualization Candidates report helps you identify underutilized computers by displaying average values for a set of commonly requested performance counters for CPU, memory, and disk usage, along with hardware configurations including processor speed, number of processors, and total RAM. To use the Virtualization Candidates report, you must deploy the System Center VMM 2008 Management Pack. For more information about reporting, see Configuring Reporting for VMM.

Prioritizing Virtualization Candidates

When identifying the best candidates for P2V conversion, consider converting these types of computers, in order of preference:

  1. Non business-critical underutilized computers. By starting with the least utilized computers that are not business critical, you can learn the P2V process with relatively low risk. Web servers may make good candidates.
  2. Computers with outdated or unsupported hardware that needs to be replaced.
  3. Computers with low utilization that are hosting less critical in-house applications.
  4. Computers with higher utilization that are hosting less critical applications.
  5. The remaining underutilized computers.
  6. In general, business-critical applications, such as e-mail servers and databases that are highly utilized, should only be virtualized to the Hyper-V platform in the Windows Server 2008 (64-bit) operating system.

Some Problem Cases:

  • Missing Driver Issue:

Since VMM uses WinPE to boot the source when performing an OFFLINE migration, if the drivers for any device on the source machine don’t exist in WinPE, you may get an error similar to the one below:

No compatible drivers were identified for the device: <DEVICE_NAME>

For instance, if the VMM server does not have the driver for the NIC “3COM 3C920 Integrated Fast Ethernet Controller” present on the physical source DC, then you will see an error similar to the one below, because the driver is required in order to boot the physical DC using WinPE.

image 

  • If you receive the error above, copy the drivers for the NIC “3COM 3C920 Integrated Fast Ethernet Controller” to the following folder on the VMM server: <%ProgramFiles%\Microsoft System Center Virtual Machine Manager 2008 R2\Driver Import>. Then click the “Check Again” button, which should retry the process.
  • Disk Issue:

If the primary boot and active partition on a server is FAT32, then SCVMM will be unable to perform the migration.

image

image

While VMM does support migration of FAT32 partitions to the target virtual guest, it does NOT support migration of servers where the FAT32 partition is the boot and active one.

 

That’s it for now. We are certain there are a million other hardware combinations that we have not tested in the experiments above, but we hope to hear back from you with any specific situations you run into during your journey to a virtualized world.

Happy Virtualization!

-Shravan Kumar and Jason Alanis.

Friday Mail Sack: Ride ‘Em Cowboy Edition


Howdy partners. This week we talk event logs, auditing, NTLM “fallback”, file server monitoring, and SCOM 2007 management pack dissection. It was a fairly quiet week for questions since everyone is off for vacation at this point, I reckon. That didn't mean it wasn't crazy at work - our folks take vacation too, and that leaves fewer of us to handle the cases. Hopefully you weren't on hold too long...

Oh, and it’s my fifth anniversary as an employee at Microsoft today. So being from the Midwest and not wanting to do the usual Microsoft M&M cliché, I brought 5 pounds of delicious Hickory Farms meat. It disappeared fast, people here are animals. Sausage-loving animals.

Anyhow, on to the goods.

Question

Is there a way to set security logs to be retained for X days automatically? What about having them automatically archive?

Answer

Starting in Windows Vista we added Group Policy to handle the archiving piece. See:

Computer configuration \ <policies> \ Administrative templates \ Windows components \ Event Log Service \ Security \

                Backup log automatically when full
                Retain old events

This also works for Application, Setup, and System logs. The big old chatty ones.

image

This does not help you with age, but if you are archiving the log every time it fills, you get the same effect. Obviously, you would need to start backing up all these archived event logs and deleting them, or you would risk running out of disk space. And what about Windows Server 2003, you ask? We have a registry key there that does the same thing – see the AutoBackupLogFiles value buried in KB312571.
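If you do go the auto-archive route, a quick back-of-the-envelope calculation helps you plan the disk space before it becomes a problem (the numbers below are purely illustrative, not recommendations):

```python
def archive_disk_usage_gb(log_size_mb, fills_per_day, retention_days):
    """Disk consumed by archived event logs: one file of roughly
    log_size_mb is written each time the log fills and is backed up."""
    return log_size_mb * fills_per_day * retention_days / 1024

# Example: a busy DC filling a 128 MB Security log 4 times a day,
# with archives kept for 90 days before cleanup:
usage = archive_disk_usage_gb(128, 4, 90)   # 45 GB of archives
```

Run the same arithmetic with your own log size and fill rate; if the answer is uncomfortable, that is another argument for the collection-tool approach below.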

Rather than going this route though, I instead suggest deploying some kind of security event collection tool, like System Center 2007’s free ACS component or a third party. It will scale much better and be less of a hassle to maintain. Then you are always intercepting and collecting your security events. Hopefully you have a plan to do something with them!

Question

<A conversation about why you should not skew clocks as that makes Kerberos break, as everyone knows. But then:>

However the vast majority of app servers should “just work” with NTLM fallback when Kerberos doesn’t work, correct?

Answer

Not necessarily! When MS started implementing Kerberos eleven years ago, NTLM was being replaced as the preferred security protocol. However, we knew that a million apps and down-level or 3rd party clients would not be able to handle Kerberos through negotiation. In order to make the experience less painful, we decided that when using the Windows Negotiate security package, we’d allow applications to first try Kerberos and if that failed, then try NTLM. Pretty much any failure was ok, such as the target server not supporting Kerberos or Kerberos being possible but malfunctioning due to environmental problems. If you simply asked for Kerberos only or NTLM only, there was no fallback because you were being specific. Some languages also provide for blocking fallback post negotiation, such as WCF’s ALLOWNTLM=FALSE flag. So NTLM fallback was never guaranteed or even tried in many scenarios. There are a lot of misunderstandings and mythology about this out there, but this is how it works - when it comes to your specific app, just test it under a network capture to see how it behaves.

Then starting with Windows Vista SP1 and Windows Server 2008, we made a significant change: from then on, interactive logon stopped allowing NTLM fallback if Kerberos had errors. So, for example, if someone duplicated a DC’s SPN, the user cannot log on (with the error “The security database on the server does not have a computer account for this workstation trust relationship”); examining their event log would show a KDC 11 error, and you’d see 4625 events in the DC security log. So if Kerberos was supposed to work and didn’t, too bad – no more fallback. Obviously, that is also in place in Windows 7 and Win2008 R2, and for the foreseeable future.

Furthermore, in Windows 7 and Windows Server 2008 R2 we added a new extension to the Negotiate security package to start making fallback less likely everywhere, not just in interactive logon. It is called negoexts and handles things like federation support – from the beginning it has had no concept of fallback at all.

So why change all this? Because it’s more secure. Better to prevent authentication than to allow someone to deliberately break Kerberos and then use that opportunity to come in through a weaker protocol.
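The behavior described above can be summarized in a toy decision function. This is a simplification for illustration only; the real Negotiate package is far more involved, and the version tuple here is a loose stand-in for "Vista SP1/Server 2008 or later":

```python
def negotiate_auth(kerberos_result, logon_type, os_version):
    """Toy model of Negotiate fallback behavior (not the real SSPI code).
    kerberos_result: 'ok' or 'failed'.
    logon_type: 'interactive' or 'network'.
    os_version: a (major, minor) tuple; (6, 0)+ stands in for Vista/2008+."""
    if kerberos_result == "ok":
        return "kerberos"
    # Vista SP1 / Server 2008 and later: no NTLM fallback for interactive
    # logon when Kerberos should have worked but errored out.
    if logon_type == "interactive" and os_version >= (6, 0):
        return "logon denied"
    # Older clients, or non-interactive Negotiate, may still fall back.
    return "ntlm"
```

The point of the model: fallback was never a guarantee, and on modern versions an environmental Kerberos failure at interactive logon simply fails the logon instead of silently downgrading.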

Question

I would like to start examining the File Services management pack and other MP for System Center Operations Manager 2007. I don’t always find complete documentation on what these packs do (or I find this). I’d also rather not download and configure 1.25GB of SCOM trial edition just yet either.

What to do?

Answer

Here’s the trick:

1. Install the System Center Operations Manager 2007 R2 Authoring Resource Kit (free download, very small)

2. Install the management packs you are interested in (such as File Services MP).

3. Start the Authoring Console and load your management pack. Generally, the “Library” MP will contain the majority of info – that’s why it’s bigger than the other files. For our File Services example:

image

image

4. Now in the Health Model area you will see all the monitoring… goo. In this case, DFSR monitoring stuff:

image

5. Now start drilling down below the Aggregate Monitors. There’s a lot to see here. 

image

6.  At a glance, you can see some interesting info about each monitor:

image

7. If you click Edit Monitor, then the Product Knowledge tab, you can see how the monitor works, what the known causes are of the issue, what the resolutions are, and more info to take in. This is the part that makes you smart.

image

image

image

This works anywhere; you don’t even need to install a server – I am doing this all on my Win7 client.

And what this really highlights is just how important using these monitors is. The resolution sections are written by the Product Group to tell you the appropriate way to fix things, and in many cases are also vetted and expanded on by MS Support. I spent a hellish few weeks going through the File Services one, for example: 200 pages of spec, arrrrghhhh! Rather than relying on some uninformed stranger on the Internet, you can instead get the official answer to each problem that SCOM finds, and it can even react on your behalf. It’s slick stuff.

 

Finally, I learned an important lesson today. When you are in a team meeting and you describe some broken process as “a real goat rodeo”, your colleagues will use the opportunity to remind you how short you are using terrible artwork:

clip_image002

And if you complain that the picture isn’t “bling” enough, they will improve it Kanye West style:

clip_image002[4]

 

Have a great weekend folks.

- Ned “Yeeeeeee-hhhhhhaaaaaahhhh!” Pyle

ADMT 3.2 released


Come and get it. Especially you commenters that liked to swear at me anonymously :-). Besides now supporting Windows Server 2008 R2 and fixing some bugs, this version now supports Managed Service Accounts.

Active Directory Migration Tool version 3.2 - download here.
ADMT 3.2 Migration Guide (DOC) - download here

System Requirements

  • Supported Operating Systems: Windows Server 2008 R2
  • ADMT can be installed on any computer capable of running the Windows Server 2008 R2 operating system, unless it is a Read-Only domain controller or running a Server Core installation.
  • Target domain: The target domain must be running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2
  • Source domain: The source domain must be running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2
  • The ADMT agent, installed by ADMT on computers in the source domains, can operate on computers running Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2.

The updated 3.2 Migration Guide is out – see the download link above.

- Ned "you wouldn't say that to my face!" Pyle

We are hiring

We are hiring again (in your face, 2008 economy!) and if you think you have what it takes to do my job, come show me:

Support Escalation Engineer, Directory Services
https://careers.microsoft.com/Search.aspx#&&p4=all&p0=724746&p5=all&p1=all&p2=all&p3=all

We are looking for talented, experienced, motivated engineers who want to take their career and IQ to the zenith. This is the most challenging technical job you will ever have, but it is also the most rewarding: it is never boring, you work on the most complex environments, and you will learn more in one month about Directory Services than you will in five years out in the field - I personally guarantee that. All this and more can be yours.

It also doesn't hurt that Microsoft benefits are unparalleled and Microsoft is on Fortune's 100 best companies to work for list year after year. :-)

These positions are in Charlotte, NC and Las Colinas, TX. If you are anywhere in the Southeast or Southcentral US, you're just a few hours away from changing your life forever. Come and get it.

- Ned "I charge for autographs" Pyle


Announcing the Group Policy Search service

Hello, Kapil here. I am a Product Quality PM for Windows here in Texas [i.e. someone who falls asleep cuddling his copy of Excel - Ned]. Finding a group policy when starting at the "is there even a setting?" ground zero can be tricky, especially in operating systems older than Vista that do not include filtering. A solution that we’ve recently made available is a new service in the cloud:

Group Policy Search

With the help of Group Policy Search you can easily find existing Group Policies and share them with your colleagues. The site also contains a Search Provider for Internet Explorer 7 and Internet Explorer 8 as well as a Search Connector for Windows 7. We are very interested in hearing your feedback (as responses to this blog post) about whether this solution is useful to you or if there are changes we could make to deliver more value.

Note - the Group Policy search service is currently an unsupported solution. At this time the maintenance and servicing of the site (to update the site with the latest ADMX files, for example) will be best-effort only.

Using GPS

image

In the search box you can see my search string “wallpaper” and below that are the search suggestions coming from the database.

In the lower left corner you see the search results, and the first result is automatically displayed on the right-hand side. The search phrase is highlighted, and in the GP tree the displayed policy is marked in bold.

Note: Users often overlook the language selector in the upper right corner, where one can switch the policy results (not the whole GUI itself) to “your” language (sorry for having only UK English and not US English ;-) [Whut the heck ur yoo tawkin' 'bout - Ned])

image

Using the “Tree” menu item you can switch to the “registry view”, where you can see the corresponding registry key/value, or you can reset the whole tree to the beginning view:

image

In the “Filter” menu, you can specify which products you want to search. (That is, if you select IE7, it will find all policies usable with IE7, not only those exclusive to IE7 and unavailable in IE6 or IE8; this is done using the “supported on” values from the policies.)

image
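In other words, the filter is an inclusion test against each policy’s “supported on” list. Here’s a tiny sketch of that semantics with made-up policy data (Python just for illustration; these are not the real policy names or data structures behind GPS):

```python
# Hypothetical sketch of the GPS "Filter" semantics: selecting a product
# returns every policy whose "supported on" list includes that product,
# not only policies exclusive to it. Policy data here is invented.
policies = {
    "Disable Changing Home Page": ["IE6", "IE7", "IE8"],
    "InPrivate Browsing":         ["IE8"],
    "Pop-up Blocker":             ["IE6", "IE7", "IE8"],
}

def usable_with(product):
    """Return all policy names whose supported-on list contains product."""
    return sorted(name for name, supported in policies.items()
                  if product in supported)

print(usable_with("IE7"))  # includes IE6-era policies that still apply to IE7
```

So filtering on IE7 still returns policies introduced for IE6, as long as their “supported on” value covers IE7.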

In the “copy” menu you can select the value from the results that you want to copy. Usually “URL” or “Summary” is used (of course you can easily select and CTRL+C the text from the GUI as well):

image

In the “settings” menu you can add the search provider and/or Connector.

image

Upcoming features (planned for the next release)

  • “Favorites” menu, where you can get a list of some “interesting” things like “show me all new policies IE8 introduced”:

image

  • “Extensions” menu:

image

  • We will introduce a help page describing how to use GPS.

GPS was written by Stephanus Schulte and Jean-Pierre Regente, both MS Internet Explorer Support Engineers in Germany. Yep, this tool was written by Support for you. :-) 

The cool part – it’s all running in:

image 

Kapil “pea queue” Mehra

Friday Mail Sack: 1970’s Conversion Van Edition

Hello folks, Ned here again with another ridiculously overdue Friday Mail Sack. This week we talk about patching, admin rights, Kerberos, hiring, ADMT, and PKI. Next week we talk about… nothing. I will be out celebrating an Important Wife Birthday™ and unless Jonathan takes pity on you, there will be crickets. So bother him A LOT for me, would you?

Now…let’s get groovy.

Question

What are the best practices for installing security updates on Domain Controllers? I always transfer the FSMO roles before rebooting any DC, is it correct, wrong? Is there anything else I should monitor or do before or after the restarts?

Answer

There’s no requirement that you move the FSMO roles as none of them need to be online for general domain functionality in the short term; heck, I had one customer with a PDCE offline for more than a year before they noticed – nice! Even if something awful happens and the DC doesn’t immediately come online, most of the FSMO roles serve no immediate purpose (like Schema Master and Domain Naming Master) or are used in only a periodic/failsafe role (RID, PDCE, Infrastructure) where a few minutes won't matter.

The important things are pretty common sense, but I’ll repeat them:

1. Make sure not all DC’s are rebooted at once; stagger them out a bit.
2. Make sure clients are not pointing to DC’s acting as DNS servers that are all being rebooted at once.
3. Make sure you are using a patching system so you don’t miss DC’s; these include WSUS, SCCM, or a third party.
4. Do it all off hours to minimize service interruption and maximize recovery time if a DC doesn’t want to come back up!
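To make point 1 concrete, here’s a minimal sketch (hypothetical DC names, Python purely for illustration – your patching system does the real scheduling) of splitting DCs into reboot waves so you never take them all down at once:

```python
def reboot_batches(dcs, batch_size=2):
    """Split a list of DCs into staggered reboot waves.

    Each wave is rebooted and verified back online (advertising DNS,
    SYSVOL, etc.) before the next wave begins. Names are made up.
    """
    return [dcs[i:i + batch_size] for i in range(0, len(dcs), batch_size)]

waves = reboot_batches(["DC1", "DC2", "DC3", "DC4", "DC5"])
for n, wave in enumerate(waves, 1):
    print(f"Wave {n}: {wave}")
```

The same grouping idea applies to point 2: make sure the DNS servers your clients point to never all land in the same wave.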

Question

What group do I use to install security updates on DC’s and member servers if I do not want those users to be Administrators?

Answer

It’s called “Power Unicorn Operators”.

:-D

No such group. Non-admins cannot install patches and security updates, and this is very much by design. If they could, they could also uninstall them – making a system insecure – or install malware masquerading as patches and security updates, compromising a system. Use WSUS (free), SCCM ($), Automatic Updates with “download and install” (free), or a third party ($). Installing updates by hand is only going to work for admins, but even then it’s a poor management solution. Just ask all my Conficker-infected customers that were using that methodology…

Man, what a kick-$%# group that would be!!!

image

Question

I want to create a startup script via GPO. When I use the DC's FQDN in the path the script runs just fine - i.e. \\dc1.contoso.com\sysvol\netlogon\script1.cmd But when I specify DC1's IP address, the script silently fails: \\10.20.30.40\sysvol\netlogon\script1.cmd

I suppose it is an authentication issue (Kerberos?), but I cannot prove it – am I right?

Answer

You are correct, it is Kerberos. :-)

When a domain-joined client starts up and talks to an AD DC, it must use Kerberos as NTLM is not allowed for computer-to-computer communication. When given a network resource, it needs to be able to pass that host and service info off to the KDC to request a TGS ticket. For that to work, you have to be able to take that computer/service info and use it to find a Service Principal Name, and that starts with a computer or user principal.

So when you give it an IP address, there is no way to get an SPN, and therefore no way to get Kerberos. So it fails, expectedly and by design. You need to use the FQDN (or, if you must, the NetBIOS name). You will see all of this in a network capture as well.
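As a rough illustration – not actual Windows client code, just the mapping idea – here’s how the target of a UNC path maps, or fails to map, to an SPN: the host portion becomes `cifs/<host>`, and a bare IP address gives Kerberos nothing to look up in the directory.

```python
import re

def spn_for_unc(unc_path, service="cifs"):
    """Derive the SPN a Kerberos client would request for a UNC path.

    Hypothetical sketch: the real work happens inside the Windows
    SMB/Kerberos stack, but the host-to-SPN mapping is the same idea.
    """
    host = unc_path.lstrip("\\").split("\\")[0]
    # An IP address maps to no principal in AD, so no SPN can be found;
    # Kerberos fails, and NTLM is not allowed for the computer account.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return None
    return f"{service}/{host}"

print(spn_for_unc(r"\\dc1.contoso.com\sysvol\netlogon\script1.cmd"))  # cifs/dc1.contoso.com
print(spn_for_unc(r"\\10.20.30.40\sysvol\netlogon\script1.cmd"))      # None
```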

Question (Continued)

The key was “…NTLM is not allowed for computer-to-computer communication...”

That really makes sense now :-). But staring at a network trace captured during XP startup, I noticed the PC is looking for the SPN CIFS/10.20.30.40 (when I used the DC’s IP address in the startup script path). I was tempted, so I added this SPN to the servicePrincipalName attribute of the DC’s object in the lab. After restarting both machines, the startup script ran even with the DC’s IP in the file path (i.e. \\10.20.30.40\sysvol\netlogon\script1.cmd).

Sounds logical, but is it practical? I suppose this is one of the “do not ever do this!” things? What would be the impact (security/design) if I add SPN like this?

Answer

Oh you sneaky engineers in the field, always clever and always hacking. :-)

Possible, yes. Practical, no. For a few reasons:

1. The computer will not self-maintain that SPN, unlike the other SPN’s.
2. This means you will need to maintain these SPN’s manually, on all file servers.
3. It also means you need to remember to change this when IP addresses change, or serious confusion will ensue.
4. It also means all IT staff will need to know this, since you will not be there forever and you may like taking vacation from time to time.
5. It also means that if anyone forgets any of this, huge numbers of computers will not be getting policy/scripts and unless you are monitoring all client event logs, you won’t know it.

So all that adds up to not recommended, leaning towards highly discouraged. Not to mention that pointing to a specific server isn’t needed when using DFSN (such as with SYSVOL). This will work perfectly well and guarantee the computer talks to the nearest DC first, then continue to work if that DC is down:

\\contoso.com\netlogon\script1.cmd 

Voila!

Question

I am trying to install ADMT 3.2 and there’s an error that it is not a valid Win32 application.

clip_image002

Answer

Naughty naughty, you did not read the requirements. You are trying to install this on a Windows Server 2008 or Windows Server 2003 computer running x86 (32-bit). ADMT 3.2 only installs on Windows Server 2008 R2. Since that OS is x64-only, the installer was only compiled for 64-bit. When you run an x64 binary on x86, you naturally get that error.

If you tried to install this on Win2003/2008 X64, it would instead say that it requires Win2008 R2.

Question

I’ve not seen the Weekly DS KB articles from the AD Team blog for a while…. Is it because there aren't any? Or are you just no longer providing those?

Answer

No, Craig just got a bit behind. He plans to resume that soon. Soon being sometime between now and the zombie apocalypse.

image 

Holy crap, do you believe we put pictures like that in Office Clipart?! We must give kids nightmares.

Question

Is constrained delegation between different domains (with trusted relationship) ever going to be supported? Maybe Windows Server 2014, Windows Server 2020, Windows server 2096, etc.  ;-)

Answer

While I cannot speak about future releases, this is definitely something we get asked about all the time. When you ask us over and over for something, that helps make it more likely - not guaranteed, mind you - to happen. So if you have a Premier contract, whale on your TAM and let them know you need this functionality and why. The more compelling the argument and the more often it is made, the more likely to get examined for a future release. This goes for pretty much everything in Windows.

Question

We just created a new Win2008 R2 PKI (one Root CA and one Issuing Sub CA). We have two domains, so we placed the CA’s in our child domain, as we have an empty forest root domain. Should we have placed those CA’s in the empty root?

Answer

[This answer courtesy of Rob Greene – Ned]

I would recommend that you put the CA’s in the domain where the largest number of certificate requests will be generated. I say this because if you configure your certificate templates to publish the certificate in AD, the CA computer will contact a local domain controller to get it added to the directory. Less traffic, less hopping, generally more efficient.

The other thing I would recommend is to add the CA’s computer account to the Cert Publishers group in both the child and root domains.  This allows the CA to publish certificates for users / computers in both domains.  

Question

I heard you are hiring, what are some good things to study up on if I want to interview and really rock your face off?

Answer

Start below. These are the core technologies - mainly as represented in XP and 2003 – that every DS Support Engineer has to know inside and out to be worth a darn in MS Support. Once you have those down you can find the Vista/08/7/R2 differences on your own.

TCP/IP
DNS
Active Directory
Kerberos
Active Directory Replication Model
Active Directory Replication Topology
Group Policy
Interactive Logon
Authentication
Authorization
Public Key Infrastructure (PKI)
User Profiles

Note: you can use these free trial editions below in order to do live repros of all this, and repros are highly suggested. Especially with the use of Netmon 3.4 to see how things look on the wire. Running these in Hyper-V, in Virtualbox, etc. will make the materials more understandable.

http://www.microsoft.com/windowsserver2008/en/us/trial-software.aspx
http://technet.microsoft.com/en-us/evalcenter/cc442495.aspx

Next time I’ll give some links to the post-graduate level studying. Most people think they know these above really well… then the hyperventilating starts in the interview.

Until next time,

- Ned “shag carpet” Pyle

Reminder: Windows 2000 Support ends July 13 (and other lifecycle stuff for 2003, XP, SfU)

Ned here again. If you’ve been under a rock for the past year, here it is one more time:

Windows 2000 support ends on July 13, 2010

That is just a week from now. For more info on how to upgrade, migrate, and otherwise remove the last traces of Win2000 from your environment, make sure you head here immediately:

http://support.microsoft.com/win2000

Other major milestones on July 13th include:

  • Windows Server 2003 enters extended support
  • Windows XP SP2 (i.e. without SP3 installed) support ends
  • Windows Services for UNIX 2.0 support ends

For more info on what mainstream, extended, and end of support policies mean, make sure you review:

This is your final warning. The next time I post on this it’s to say goodbye to the venerable operating system that launched Active Directory more than a decade ago.

- Ned “lonesome trail” Pyle

ADMT 3.2: Common Installation Issues

Hello folks, Ned here again. ADMT 3.2 was released a few weeks ago and we now have a decent understanding of the common installation issues that you might run into. Hopefully this helps you get unblocked, or prevents you from getting blocked in the first place. One of these is headed to a KB near you, as it’s too tricky to figure out and people are likely to hit it even when doing everything “right” otherwise.

Onward.

SQL Server 2008 SP1 install returns error "Invoke or BeginInvoke cannot be called on a control until the window handle has been created."

Symptoms

ADMT 3.2 requires SQL Server 2005 Express with SP3 or SQL Server Express 2008 with SP1, and when attempting to install ADMT you are given a link to the 2008 version. However, when attempting to install this download on Windows Server 2008 R2, the installation fails with:

"SQL Server Setup Failure.
SQL Server Setup has encountered the following error:
Invoke or BeginInvoke cannot be called on a control until the window handle has been created."

image

Cause

This error is purely within SQL Express 2008 and has nothing to do with ADMT 3.2 itself. The issue is fixed in "Cumulative update package 4 for SQL Server 2008".

Unhelpfully, KB975055 identifies this error as affecting only Windows 7 and as being fixed by SP1 – both incorrect. The issue does affect Win2008 R2 and is only fixed by the cumulative update.

Resolution

Before installing SQL Server Express 2008 with SP1 (which will fail), first install:

Cumulative update package 4 for SQL Server 2008
http://support.microsoft.com/kb/963036

Once this update is installed (it can even be installed before SQL is installed at all), you can install SQL Server Express 2008 with SP1 without errors, and then install ADMT 3.2 and point it to this instance.

More Information

It's perfectly alright to use SQL Express 2005 SP3 instead of SQL Express 2008 SP1. It will install and run fine on Win2008 R2, and since you are using SQL Express anyway, it's not like you were customizing anything or trying to use existing infrastructure.

ADMT 3.2 install error "admtinst.exe is not a valid Win32 application"

Symptoms

When attempting to install ADMT 3.2, you receive error:

“Admtinst.exe is not a valid Win32 application"

image

Cause

You are attempting to install ADMT 3.2 anywhere but on Windows Server 2008 R2.

Resolution

ADMT 3.2 can only be installed on Windows Server 2008 R2. Don’t fight it!

More Information

This is by design behavior.


ADMT 3.2 install error "The Active Directory Migration Tool v3.1 must be installed on Windows Server 2008."

Symptoms

When installing ADMT 3.2, you get error:

“The Active Directory Migration Tool v3.1 must be installed on Windows Server 2008.”

image

Cause

You are installing ADMT 3.2 on a Windows 7 computer.

Resolution

ADMT 3.2 can only be installed on Windows Server 2008 R2. I really mean it!

More Information

Sigh… an old error string got referenced here by mistake. The block is intentional and expected, however. If you try to install on a Windows Server 2008 R2 core server, it will also say “v3.1” incorrectly.


ADMT 3.2 error "Unable to connect" when connecting to a remote SQL instance

Symptoms

When installing ADMT 3.2 you are prompted with the Database Selection screen:

image 

If you enter a remote “server\instance”, the following error is always returned:

"Unable to connect to 'server\instance', please ensure the SQL Server hosting this instance is running and connections can be made to this instance. [DBNETLIB][ConnectionOpen (Connect().]SQL Server does not exist or access denied."

image

If you use a local instance of SQL running on the computer, no issues.

Cause

The remote instance is running SQL Server Express edition (2005 SP3 or 2008 SP1, it doesn't matter). ADMT is not allowed to connect to remote SQL Express instances. Even if configuration work is done on the Express instance to allow remote connections, the error will then change to:

"The specified instance is hosted on a SQL Server version that is not supported. Use SQL Server 2005 or SQL Server 2008. We recommend you install the latest SQL Server service packs. If you are using SQL Server 2005 Express Edition, you must install SP3 or later. If you are using SQL Server 2008 Express Edition, you must install SP1 or later. Only local installations are supported for SQL Server Express Editions."

image

Note: this is the same error you would get trying to use an unsupported version of SQL, such as SQL 2008 R2 or SQL 2000.

Resolution

If you want to use multiple ADMT 3.2 consoles to connect to a single remote SQL instance, that instance must be running SQL Server 2005 or 2008, and not an Express edition.

More Information

This behavior is by design. The requirement is also documented in the ADMT 3.2 migration guide (http://www.microsoft.com/downloads/details.aspx?familyid=6D710919-1BA5-41CA-B2F3-C11BCB4857AF&displaylang=en), in section "Installing ADMT v3.2":

ADMT v3.2 requires a preconfigured instance of SQL Server for its underlying data store. You should use SQL Server Express. When you use one of the following versions of SQL Server Express, ADMT installation enforces the following service pack requirements:

  • SQL Server 2005 Express must be installed with Service Pack 3 (SP3) or later.
  • SQL Server 2008 Express must be installed with Service Pack 1 (SP1) or later.

Note: If you use SQL Server Express, the ADMT console must be installed and run locally on the server that hosts the SQL Server Express database instance.

As an option, you can use full versions of SQL Server 2005 or SQL Server 2008. In this case, you can install and run the ADMT console on a remote computer, and you can run multiple ADMT consoles on different remote computers. If you use a full version of SQL Server, ADMT installation does not enforce any service pack requirements.

ADMT 3.2 installation incomplete, console error "cannot open database "ADMT" requested by the login"

Symptoms

When installing ADMT 3.2 on a Windows Server 2008 R2 domain controller and using a SQL Express 2008 with SP1 instance, the installation completes without errors.

However, the “Active Directory Migration tool Installation Wizard” completion screen (like below) is not shown:

image

Instead, the completion screen is blank (like below):

image

When then attempting to run the ADMT console, you receive error:

"Active Directory Migration Tool
Unable to check for failed actions. :DBManager.IManageDB.1 : Cannot open database "ADMT" requested by the login. The logon failed."

image

The MMC console then displays:

"MMC could not create the snap-in.
MMC could not create the snap-in. The snap-in might not have been installed correctly.
Name: Active Directory Migration Tool
CLSID: {E1975D70-3F8E-11D3-99EE-00C04F39BD92}"

image

On Windows Server 2008 R2 member servers there are no issues. When using SQL Express 2005 SP3 there are no issues on DC's or member servers.

Cause

A code defect in ADMT's interoperability with SQL Express 2008 SP1 on DC's: the expected "SQLServerMSSQLUser$ComputerName$InstanceName" local group is not created. ADMT requires this group in order to configure specific permissions during install, which allow the ADMT database to be created in the SQL instance. ADMT does not expect the group to be missing, which leads to the blank dialog and an incomplete installation.

I also wrote a KB on this and it’s coming soon.

Resolutions

Workaround 1:

The standard practice is to install ADMT on a member computer in the target domain. Install SQL Express 2008 SP1 on a Windows 2008 R2 member server in the target domain and then install ADMT 3.2 onto that same member server.

Workaround 2:

If you have a requirement to install ADMT 3.2 on a domain controller in order to use command-line or scripted user migrations with SID History, install SQL 2008 SP1 (non-Express edition) on a Windows Server 2008 R2 member server in the target domain and select that remote instance when installing ADMT 3.2 on the DC. Alternatively, you can install SQL Express 2005 SP3 on the DC.

Workaround 3:

If you have a requirement to install ADMT 3.2 and SQL Express 2008 SP1 on the same DC, use the following steps on the target domain DC:

  1. Install Cumulative Update Package 4 for SQL Server 2008 on the DC - http://support.microsoft.com/kb/963036.

  2. Install SQL Express 2008 SP1 on the DC - http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en. Note the SQL Instance name created during the install (default is SQLEXPRESS).

  3. Create a domain local group with the format of "SQLServerMSSQLUser$<DCComputerName>$<InstanceName>". For example, if the DC is named "DC1" and the SQL instance was "SQLEXPRESS" you would run the following command in an admin-elevated CMD prompt:

    NET LOCALGROUP SQLServerMSSQLUser$DC1$SQLEXPRESS /ADD

  4. Retrieve the SQL service SID by using the SC.EXE command with the name of the SQL service instance. For example, if the SQL instance was "SQLEXPRESS" you would run the following command in an admin-elevated CMD prompt and note the returned SERVICE SID value:

    SC SHOWSID MSSQL$SQLEXPRESS

  5. In the Windows directory, create the "ADMT" subfolder and a further subfolder of "Data". For example, you would run the following command in an admin-elevated CMD prompt:

    MD %SystemRoot%\ADMT\Data

  6. Using the SID retrieved in Step 4, set FULL CONTROL permissions on the %SystemRoot%\ADMT\Data folder. For example, if the SID returned in Step 4 was "S-1-5-80-3880006512-4290199581-3569869737-363123133" you would run the following command in an admin-elevated CMD prompt:

    ICACLS %systemroot%\ADMT\Data /grant *S-1-5-80-3880006512-4290199581-3569869737-363123133:F

  7. Install ADMT 3.2 on the DC while selecting the local SQL Express 2008 instance.
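If you have to repeat Workaround 3 on several DCs, steps 3 through 6 can be generated mechanically. This hypothetical Python helper just builds the CMD lines for a given DC name, instance name, and service SID (the names below are examples, and you would still run the output by hand in an admin-elevated prompt – SC SHOWSID has to run before the ICACLS line, since it produces the SID):

```python
def workaround3_commands(dc_name, instance, service_sid):
    """Build the Workaround 3 (steps 3-6) CMD lines for one DC.

    service_sid is the SERVICE SID value returned by SC SHOWSID.
    Sketch only; this does not execute anything.
    """
    group = f"SQLServerMSSQLUser${dc_name}${instance}"
    return [
        f"NET LOCALGROUP {group} /ADD",          # step 3: create the group
        f"SC SHOWSID MSSQL${instance}",          # step 4: get the service SID
        r"MD %SystemRoot%\ADMT\Data",            # step 5: create the folder
        rf"ICACLS %systemroot%\ADMT\Data /grant *{service_sid}:F",  # step 6
    ]

for line in workaround3_commands(
        "DC1", "SQLEXPRESS",
        "S-1-5-80-3880006512-4290199581-3569869737-363123133"):
    print(line)
```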

Wrap Up

That’s everything we’re aware of currently. Like I said above, I have a KB coming shortly for the last issue mentioned, but it’s basically a copy of the above without pretty pictures. The ADMT migration guide will also be updated and (for the short term) the FWLINK that ADMT 3.2 points to when it sends you to a SQL install is going to be sending people to SQL Express 2005 SP3.

- Ned “admit!” Pyle

Migrating Vista and Windows 7 profiles with ADMT 3.2

Ned here again with another ADMT post – this one’s a quickie. V2 profiles were introduced with Windows Vista to allow isolation between XP and newer operating systems. If you haven’t done so already, make sure to review Managing Roaming User Data Deployment Guide; it was written by our very own Mike Stephens. If you have a mixture of V1 and V2 profiles and are planning an ADMT 3.2 migration, make sure you review this updated planning guide:

ADMT 3.2 and Managing Users, Groups, and User Profiles -
http://technet.microsoft.com/en-us/library/cc974331(WS.10).aspx

This covers planning and deployment steps to make sure these profiles migrate correctly. I hope you find it useful.

Ned “that’s enough ADMT for awhile” Pyle
