
How to become a PFE (worth reading if you are job hunting)


Hi all, Ned here. Greg Jaworski has posted an informative read for those looking to join the ranks of Microsoft Premier Field Engineering. They are always hiring and if your New Year's resolution includes travel, career growth, and working for the largest software company in the world, I recommend you give it a look.

How to become a Premier Field Engineer (PFE)

It has useful tips, an explanation of the interview process, and other helpful goo. This comes to you via the new Ask PFE blog.

They also appear to favor those with Polish surnames. I'm not saying it's required, but it seems to help. ;-P

- Ned "Casimir" Pyle 


Friday Mail Sack: It’s a Dog’s Life Edition


Hi folks, Ned here again with some possibly interesting, occasionally entertaining, and always unsolicited Friday mail sack. This week we talk some:

Fetch!

Question

We use third party DNS but used to have Windows DNS on domain controllers; that service has been uninstalled and all that remains are the partitions. According to KB835397, deleting the ForestDNSZones and DomainDNSZones partitions is not supported. Soon we will have removed the last few old domain controllers hosting some of those partitions and replaced them with Windows Server 2008 R2 that never had Windows DNS. Are we getting ourselves in trouble or making this environment unsupported?

Answer

You are supported. Don’t interpret the KB too narrowly; there’s a difference between deletion of partitions used by DNS and never creating them in the first place. If you are not using MS DNS and the zones don’t exist, there’s nothing in Windows that should care about them, and we are not aware of any problems.

This is more of a “cover our butts” article… we just don’t want you deleting partitions that you are actually using and naturally, we don’t rigorously test with non-MS DNS. That’s your job. ;-)

Question

When I run DCDIAG, it returns all of the warning events from the system event log. I have a bunch of “expected” warnings, so this just clogs up my results. Can I change this behavior?

Answer

DCDIAG has no idea what the messages mean and has no way to control the output. You will need to suppress the events themselves in their own native fashion, if their application supports it. For example, if it’s a chatty combination domain controller/print server in a branch office that shows endless expected printer Warning messages, you’d use the steps here.

If your application cannot be controlled, there’s one (rather gross) alternative to make things cleaner though, and that’s to use the FIND command in a few pipelines to remove expected events. For example, here I always see this write cache warning when I boot this DC, and I don’t really care about it:

[screenshot: the expected write cache warning in the DCDIAG system log output]

Since I don’t care about these entries, I can use pipelined FIND (with /v to drop those lines) and narrow down the returned data. I probably don’t care about the time generated, since DCDIAG only shows the last 60 minutes, nor about the event string lines. So with that, I can use this single wrapped line in a batch file:

dcdiag /test:systemlog | find /I /v "eventid: 0x80040022" | find /I /v "the driver disabled the write cache on device" | find /i /v "event string:" | find /i /v "time generated:"

[screenshot: the filtered DCDIAG output. Whoops, I need to fix that user’s group memberships!]

Voila. I still get most of the useful data and nothing about that write cache issue. Just substitute your own stuff.

See, I don’t always make you use Windows PowerShell for your pipelines. ツ
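That said, if you’d rather have one anyway, here’s a rough PowerShell equivalent of the FIND pipeline above (a sketch; substitute your own expected events):

# Drop the lines for events I expect and don't care about
dcdiag /test:systemlog |
    Select-String -SimpleMatch -NotMatch -Pattern 'eventid: 0x80040022',
        'the driver disabled the write cache on device',
        'event string:', 'time generated:'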

Question

If I walk into a new Windows Server 2008 AD environment cold and need to know if they are using DFSR or FRS for SYSVOL replication, what is the quickest way to tell?

Answer

Just run this DFSRMIG command:

dfsrmig.exe /getglobalstate

That tells you the current state of the SYSVOL DFSR migration.

If it says:

  • “Eliminated”

… they are using DFSR for SYSVOL. It will show this message even if the domain was built from scratch with a Windows Server 2008 domain functional level or higher and never performed a migration; the tool doesn’t know how to say “they always used DFSR from day one”.

If it says:

  • “Prepared”
  • “Redirected”

… they are mid-migration and using both FRS and DFSR, favoring one or the other for SYSVOL.

If it says:

  • “Start”
  • “DFSR migration has not yet initialized”
  • “Current domain functional level is not Windows Server 2008 or above”

… they are using FRS for SYSVOL.
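If you need to make that check scriptable across many domains, here’s a minimal sketch that classifies the output (the string matches assume the messages quoted above):

# Classify SYSVOL replication from dfsrmig's output
$state = dfsrmig.exe /getglobalstate | Out-String
if     ($state -match 'Eliminated')          { 'SYSVOL is replicated by DFSR' }
elseif ($state -match 'Prepared|Redirected') { 'Mid-migration: FRS and DFSR both in use' }
else                                         { 'SYSVOL is replicated by FRS' }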

Question

When using the DFSR WMI namespace “root\microsoftdfs” and class “dfsrvolumeconfig”, I am seeing weird results for the volume path. On one server it’s the C: drive, but on another it just shows a wacky volume GUID. Why?

Answer

DFSR is replicating data under a mount point. You can see this with any WMI tool (surprise! here’s PowerShell) and then use mountvol.exe to confirm your theory. To wit:

[screenshots: the DfsrVolumeConfig query output and mountvol.exe confirming the mount point]
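The query behind those screenshots looks something like this (a sketch; DfsrVolumeConfig lives in the root\microsoftdfs namespace, and volumes mounted under a folder come back as \\?\Volume{GUID}\ paths rather than drive letters):

# List the volume each DFSR replicated folder actually lives on
Get-WmiObject -Namespace 'root\microsoftdfs' -Class 'DfsrVolumeConfig' |
    Select-Object VolumeGuid, VolumePath

Running mountvol.exe with no arguments then lists each volume GUID with its mount points, so you can confirm the theory.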

Question

I notice that the "dsquery user -inactive x" command returns a list of user accounts that have been inactive for x number of weeks, but not days.  I suspect that this lack of precision is related to this older AskDS post where it is mentioned that the LastLogonTimeStamp attribute is not terribly accurate. I was wondering what your thoughts on this were, and if my only real recourse for precise auditing of inactive user accounts was by parsing the Security logs of my DCs for user logon events.

Answer

Your supposition about DSQUERY is right. What's worse, that tool's queries do not even include users that have never logged on in its inactive search. So it's totally misleading. If you use the AD Administrative Center query for inactive accounts, it uses this LDAP syntax, so it's at least catching everyone (note that your lastlogontimestamp UTC value would be different):

(&(objectCategory=person)(objectClass=user)(!userAccountControl:1.2.840.113556.1.4.803:=2)(|(lastLogonTimestamp<=129528216000000000)(!lastLogonTimestamp=*)))
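If you want to reuse that filter with a cutoff of your own, here’s a sketch that computes the lastLogonTimestamp FILETIME value instead of hard-coding it (requires the AD module; 90 days is an arbitrary example):

# ToFileTimeUtc() yields the 100-nanosecond FILETIME format lastLogonTimestamp uses
$cutoff = (Get-Date).AddDays(-90).ToFileTimeUtc()
$filter = "(&(objectCategory=person)(objectClass=user)" +
          "(!userAccountControl:1.2.840.113556.1.4.803:=2)" +
          "(|(lastLogonTimestamp<=$cutoff)(!lastLogonTimestamp=*)))"
Get-ADUser -LDAPFilter $filter | Select-Object Name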

You can lower msDS-LogonTimeSyncInterval down to 1 day, which removes the randomization and gets you very close to that magic "exactness" (within 24 hours). But this will increase your replication load, perhaps significantly if this is a large environment with a lot of logon activity. Warren's blog post that you mentioned describes how to do this. I’ve seen some pretty clever PowerShell techniques for this: here's one (untested, non-MS) example that could easily be adapted into native Windows AD PowerShell or just used as-is. Dmitry is a smart fella. If you find scripts, make sure the author clearly understood Warren’s rules.

There is also the option - if you just care about users' interactive logons and you have all Windows Vista or Windows 7 clients - to implement msDS-LastSuccessfulInteractiveLogonTime. The ups and downs of this are discussed here. That is replicated normally and could be used as an LDAP query option.

Windows AD PowerShell has a nice built-in constructed property called “LastLogonDate” that is the friendly date time info, converted from the gnarly UTC. That might help you in your scripting efforts.
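For example (a sketch; note that never-logged-on users have a null LastLogonDate, which compares as earlier than any real date, so they are included too):

# Enabled users with no logon recorded in the last 90 days
Get-ADUser -Filter 'Enabled -eq $true' -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt (Get-Date).AddDays(-90) } |
    Select-Object Name, LastLogonDate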

After all that, you are back to Warren's recommended use of security logs and audit collection services. Which is a good idea anyway. You don't get to be meticulous about just one aspect of security!

Question

I was reading your older blog post about setting legal notice text and had a few questions:

  1. Has Windows 7 changed to make this any easier or better?
  2. Any way to change the font or its size?
  3. Any way to embed URLs in the text so the user can see what they are agreeing to in more detail?

Answer

[Courtesy of that post’s author, Mike “DiNozzo” Stephens]

  1. No
  2. No
  3. No

:)

#3 is especially impossible. Just imagine what people would do to us if we allowed you to run Internet Explorer before you logged on!


 [The next few answers courtesy of Jonathan “Davros” Stephens. Note how he only ever replies with bad news… – Neditor]

Question

I have encountered the following issue with some of my users performing smart card logon from Windows XP SP3.

It seems that my users are able to logon using smart card logon even if the certificate on the user’s smart card was revoked.
Here are the tests we've performed:

  1. Verified that the CRL is accessible
  2. Smartcard logon with the working certificate
  3. Revoked the certificate + waited for the next CRL publish
  4. Verified that the new CRL is accessible and that the revoked certificate was present in the list
  5. Tested smartcard logon with the revoked certificate

We verified the presence of the following registry keys both on the client machine and on the authenticating DC:

HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLValidityExtensionPeriod
HKEY_Local_Machine\System\CurrentControlSet\Services\KDC\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\CRLTimeoutPeriod
HKEY_Local_Machine\System\CurrentControlSet\Control\LSA\Kerberos\Parameters\UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors

None of them were found.

Answer

First, there is an overlap built into CRL publishing. The old CRL remains valid for a time after the new CRL is published to allow clients/servers a window to download the new CRL before the old one becomes invalid. If the old CRL is still valid then it is probably being used by the DC to verify the smart card certificate.

Second, revocation of a smart card certificate is not intended to be usable as real-time access control -- not even with OCSP involved. If you want to prevent the user from logging on with the smart card then the account should be disabled. That said, one possible hacky alternative that would take immediate effect would be to change the UPN of the user so it does not match the UPN on the smart card. With mismatched UPNs, implicit mapping of the smart card certificate to the user account would fail; the DC would have no way to determine which account it should authenticate even assuming the smart card certificate verified successfully.

If you have Windows Server 2008 R2 DCs, you can disable the implicit mapping of smart card logon certificates to user accounts via the UPN in favor of explicit certificate mapping. That way, if a user loses his smart card and you want to make sure that that certificate cannot be used for authentication as soon as possible, remove it from the altSecurityIdentities attribute on the user object in AD. Of course, the tradeoff here is the additional management of updating user accounts before their smart cards can be used for logon.
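Purely as an illustration (the account name and the certificate mapping string below are hypothetical placeholders), pulling an explicit mapping with the AD module looks like:

# Remove one explicit certificate mapping from a user's altSecurityIdentities
# (the X509 issuer/subject string is a made-up placeholder)
Set-ADUser jsmith -Remove @{
    altSecurityIdentities = 'X509:<I>DC=com,DC=contoso,CN=Contoso-CA<S>CN=jsmith'
}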

Question

When using the SID cloning tools like sidhist.vbs in a Windows Server 2008 R2 domain, they always fail with error “Destination auditing must be enabled”. I verified that Account Management auditing is on as required, but then I also found that the newer Advanced Audit policy version of that setting is also on. It seems like the DSAddSIDHistory() API does not consider this new auditing sufficient? In my test environment everything works fine, but it does not use Advanced Auditing. I also found that if I set all Account Management advanced audit subcategories to enabled, it works.

Answer

It turns out that this is a known issue (it affects ADMT too). At this time, DsAddSidHistory() only works if it thinks legacy Account Management is enabled. You will either need to:

  • Remove the Advanced Auditing policy and force the destination computers to use legacy auditing by setting Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings to Disabled.
  • Set all Account Management advanced audit subcategories to enabled, as you found, which satisfies the SID cloning function.

We are making sure TechNet is updated to reflect this as well.  It’s not like Advanced Auditing is going to get less popular over time.

Question

Enterprise and Datacenter editions of Windows Server support enforcing Role Separation based on the common criteria (CC) definitions.  But there doesn't seem to be any way to define the roles that you want to enforce.

CC Security Levels 1 and 2 only define two roles that need to be restricted (CA Administrator and Certificate Manager).  Auditing and Backup functions are handled by the CA administrator instead of dedicated roles.

Is there a way to enforce separation of these two roles without including the Auditor and Backup Operator roles defined in the higher CC Security Levels?

Answer

Unfortunately, there is no way to make exceptions to role separation. Basically, you have two options:

  1. Enable Role Separation and use different user accounts for each role.
  2. Do not enable Role Separation; instead, turn on CA Auditing to monitor actions taken on the CA.

[Now back to Ned for the idiotic finish!]

Other Stuff

My latest favorite site is cubiclebot.com. Mainly because they lead me to things like this:


Boing boing boing

And this:


Wait for the pit!

Speaking of cool dogs and songs: Bark bark bark bark, bark bark bark-bark.

Game of Thrones season 2 is April 1st. Expect everyone to die, no matter how important or likeable their character. Thanks George!

At last, Ninja-related sticky notes.

For all the geek parents out there. My favorite is:

[image: For once, an Ewok does not enrage me]

It was inevitable.

 

Finally: I am headed back to Chicagoland next weekend to see my family. If you are in northern Illinois and planning on eating at Slott’s Hots in Libertyville, Louie’s in Waukegan, or Leona’s in Chicago, gimme a wave. Yes, all I care about is the food. My wife only cares about the shopping; that’s why we’re on Michigan Avenue and why she cannot complain. You don’t know what it’s like living in Charlotte!! D-:

Have a nice weekend folks,

Ned “my dogs are not quite as athletic” Pyle

RPC over IT/Pro


Hi folks, Ned here again to talk about one of the most commonly used – and least understood – network protocols in Windows: Remote Procedure Call. Understanding RPC is a foundation for any successful IT Professional. It’s integral to distributed systems like Active Directory, Exchange, SQL, and System Center. The administrator who has never run into RPC configuration issues is either very new or very lucky.

Today I attempt to explain the protocol in practical terms. As always, the best way to troubleshoot is with an understanding of how things are supposed to work, so that when it fails the reasons are obvious.  If you have a metered or capped Internet connection, read this off hours – it’s a biggee.

Some context

The RPC concept has roots in ARPANET, but got its first business computing use – like so many others – at Xerox PARC as “Courier”. The Microsoft implementation is an extension of The Open Group’s DCE/RPC, sometimes called MSRPC. We further extended that into the Distributed Component Object Model (DCOM), which is RPC and COM. The Exchange folks heavily invested in RPC over HTTP. Microsoft also retains the legacy "RPC over SMB" system, often referred to as Named Pipes. That ends the brochure.

As I began to learn RPC, the first problem I ran into was the documentation. It seemed to come in two forms:

[image: Let’s do lunch – you like human?]

If you actually read the docs, you're let down in the details. They come in two arrangements, both of which completely miss the IT boat:

1. The “it’s all processes and libraries, get to coding” form:

[image: See, it's just code!]

2. The “Jedi network magic” form:

[image: These aren't the computers you're looking for… move along]

I find developers are often like Rain Man: specialist geniuses, bewildered by real life. This isn’t bad documentation, but IT pros aren’t the audience. The developers of RPC are providing a framework and since they live in a perfect world of design where nothing breaks, how it works is not important – they just want you to use the right APIs. The problem is I don’t care about the specifics of MIDL, stubs, or marshaling unless I’m at the point of debugging; I just want to know how it all works in practical networking terms. Then when it breaks, I have somewhere to start, and when I’m designing a distributed system, I’m not setting my customer up for headaches.

Today I focus on MSRPC, as that’s the main RPC protocol of AD components. I may return someday to discuss the others, if you’re interested. And bribe me.

The MSRPC details

Let's start with an analogy: you meet a nice girl and really hit it off. Like an idiot, you manage to lose her phone number. You know that she works for Microsoft though, so you start by looking up the Charlotte office. You call and get a switchboard, so you ask for her by name. The operator tells you her number and then offers to transfer you – naturally, you say yes. Someone answers and you make sure it’s the nice girl by introducing yourself. You both exchange pleasantries, then make plans for dinner and a movie, with directions to the restaurant and a chat about the Flixster reviews. You hang up and think about what you’re going to say to keep her interested until the appetizers arrive. You called her on your mobile phone so you have the outgoing number saved in case you need to call back.

There, now you understand MSRPC. No really, you do…

  1. A client application knows about a server application and wants to communicate with it.
  2. The client computer uses name resolution to locate the computer where that server application runs.
  3. The client app connects to an endpoint locator and requests access to the server application.
  4. The endpoint locator provides that info and the client connects to the server with an initial conversation.
  5. The client and server apps exchange instructions and data.
  6. The client and server apps disconnect.
  7. The client computer has a cache of name resolution and the connection that can save time reconnecting later.   

RPC allows a client application to let other computers work on its behalf, offloading processing to more powerful centralized servers. Instead of sending real functions over the network, the client tells the server what functions to run, and then the server sends the data back. This has nothing to do with the OS: some of these applications can be both client and server – for instance, Active Directory multi-master replication. That RPC application is LSASS.EXE. I’m going to use it as our sample app.


There are a few important terms to understand:

  • Endpoint mapper – a service listening on the server, which guides client apps to server apps by port and UUID
  • Tower – describes the RPC protocol, to allow the client and server to negotiate a connection
  • Floor – the contents of a tower with specific data like ports, IP addresses, and identifiers
  • UUID – a well-known GUID that identifies the RPC application. The UUID is what you use to see a specific kind of RPC application conversation, as there are likely to be many
  • Opnum – the identifier of a function that the client wants the server to execute. It’s just a hexadecimal number, but a good network analyzer will translate the function for you. MSDN can too. If neither knows, your application vendor must tell you
  • Port – the communication endpoints for the client and server applications
  • Stub data – the information given to functions and data exchanged between the client and server. This is the payload; the important part

There’s a lot more but we’re getting into developer country. I know it sounds like jabber, so let’s dissect this with a real-world example using our old friend NetMon and the latest open source parsers.

Back to reality

Here I have two DCs in the same AD site, named WIN2008R2-01 and WIN2008R2-02, with respective IP addresses of 10.0.0.101 and 10.0.0.102. I reboot DC2 and have a network capture running on DC1. I create a brand new test user and let it replicate, then I stop the capture. It’s critical that a network capture sees the whole conversation, or it will be a mess to analyze; if possible, captures should always be running on both client and server, but in this case that’s not possible due to the reboot.

[screenshot: the raw AD replication capture in NetMon]

When you first examine AD replication traffic in NetMon (like above) it looks like Greek. What the heck is a stub parser? DRSR?

Open the Options menu and select Parser Profiles. The reason you see the “Windows stub parser” messages is that by default, NetMon uses a balanced set of parsers designed for limited analysis without packet loss.

[screenshot: the Parser Profiles options]

When analyzing captures on your desktop, set the active parser to “Windows” and you get the most detail.

[screenshot: setting the active parser profile to Windows]

While you’re in the Options, I also recommend configuring color filters. Since I am examining AD replication, I want visual cues for DRSR (Directory Replication Service Remote protocol), EPM (RPC Endpoint Mapper), MSRPC, and DNS. This makes skimming a capture easier.

[screenshot: color filters defined for DRSR, EPM, MSRPC, and DNS]

Now I add a simple filter of: msrpc. Better. Let’s start deciphering:

[screenshot: the capture filtered to msrpc, showing the endpoint mapper request]

Right away, we see the endpoint mapper request above. The tower for Directory Replication is in that request, using the UUID E3514235-4B06-11D1-AB04-00C04FC2DCD2 (that's how Netmon knows to parse it, by the way). It is connecting to TCP port 135. This happens shortly after LSASS.EXE starts, as domain controllers are nearly always talking about replication.

Naturally, there is a response, and it contains several key ingredients:

[screenshot: the endpoint mapper response with its towers, floors, and ports]

You can see the towers - there may be more than one - and the floors in each tower with their ports. Importantly, you also see the status of the attempted connection. And a specific server port is listed. That port may be dynamic or static, it depends on the application’s configuration.
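As an aside, you don’t need a capture to see what’s registered with the endpoint mapper. The free PortQry tool queries it directly; -e 135 dumps every registered interface UUID with its current port (the server name here is just this lab’s DC):

portqry.exe -n win2008r2-01 -e 135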

Now the client application opens a local client port (again, maybe dynamic, maybe static) and binds to that new application port, using security; the original connection, by default, did not require special permissions - EPM is a switchboard, remember. Because this is MSRPC and domain controllers, this means Kerberos and packet privacy are required. This bind phase below is negotiation.

[screenshots: the client's bind request to the new application port]

The server responds with the (hopefully) successful negotiation, providing details about which security protocols were selected for further encryption of the traffic. The NegState field shows how this is not yet complete, but things are proceeding as planned.

[screenshot: the server's bind response showing the NegState field]

This bind was the negotiation. What follows is the completion of the authentication and encapsulation phase, called an ALTER_CONTEXT operation. If all goes well, the authentication is accepted and RPC application communication proceeds with some nice secure packet payloads.

[screenshot: the ALTER_CONTEXT exchange]

Everything after this point is application… stuff. RPC connected from a client port to a server port and then communicates along that "channel" for the rest of the conversation. The two halves of the application send each other requests and responses, with stub data used by the application's functions.

Every application is different, but once you know each one's rules, it will work in a (relatively) predictable fashion. Since this is the well-documented Directory Replication Services application, what happens next is the DC creates a context handle, called a DRSBIND. It then does some work. Let's take a look at one example of the work by switching the NetMon filter to just DRSR, then apply it to our scenario.

[screenshot: the capture filtered to DRSR]

Netmon is politely translating all of these RPC functions above into semi-intelligible words, like DRSBind, DRSReplicaSync, and DRSGetNCChanges. It knows that when there is an opnum it understands for a given protocol, it means an RPC function that the client is telling the server to run remotely on the client's behalf.

If you examine one of those packets, you see that the data itself is encrypted (good!), but with knowledge of the opnum's purpose and that RPC reached this stage, you have a decent idea what it is doing or how to look it up based on the UUID and Opnum information, even if your network parsers are terrible. In this case:

http://msdn.microsoft.com/en-us/library/cc228532(v=PROT.13).aspx

  • IDL_DRSBind (Opnum 0) – Creates a context handle necessary to call any other method in this interface.
  • IDL_DRSReplicaSync (Opnum 2) – Triggers replication from another DC.
  • IDL_DRSGetNCChanges (Opnum 3) – Replicates updates from an NC replica on the server.
  • IDL_DRSCrackNames (Opnum 12) – Looks up each of a set of objects in the directory and returns it to the caller in the requested format.
  • IDL_DRSUnbind (Opnum 1) – Destroys a context handle previously created by the IDL_DRSBind method.


Importantly, you know that RPC and the network appear to be functioning correctly, so any application problems are likely inside the application itself. If the application has internal logging, you can use these network captures to correlate each opnum request/response to real work, and perhaps see where things are failing internally. If the application doesn’t have good security, you can see exactly what it's doing - but so can anyone else. Probably something to bring to the third party vendor's attention, as it will not be Microsoft.

A polite application will tear down the connection with noticeable "unbind" traffic, and perhaps even send a network reset, but many simply abandon the conversation and let Windows deal with it later.

[screenshot: the unbind traffic at the end of the conversation]

A final note: a domain controller has a great many RPC conversations going with multiple partners; always ensure you are looking at the same conversations by filtering on IP addresses and ports, as well as your network analysis tool’s conversation ID system. NetMon makes this pretty easy:

[screenshot: NetMon's conversation view]

And we're done. See? It’s just a phone call with a nice girl from Microsoft. Don’t be intimidated when she knows more about computers than you do, bub.

Until next time.

Ned "really pedantic chatter" Pyle

Security Compliance Manager 2.5 Beta is out


Hi folks, Ned here with a quickie advert: The Security Compliance Manager 2.5 beta released the other day, with a bunch of new features and other goo.

  • Integration with the System Center 2012 IT GRC Process Pack for Service Manager Beta: Product baseline configurations are integrated into the IT GRC Process Pack to provide oversight and reporting of your compliance activities.
  • Gold master support: Import and take advantage of your existing Group Policy or create a snapshot of a reference machine to kick-start your project.
  • Configure stand-alone machines: Deploy your configurations to non-domain joined computers using the new GPO Pack feature.
  • Updated security guidance: Take advantage of the deep security expertise and best practices in the updated security guides, and the attack surface reference workbooks to help reduce the security risks that you consider to be the most important.
  • Compare against industry best practices: Analyze your configurations against prebuilt baselines for the latest Windows client and server operating systems.
  • NEW baselines include:
    • Exchange Server 2007 SP3 Security Baseline
    • Exchange Server 2010 SP2 Security Baseline
  • Updated client product baselines include:
    • Windows 7 SP1 Security Compliance Baseline
    • Windows Vista SP2 Security Compliance Baseline
    • Windows XP SP3 Security Compliance Baseline
    • Office 2010 SP1 Security Baseline
    • Internet Explorer 8 Security Compliance Baseline

Hot damn, #2 and #3 are what everyone kept asking for, and they’ve finally been delivered.

Never heard of SCM? For shame, I’ve discussed it here a few times. You just don’t care what I have to say, DO YOU? I AM GOING TO SPEND FOUR HOURS ON THE PHONE TALKING ABOUT YOU WITH MY GIRLFRIENDS!!!

- Ned “SCMbag” Pyle

If you use Symantec Products, Read Me


Ned here again, with a public service announcement similar to the previous one we did for RSA as it implicitly affects so many Microsoft customers. Symantec has announced:

Symantec can confirm that a segment of its source code has been accessed. Upon investigation of the claims made by Anonymous regarding source code disclosure, Symantec believes that the disclosure was the result of a theft of source code that occurred in 2006.

Read the rest here: http://www.symantec.com/theme.jsp?themeid=anonymous-code-claims&inid=us_ghp_banner1_anonymous

Older versions of their security products appear to be safe as long as you were maintaining patching (as always with early announcements, check back to make sure this story doesn’t change). However, if you use pcAnywhere you must update (for free) to a patched version of 12.5 immediately. It goes without saying that if you were using pcAnywhere prior to this announcement, you should commence auditing your remote access. Symantec isn’t clowning around here; their actual guidance is that you should not allow pcAnywhere external access to your corporate network at all:

Customers should block pcAnywhere assigned ports (5631, 5632) on Internet facing network connections, or shut off port forwarding of these ports. Blocking these ports will help ensure that an outside entity will not have access to pcAnywhere through these ports, and will help ensure that the use of pcAnywhere remains within the confines of the corporate network.

Which kind of defeats the purpose as I understand it, but whatever.

- Ned “get to it” Pyle

Friday Mail Sack: Carl Sandburg Edition


Hi folks, Jonathan again. Ned is taking some time off visiting his old stomping grounds – the land of Mother-in-Laws and heart-breaking baseball. Or, as Sandburg put it:

“Hog Butcher for the World,
Tool Maker, Stacker of Wheat,
Player with Railroads and the Nation's Freight Handler;
Stormy, husky, brawling,
City of the Big Shoulders”

Cool, huh?

Anyway, today we talk about:

And awayyy we go!

Question

When thousands of clients are rebooted for Windows Update or other scheduled tasks, my domain controllers log many KDC 7 System event errors:

Log Name: System
Source: Microsoft-Windows-Kerberos-Key-Distribution-Center
Event ID: 7
Level: Error
Description:

The Security Account Manager failed a KDC request in an unexpected way. The error is in the data field.

Error 170000C0

I’m trying to figure out if this is a performance issue, if the mass reboots are related, if my DCs are over-utilized, or something else.

Answer

That extended error is:

C0000017 = STATUS_NO_MEMORY - {Not Enough Quota} - Not enough virtual memory or paging file quota is available to complete the specified operation.
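As an aside, you can decode these yourself: the event’s data field is a little-endian byte dump, so 17 00 00 C0 reads back as 0xC0000017, and certutil is one quick decoder:

certutil -error 0xc0000017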

The DCs are being pressured with so many requests that they are running out of Kernel memory. We see this very occasionally with applications that make heavy use of the older SAMR protocol for lookups (instead of say, LDAP). In some cases we could change the client application's behavior. In others, the customer just had to add more capacity. The mass reboots alone are not the problem here - it's the software that runs at boot up on each client that is then creating what amounts to a denial of service attack against the domain controllers.

Examine one of the client computers mentioned in the event for all non-Windows-provided services, scheduled tasks that run at startup, SCCM/SMS at boot jobs, computer startup scripts, or anything else that runs when the computer is restarted. Then get promiscuous network captures of that computer starting (any time, not en masse) while also running Process Monitor in boot mode, and you'll probably see some very likely candidates. You can also use SPA or AD Data Collector sets (http://blogs.technet.com/b/askds/archive/2010/06/08/son-of-spa-ad-data-collector-sets-in-win2008-and-beyond.aspx) in combination with network captures to see exactly what protocol is being used to overwhelm the DC, if you want to troubleshoot the issue as it happens. Probably at 3AM, that sounds sucky.

Ultimately, the application causing the issue must be stopped, reconfigured, or removed - the only alternative is to add more DCs as a capacity Band-Aid or stagger your mass reboots.

Question

Is it possible to have 2003 and 2008 servers co-exist in the same DFS namespace? I don’t see it documented either “for” or “against” on the blog anywhere.

Answer

It's totally ok to mix OSes in the DFSN namespace, as long as you don't use Windows Server 2008 ("V2 mode") namespaces, which won't allow any Win2003 servers. If you are using DFSR to replicate the data, make sure all servers have the latest DFSR hotfixes (here and here), as there are incompatibilities in DFSR that these hotfixes resolve.

Question

Should I create DFS namespace folders (used by the DFS service itself) under NTFS mount points? Is there any advantage to this?

Answer

DFSN management tools do not allow you to create DFSN roots and links under mount points ordinarily, and once you do through alternate hax0r means, they are hard to remove (you have to use FSUTIL). Ergo, do not do it – the management tools blocking you means that it is not supported.

There is zero value in placing the DFSN special folders under mount points - the DFSN special folders consume no space, do not contain files, and exist only to provide reparse point tags to the DFSN service and its file IO driver goo. By default, they are configured on the root of the C: drive in a folder called c:\dfsroots. That ensures that they are available when the OS boots. Putting them under a mount point only breaks removing them later and does not serve any convincing purpose.

Question

How do you back up the Themes folder using USMT4 in Windows 7?

Answer

The built-in USMT migration code copies the settings but not the files, as it knows the files will exist somewhere on the user’s source profile and that those are being copied by the migdocs.xml/miguser.xml. It also knows that the Themes system will take care of the rest after migration; the Themes system creates the transcoded image files using the theme settings and copies the image files itself.

Note here how after scanstate, my USMT store’s Themes folder is empty:

[screenshot: the USMT store's Themes folder, empty after scanstate]

After I loadstate that user, the Themes system fixed it all up in that user’s real profile when the user logged on:

[screenshot: the user's Themes folder, rebuilt after loadstate]

However, if you still specifically need to copy the Themes folder intact for some reason, here’s a sample custom XML file:

<?xml version="1.0" encoding="UTF-8"?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/migratethemefolder">
  <component type="Documents" context="User">
    <!-- sample theme folder migrator -->
    <displayName>ThemeFolderMigSample</displayName>
    <role role="Data">
      <rules>
        <include filter='MigXmlHelper.IgnoreIrrelevantLinks()'>
          <objectSet>
            <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Themes\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>

And here it is in action:

[screenshot: the custom XML migrating the Themes folder]
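To try the sample yourself, a scanstate/loadstate pair along these lines would exercise it (the store path and the custom XML file name are placeholders; save the sample above as whatever you like):

scanstate.exe c:\store /i:migdocs.xml /i:migapp.xml /i:themefolder.xml
loadstate.exe c:\store /i:migdocs.xml /i:migapp.xml /i:themefolder.xml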

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

The only hard and fast rule is that the forward link (flink) be an even number and the backward link (blink) be the flink's ID plus one. In your case, the flink is -912314984 then the blink had better be -912314983, which I assume is the case since things are working. But, we were curious when you posted the linkID documentation from MSDN so we dug a little deeper.

The fact that your linkIDs are negative numbers is correct and expected, and is the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, you're all good.

Question

I am trying to delegate permissions to the DBA team to create, modify, and delete SPNs, since they're the team that swaps the local accounts SQL is installed under for the domain service accounts we create to run SQL.

Documentation on the Internet has led me down the rabbit hole to no end.  Can you tell me how this is done in a W2K8 R2 domain and a W2K3 domain?

Answer

So you will want to delegate a specific group of users -- your DBA team -- permissions to modify the SPN attribute of a specific set of objects -- computer accounts for servers running SQL server and user accounts used as service accounts under which SQL Server can run.

The easiest way to accomplish this is to put all such accounts in one OU, e.g. OU=SQL Server Accounts, and run the following commands:

Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;user
Dsacls "OU=SQL Server Accounts,DC=corp,DC=contoso,DC=com" /I:S /G "CORP\DBA Team":WPRP;servicePrincipalName;computer

These two commands will grant the DBA Team group permission to read and write the servicePrincipalName attribute on user and computer objects in the SQL Server Accounts OU.

Your admins should then be able to use setspn.exe to modify that property on the designated accounts.
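With the delegation in place, a DBA could register an SPN along these lines (the host and account names are made-up examples; -S verifies no duplicate SPN exists before adding and requires the Win2008+ tools, so use -A on W2K3):

setspn -S MSSQLSvc/sql01.corp.contoso.com:1433 CORP\svc-sql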

But…what if you have a large number of accounts spread across multiple OUs? The above solution only works well if all of your accounts are concentrated in a few (preferably one) OUs. In this case, you basically have two options:

  1. You can run the two commands specifying the root of the domain as the object, but you would be delegating permissions for EVERY user and computer in the domain. Do you want your DBA team to be able to modify accounts for which they have no legitimate purpose?
  2. Compile a list of specific accounts the DBA team can manage and modify each of them individually. That can be done with a single command line. Create a text file that contains the DNs of each account for which you want to delegate permissions and then use the following command:

    for /f "tokens=*" %i in (object-list.txt) do dsacls "%i" /G "CORP\DBA Team":WPRP;servicePrincipalName

None of these are really great options, however, because you’re essentially giving a group of non-AD Administrators the ability to screw up authentication to what are perhaps critical business resources. You might actually be better off creating an expedited process whereby these DBAs can submit a request to a real Administrator who already has permissions to make the required changes, as well as the experience to verify such a change won’t cause any problems.

Author’s Note: This gentleman pointed out in a reply that these DBAs wouldn’t want him messing with tables, rows and the SA account, so he doesn’t want them touching AD. I thought that was sort of amusing.

Question

What is PowerShell checking when you run get-adcomputer -properties * -filter * | format-table Name,Enabled? Is Enabled an attribute, a flag, a bit, a setting? What, if anything, would that setting show up as in something like ADSIEdit.msc?

I get that stuff like samAccountName, sn, telephonenumber, etc. are attributes, but what the heck is enabled?

Answer

All objects in PowerShell are PSObjects, which essentially wrap the underlying .NET or COM objects and expose some or all of the methods and properties of the wrapped object. In this case, Enabled is a property ultimately inherited from the System.DirectoryServices.AccountManagement.AuthenticablePrincipal .NET class. This answer isn’t very helpful, however, as it just moves your search for answers from PowerShell to the .NET Framework, right? Ultimately, you want to know how a computer’s or user’s account state (enabled or disabled) is stored in Active Directory.

Whether or not an account is disabled is reflected in the appropriate bit being set on the object’s userAccountControl attribute. Check out the following KB: How to use the UserAccountControl flags to manipulate user account properties. You’ll find that the second least significant bit of the userAccountControl bitmask is called ACCOUNTDISABLE, and it reflects the account state: 1 is disabled and 0 is enabled.

If you find that you need to use an actual LDAP query to search for disabled accounts, then you can use a bitwise filter. The appropriate LDAP filter would be:

(UserAccountControl:1.2.840.113556.1.4.803:=2)
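Tying it together in PowerShell (the account name is a made-up example):

# Read the bitmask and test the ACCOUNTDISABLE bit (0x2) directly
$uac = (Get-ADUser jdoe -Properties userAccountControl).userAccountControl
[bool]($uac -band 2)   # True means the account is disabled

# Or find every disabled account with the bitwise LDAP filter above
Get-ADUser -LDAPFilter '(userAccountControl:1.2.840.113556.1.4.803:=2)'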

Other stuff

I watched this and, despite the lack of lots of moving arms and tools, had sort of a Count Zero moment:

And just for Ned (because he REALLY loves this stuff!): Kittens!

No need to rush back, dude.

Jonathan “Payback is a %#*@&!” Stephens

Purging Old NT Security Protocols


Hi folks, Ned here again (with some friends). Everyone knows that Kerberos is Microsoft’s preeminent security protocol and that NTLM is both inefficient and, in some iterations, not strong enough to avoid concerted attack. NTLM V2 using complex passwords stands up well to common hash cracking tools like Cain and Abel, Ophcrack, or John the Ripper. On the other hand, NTLM V1 is defeated far faster and LM is effectively no protection at all.

I discussed NTLM auditing years ago, when Windows 7 and Windows Server 2008 R2 introduced the concept of NTLM blocking. That article was for well-controlled environments where you thought that there was some chance of disabling NTLM – only modern clients and servers, the latest applications, and Active Directory. In a few other articles, I gave some further details on the limitations of the Windows auditing system logging. It turns out that while we’re OK at telling when NTLM was used, we’re not great at describing which flavor. For instance, Windows Server 2008+ security auditing can tell you about the NTLM version through the 4624 event that states a Package Name (NTLM only): NTLM V1 or Package Name (NTLM only): NTLM V2, but all prior operating systems cannot. None of the older auditing can tell you if LM is used either. Windows Server 2008 R2 NTLM auditing only shows you NTLM usage in general.

Today the troika of Dave, Jonathan, and Ned are here to help you discover which computers and applications are using NTLM V1 and LM security, regardless of your operating system. It’s safe to say that some people aren’t going to like our answers or how much work this entails, but that’s life; when LM security was created as part of LAN Manager and OS/2 by Microsoft and IBM, Dave and I were in grade school and Jonathan was only 48.

If you need to keep using NTLM V2 and simply want to hunt down the less secure precursors, this should help.

Finding NTLM V1 and LM Usage via network captures

The only universal, OS-agnostic way you can tell which clients are sending NTLMv1 and LM responses is by examining a network trace taken from the destination computers. Using Netmon 3.4 or another network capture tool, look for packets with a negotiated NTLM security mechanism.

This first example is with LMCompatibilityLevel set to 0 on clients. This example is an SMB session request packet, specifying NTLM authentication.

Here is the SMB SESSION SETUP request, which specifies the security token mechanism:

  Frame: Number = 15, Captured Frame Length = 220, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 747, Total IP Length = 206

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=166, Seq=2204022974 - 2204023140, Ack=820542383, Win=32724 (scale factor 0x2) = 130896

+ SMBOverTCP: Length = 162

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0000

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 74 (0x4A)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - InitialContextToken:

      + ApplicationHeader:

      + ThisMech: SpnegoToken (1.3.6.1.5.5.2)

      - InnerContextToken: 0x1

       - SpnegoToken: 0x1

        + ChoiceTag:

        - NegTokenInit:

         + SequenceHeader:

         + Tag0:

         + MechTypes: Prefer NLMP (1.3.6.1.4.1.311.2.2.10)

         + Tag2:

         + OctetStringHeader:

         - MechToken: NTLM NEGOTIATE MESSAGE

          - NLMP: NTLM NEGOTIATE MESSAGE

             Signature: NTLMSSP

             MessageType: Negotiate Message (0x00000001)

           + NegotiateFlags: 0xE2088297 (NTLM v2128-bit encryption, Always Sign)

           + DomainNameFields: Length: 0, Offset: 0

           + WorkstationFields: Length: 0, Offset: 0

           + Version: Windows 6.1 Build 7601 NLMPv15

Next, the server sends its NTLM challenge back to the client:

  Frame: Number = 16, Captured Frame Length = 447, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24310, Total IP Length = 433

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=393, Seq=820542383 - 820542776, Ack=2204023140, Win=512 (scale factor 0x8) = 131072

+ SMBOverTCP: Length = 389

- SMB2: R  - NT Status: System - Error, Code = (22) STATUS_MORE_PROCESSING_REQUIRED  SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0002, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 317 (0x13D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag1:

       + SupportedMech: NLMP (1.3.6.1.4.1.311.2.2.10)

       + Tag2:

       + OctetStringHeader:

       - ResponseToken: NTLM CHALLENGE MESSAGE

        - NLMP: NTLM CHALLENGE MESSAGE

          Signature: NTLMSSP

           MessageType: Challenge Message (0x00000002)

        + TargetNameFields: Length: 12, Offset: 56

         + NegotiateFlags: 0xE2898215 (NTLM v2128-bit encryption, Always Sign)

         + ServerChallenge: 67F9C5F851F2CD73

           Reserved: Binary Large Object (8 Bytes)

         + TargetInfoFields: Length: 214, Offset: 68

         + Version: Windows 6.1 Build 7601 NLMPv15

           TargetNameString: CORP01

         + AvPairs: 7 pairs

The client calculates the response to the challenge, using the various available hashes of the password. Note how this response includes both LM and NTLMv1 challenge responses.

  Frame: Number = 17, Captured Frame Length = 401, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 748, Total IP Length = 387

+ Tcp: Flags=...AP..., SrcPort=49235, DstPort=Microsoft-DS(445), PayloadLen=347, Seq=2204023140 - 2204023487, Ack=820542776, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 343

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

   + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

   SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 255 (0xFF)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

        - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

         - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v1, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 24, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 202

         + NegotiateFlags: 0xE2888215 (NTLM v2128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheckNotPresent: 6243C42AF68F9DFE30BD31BFC722B4C0

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 3995E087245B6F7100000000000000000000000000000000

         + NTLMV1ChallengeResponse: B0751BDCB116BA5737A51962328D5CCD19EEBEBB15A69B1E

         + SessionKeyString: 397DACB158C9F10EF4903F10D4CBE032

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

The server then responds with successful negotiation state:

  Frame: Number = 18, Captured Frame Length = 159, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-49],SourceAddress:[00-15-5D-05-B4-44]

+ Ipv4: Src = 10.10.10.27, Dest = 10.10.10.20, Next Protocol = TCP, Packet ID = 24312, Total IP Length = 145

+ Tcp: Flags=...AP..., SrcPort=Microsoft-DS(445), DstPort=49235, PayloadLen=105, Seq=820542776 - 820542881, Ack=2204023487, Win=510 (scale factor 0x8) = 130560

+ SMBOverTCP: Length = 101

- SMB2: R   SESSION SETUP (0x1), SessionFlags=0x0

    SMBIdentifier: SMB

  + SMB2Header: R SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0019

  - RSessionSetup:

     StructureSize: 9 (0x9)

   + SessionFlags: 0x0

     SecurityBufferOffset: 72 (0x48)

     SecurityBufferLength: 29 (0x1D)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-completed (0)

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

To contrast this, consider the challenge response packet when LMCompatibility is set to 4 or 5 on the client (meaning it is not allowed to send anything but NTLM V2). The LM response is null, while the NTLMv1 response isn't included at all.

  Frame: Number = 17, Captured Frame Length = 763, MediaType = ETHERNET

+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[00-15-5D-05-B4-44],SourceAddress:[00-15-5D-05-B4-49]

+ Ipv4: Src = 10.10.10.20, Dest = 10.10.10.27, Next Protocol = TCP, Packet ID = 844, Total IP Length = 749

+ Tcp: Flags=...AP..., SrcPort=49231, DstPort=Microsoft-DS(445), PayloadLen=709, Seq=4045369997 - 4045370706, Ack=881301203, Win=32625 (scale factor 0x2) = 130500

+ SMBOverTCP: Length = 705

- SMB2: C   SESSION SETUP (0x1)

    SMBIdentifier: SMB

  + SMB2Header: C SESSION SETUP (0x1),TID=0x0000, MID=0x0003, PID=0xFEFF, SID=0x0021

  - CSessionSetup:

     StructureSize: 25 (0x19)

     VcNumber: 0 (0x0)

  + SecurityMode: 1 (0x1)

   + Capabilities: 0x1

     Channel: 0 (0x0)

     SecurityBufferOffset: 88 (0x58)

     SecurityBufferLength: 617 (0x269)

     PreviousSessionId: 0 (0x0)

   - securityBlob:

    - GSSAPI:

     - NegotiationToken:

      + ChoiceTag:

      - NegTokenResp:

       + SequenceHeader:

       + Tag0:

       + NegState: accept-incomplete (1)

       + Tag2:

       + OctetStringHeader:

        - ResponseToken: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

         - NLMP: NTLM AUTHENTICATE MESSAGE, Version: v2, Domain: CORP01, User: Administrator, Workstation: CONTOSO-CLI-01

           Signature: NTLMSSP

           MessageType: Authenticate Message (0x00000003)

         + LmChallengeResponseFields: Length: 24, Offset: 154

         + NtChallengeResponseFields: Length: 382, Offset: 178

         + DomainNameFields: Length: 12, Offset: 88

         + UserNameFields: Length: 26, Offset: 100

         + WorkstationFields: Length: 28, Offset: 126

         + EncryptedRandomSessionKeyFields: Length: 16, Offset: 560

         + NegotiateFlags: 0xE2888215 (NTLM v2128-bit encryption, Always Sign)

         + Version: Windows 6.1 Build 7601 NLMPv15

         + MessageIntegrityCheck: 2B69C069DD922D4A841D0EC43939DF0F

           DomainNameString: CORP01

           UserNameString: Administrator

           WorkstationString: CONTOSO-CLI-01

         + LmChallengeResponseStruct: 000000000000000000000000000000000000000000000000

         + NTLMV2ChallengeResponse: CD22D7CC09140E02C3D8A5AB623899A8

         + SessionKeyString: AF31EDFAAF8F38D1900D7FBBDCB43760

       + Tag3:

       + OctetStringHeader:

       + MechListMic: Version: 1

By taking traces and filtering on the NTLMV1ChallengeResponse field, you find those hosts that are sending NTLMv1 responses and can determine whether you need to upgrade them or whether they simply have the wrong LMCompatibility values set through security policy.
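In NetMon, that filter is built from the parser field names visible in the frames above; treat the exact property path as an assumption that may vary with your parser version:

// NetMon display filter: show only frames carrying an NTLMv1 challenge response
NLMP.NTLMV1ChallengeResponse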

Finding LM usage via Netlogon debug logs

If you just want to detect LM authentication and are not looking to spend time in network captures, you can instead enable Netlogon logging on all DCs and servers in the environment.

Nltest /dbflag:2080ffff
net stop NetLogon
net start NetLogon

This creates netlogon.log in the C:\Windows\Debug folder; it can grow to a maximum of 20 MB by default. At that point, the server renames the file to netlogon.bak and starts a new netlogon.log. When that file reaches 20 MB, the server deletes netlogon.bak, renames netlogon.log to netlogon.bak, and starts a new netlogon.log. To make these log files larger, you can use a registry entry or group policy:

Registry

Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
Value Name: MaximumLogFileSize
Value Type: REG_DWORD
Value Data: <maximum log file size in bytes>

Group Policy

\Computer Configuration\Administrative Templates\System\Net Logon\Maximum Log File Size
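For example, to raise the cap to 100 MB with PowerShell (a sketch using the registry path above; the value is in bytes):

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' `
    -Name MaximumLogFileSize -PropertyType DWord -Value 100MB -Force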

You aren't trying to capture all data here - just useful samples - but if they wrap so much that you're unsure if they are accurate at all, increasing size is a good idea. As an alternative, you can create a scheduled task that runs ONSTART or a computer startup script. Either of them can use this batch file to make backups of the netlogon log by date/time and the computer name:

REM Sample script to copy the netlogon.bak to a netlogon_DATETIME_COMPUTERNAME.log backup form every 5 minutes

:start
if exist %windir%\debug\netlogon.bak goto copylog

:copylog_return
sleep 300
goto start

:copylog
for /f "tokens=1-7 delims=/:., " %%a in ("%DATE% %TIME%") do (set DATETIME=%%a-%%b-%%c_%%d-%%e-%%f)
copy /v %windir%\debug\netlogon.bak %windir%\debug\netlogon_%DATETIME%_%COMPUTERNAME%.log
if %ERRORLEVEL% EQU 0 del %windir%\debug\netlogon.bak
goto copylog_return

Periodically, gather all of the NetLogon logs from the DCs and servers and place them in a single folder. Once you have assembled the NetLogon logs into a single spot, you may then use the following LogParser command from that folder to parse them all for a count of unique UAS logons to the domain controller by workstation:

Logparser.exe "SELECT TO_UPPERCASE(EXTRACT_SUFFIX(TEXT,0,'returns ')) AS ERR, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'NetrLogonUasLogon of '), 0, 'from ')) as USER, TO_UPPERCASE (extract_prefix(extract_suffix(TEXT, 0, 'from '), 0, 'returns ')) as WORKSTATION, COUNT(*) FROM '*netlogon.*' WHERE INDEX_OF(TO_UPPERCASE (TEXT),'LOGON') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'RETURNS') >0 AND INDEX_OF(TO_UPPERCASE(TEXT),'NETRLOGONUASLOGON') >0 GROUP BY ERR, USER, WORKSTATION ORDER BY COUNT(*) DESC" -i:TEXTLINE -rtp:-1 >UASLOGON_USER_BY_WORKSTATION.txt

UASLOGON_USER_BY_WORKSTATION.txt contains the unique computers and counts. LogParser is available for download from here.

FIND and PowerShell are options here as well. The simplest approach is just to return the lines, perhaps into a text file for later sorting in say, Excel (which is very fast at sorting and allows you to organize your data).

[screenshots: FIND and PowerShell examples of parsing the collected logs]
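Here’s one rough (deliberately not exact) PowerShell stand-in for the LogParser query, assuming the usual "NetrLogonUasLogon of <user> from <workstation> returns <code>" line format; it counts hits per workstation:

# Count UAS logon lines per workstation across all collected logs
Select-String -Path .\*netlogon*.log -Pattern 'NetrLogonUasLogon of .+ from (\S+) returns' |
    ForEach-Object { $_.Matches[0].Groups[1].Value.ToUpper() } |
    Group-Object | Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize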

I'll wager someone in the comments will take on the rather boring challenge of exactly duplicating what LogParser does. I didn't have the energy this time around. :)

Final thoughts

Microsoft stopped using LM after Windows 95/98/ME. If you do find specific LM-only usage and you don't have any (unsupported) Win9X computers, this is a third party application. A really heinous one.

All supported versions of Windows obey the LMCompatibility registry setting, and can use NTLMv2 just as easily as NTLMv1. At that point, analyzing network traces just becomes useful for tracking down those hosts that have applied the policy, but have not yet been rebooted. Considering how unsafe LM and NTLMv1 are, enabling NoLMHash and LMCompatibility 4 or 5 on all computers may be a faster alternative to auditing. It could cause some temporary outages, but would definitely catch anyone requiring unsafe protocols. There's no better auditing than a complaining application administrator.

Finally, do not limit your NTLM inventory to domain controllers and file or application servers. A comprehensive project requires you examine all computers in the environment, as even a Windows XP workstation can be a "server" for some application. Use a multi-pronged approach, where you also inventory operating systems through network probing - if you have Windows 95 or old SAMBA lying around somewhere on a shop floor, they are almost guaranteed to use insecure protocols.

Until next time,

- Ned “and Dave and Jonathan and Jonathan's in-home elderly care nurse” Pyle

Friday Mail Sack: Get Off My Lawn Edition


Hi folks, Ned here again. I know this is supposed to be the Friday Mail Sack but things got a little hectic and... ah heck, it doesn't need explaining, you're in IT. This week - with help from the ever-crotchety Jonathan Stephens - we talk about:

Now that Jonathan's Rascal Scooter has finished charging, on to the Q & A.

Question

We want a group policy linked to an OU that contains various computers, but it needs to apply only to our Windows 7 notebooks. All of our notebooks have names starting with an "N". Does group policy WMI filtering allow stacking conditions on the same group policy? 

Answer

Yes, you can chain together multiple query criteria, and they can even be from different classes or namespaces. For example, here I use both the Win32_OperatingSystem and Win32_ComputerSystem classes:

image

And here I use only the Win32_OperatingSystem class, with multiple filter criteria:

image

As long as they all evaluate TRUE, you get the policy. If you had a hundred of these criteria (please don’t) and 99 evaluate true but just one is false, the policy is skipped.

Note that my examples above would catch Win2008 R2 servers also; if you’ve read my previous posts, you know that you can also limit queries to client operating systems using the Win32_OperatingSystem property OperatingSystemSKU. Moreover, if you hadn’t used a predictable naming convention, you could also filter with Win32_SystemEnclosure and query the ChassisTypes property for 8, 9, or 10 (respectively: “Portable”, “Laptop”, and “Notebook”). And no, I do not know the difference between these; it is OEM-specific. Just like “pizza box” is for servers. You stay classy, WMI.
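To make that concrete, here is a sketch of the WQL you might pair in a single filter for the original question - both queries must evaluate true, and the name pattern comes from the question:

SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.1%" AND ProductType = "1"
SELECT * FROM Win32_ComputerSystem WHERE Name LIKE "N%"

Or, skipping the naming convention entirely, the chassis approach (assuming your OEM reports one of those three types):

SELECT * FROM Win32_SystemEnclosure WHERE ChassisTypes = "8" OR ChassisTypes = "9" OR ChassisTypes = "10"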

Question

Is changing LDAP MaxPoolThreads a good or bad idea?

Answer

MaxPoolThreads controls the maximum number of simultaneous threads per-processor that a DC uses to work on LDAP requests. By default, it’s four per processor core. Increasing this value would allow a DC/GC to handle more LDAP requests. So if you have too many LDAP clients talking to too few DCs at once, raising this can reduce LDAP application timeouts and periodic “hangs”. As you might have guessed, the biggest complainer here is often MS Exchange and Outlook. If the performance counters “ATQ Threads LDAP" & "ATQ Threads Total" are constantly at the maximum number based on the number of processors and the MaxPoolThreads value, then you are bottlenecking LDAP.

However!

DCs are already optimized to quickly return data from LDAP requests. If your hardware is even vaguely new and if you are not seeing actual issues, you should not increase this default value. MaxPoolThreads depends on non-paged pool memory, which on a Win2003 32-bit Windows OS is limited to 256MB (more on Win2008 32-bit). Meaning that if you still have not moved to at least x64 Windows Server 2003, don’t touch this value at all – you can easily hang your DCs. It also means you need to get with the times; we stopped making a 32-bit server OS nearly three years ago and OEMs stopped selling the hardware even before that. A 64-bit system's non-paged pool limit is 128GB.

In addition, changing the LDAP settings is often a Band-Aid that doesn’t address the real issue of DC capacity for your client/server base. Use SPA or AD Data Collector sets to determine "Clients with the Most CPU Usage" under the "Ldap Requests" section. That matters especially if the LDAP queries are not just frequent but also gross - there are built-in diagnostics logs to find poorly written requests:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics\
15 Field Engineering
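Setting the 15 Field Engineering value to 5 enables logging of expensive and inefficient searches as event 1644; set it back to 0 when you're done. For example:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "15 Field Engineering" /t REG_DWORD /d 5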

To categorize search operations as expensive or inefficient, two DWORD registry values are used:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Expensive Search Results Threshold

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters\
Inefficient Search Results Threshold

These DWORD registry values have the following defaults:

  • Expensive Search Results Threshold: 10000
  • Inefficient Search Results Threshold: 1000

For example, here’s an inefficient result written in the DS event log; yuck, ick, argh!:

Event Type: Information
Event Source: NTDS General
Event Category: Field Engineering
Event ID: 1644
Description:
The Search operation based at RootDSE
using the filter:
& ( | ( & ( (objectCategory = <val>) (objectSid = *) ! ( (sAMAccountType | <bit_val>) ) ) & ( (objectCategory = <val>) ! ( (objectSid = *) ) ) & ( (objectCategory = <val>) (groupType | <bit_val>) ) ) (aNR = <substr>) <startSubstr>*) )

visited 40 entries and returned 0 entries.

Finally, this article should be required reading to any application developers in your company:

Creating More Efficient Microsoft Active Directory-Enabled Applications -
http://msdn.microsoft.com/en-us/library/windows/desktop/ms808539.aspx#efficientadapps_topic04

(The title should be altered to “Creating even slightly efficient…” in my experience).

Question

I want to implement many-to-one certificate mappings by using Issuer and Subject DN match. In altSecurityIdentities I put the following string:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=user name

In a given example, a certificate with “cn=user name, cn=users, dc=contoso, dc=com” in the Subject field will be mapped to a user account, where I define the mappings. But in that example I get one-to-one mapping. Can I use wildcards here, say:

X509:<I>DC=com,DC=contoso,CN=Contoso CA<S>DC=com,DC=contoso,CN=users,CN=*

So that any certificate that contains “cn=<any value>, cn=users, dc=contoso, dc=com” will be mapped to the same user account?

Answer

[Sent from Jonathan while standing in the 4PM dinner line at Bob Evans]

Unfortunately, no. All that would do is map a certificate with a wildcard subject to that account. The only type of one-to-many mapping supported by the Active Directory mapper is configuring it to ignore the subject completely. Using this method, you can configure the AD mappings so that any certificate issued by a particular CA can be mapped to a single user account. See the following: http://technet.microsoft.com/en-us/library/bb742438.aspx#ECAA

Question

I've recently been working on extending my AD schema with a new back-linked attribute pair, and I used the instructions on this blog and MSDN to auto-generate the linkIDs for my new attributes. Confusingly, the resulting linkIDs are negative values (-912314983 and -912314984). The attributes and backlinks seem to work as expected, but when looking at the MSDN definition of the linkID attribute, it specifically states that the linkID should be a positive value. Do you know why I'm getting a negative value, and if I should be concerned?

Answer

[Sent from Jonathan’s favorite park bench where he feeds the pigeons]

The negative numbers are correct and expected, and are the result of a feature called AutoLinkID. Automatically generated linkIDs are in the range of 0xC0000000-0xFFFFFFFC (-1,073,741,824 to -4). This means that it is a good idea to use positive numbers if you are going to set the linkID manually. That way you are guaranteed not to conflict with automatically generated linkIDs.

The bottom line is, this is expected under the circumstances and you're all good.

Question

Is there any performance advantage to turning off the DFSR debug logging, lowering the number of logs, or moving the logs to another drive? You explained how to do this here in the DFSR debug series, but never mentioned it in your DFSR performance tuning article.

Answer

Yes, you will see some performance improvements turning off the logging or lowering the log count; naturally, all this logging isn’t free, it takes CPU and disk time. But before you run off to make changes, remember that if there are any problems, these logs are the only thing standing between you and the unemployment line. Your server will be much faster without any anti-virus software too, and your company’s profits higher without fire insurance; there are trade-offs in life. That’s why – after some brief agonizing, followed by heavy drinking – I decided not to include it in the performance article.

Moving the logs to another physical disk than Windows is safe and may take some pressure off the OS drive.

Question

When I try to join this Win2008 R2 computer to the domain, it gives an error I’ve never seen before:

"The following error occurred attempting to join the domain "contoso.com":
The request is not supported."

Answer

This server was once a domain controller. During demotion, something prevented the removal of the following registry value name:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters\
DSA Database file

Delete that "Dsa Database File" value name and attempt to join the domain again. It should work this time. If you take a gander at the %systemroot%\debug\netsetup.log, you’ll see another clue that this is your issue:

NetpIsTargetImageADC: Determined this is a DC image as RegQueryValueExW loaded Services\NTDS\Parameters\DSA Database file: 0x0
NetpInitiateOfflineJoin: The image at C:\Windows\system32\config\SYSTEM is a DC: 0x32

We started performing this check in Windows Server 2008 R2, as part of the offline domain join code changes. Hurray for unintended consequences!
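If you'd rather script the value removal than use regedit, something like this from an elevated prompt should do it (then retry the join):

reg delete "HKLM\System\CurrentControlSet\Services\NTDS\Parameters" /v "DSA Database file" /f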

Question

We have a largish AD LDS (ADAM) instance we update daily by importing CSV files that delete all of yesterday’s user objects and import today’s. Since we don’t care about deleted objects, we reduced the tombstoneLifetime to 3 days. The NTDS.DIT usage, as shown by the 1646 Garbage Collection Event ID, shows 1336 MB free with a total allocation of 1550 MB – this would suggest that there is a total of 214 MB of data in the database.

The problem is that Task Manager shows a total of 1,341,208K of Memory (Private Working Set) in use. The memory usage is reduced to around the 214MB size when LDS is restarted; however, when Garbage Collection runs the memory usage starts to climb. I have read many KB articles regarding GC but nothing explains what I am seeing here.

Answer

Generally speaking, LSASS (and DSAMAIN, its red-headed AD LDS cousin) is designed to allocate and retain more memory – especially ESE (aka “Jet”) cache memory – than ordinary processes, because LSASS/DSAMAIN are the core processes of a DC or AD/LDS server. I would expect memory usage to grow heavily during the import, the deletions, and then garbage collection; unless something else put pressure on the machine for memory, I’d expect the memory usage to remain. That’s how well-written Jet database applications work – they don’t give back the memory unless someone asks, because LSASS and Jet can reuse it much faster when needed if it’s already loaded; why return memory if no one wants it? That would be a performance bug unto itself.

The way to show this in practical terms is to start some other high-memory process and validate that DSAMAIN starts to return the demanded memory. There are test applications like this on the internet, or you can install some app that likes to gobble a lot of RAM. Sometimes I’ll just install Wireshark and load a really big saved network capture – that will do it in a pinch. :-D You can also use the ESE performance counters under the “Database” and “Database ==> Instances” to see more about how much of the memory usage is Jet database cache size.

Regular DCs have this behavior too, as do DFSR and other applications. You paid for all that memory; you might as well use it.

(Follow up from the customer where he provided a useful PowerShell “memory gobbler” example)

I ran the following Windows PowerShell script a few times to consume all available memory and the DSAMAIN process started releasing memory immediately as expected:

$chunk = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# Keep doubling the string to consume memory until allocation pressure forces other processes to give theirs back
for ($i = 0; $i -lt 5000; $i++)
{
       $chunk += $chunk
}

Question

When I migrate users from Windows 7 to Windows 7 using USMT 4.0, their pinned and automatic taskbar jump lists are lost. Is this expected?

Answer

Yes. For those poor $#%^&#s readers still using XP, Windows 7 introduced application taskbar pinning and a special menu called a jump list:

image

Pinned and Recent jump lists are not migrated by USMT, because the built-in OS Shell32 manifest called by USMT (c:\windows\winsxs\manifests\*_microsoft-windows-shell32_31bf3856ad364e35_6.1.7601.17514_non_ca4f304d289b7800.manifest) contains this specific criterion:

<pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent [*]</pattern>

Note how it is not Recent\* [*], which would grab the subfolder contents of Recent. It only copies the direct file contents of Recent. The pinned/automatic jump lists are stored in special files under the CustomDestinations and AutomaticDestinations folders inside the Recent folder. All the other contents of Recent are shortcut files to recently opened documents anywhere on the system:

image

If you examine these special files, you'll see that they are binary, unreadable, and totally proprietary:

image

Since these files are binary and embed all their data in a big blob of goo, they cannot simply be copied safely between operating systems using USMT. The paths they reference could easily change in the meantime, or the data they reference could have been intentionally skipped. The only way this would work is if the Shell team extended their shell migration plugin code to handle it. Which would be a fair amount of work, and at the time these manifests were being written, customers were not going to be migrating from Win7 to Win7. So no joy. You could always try copying them with custom XML, but I have no idea if it would work at all and you’re on your own anyway – it’s not supported.
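If you want to experiment anyway, an unsupported, untested custom XML sketch to grab those two folders might look something like this (again: on your own, not supported):

<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/jumplists">
  <component type="Documents" context="User">
    <displayName>Jump list folders (unsupported experiment)</displayName>
    <role role="Data">
      <rules>
        <include>
          <objectSet>
            <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent\AutomaticDestinations\* [*]</pattern>
            <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Recent\CustomDestinations\* [*]</pattern>
          </objectSet>
        </include>
      </rules>
    </role>
  </component>
</migration>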

Question

We have a third party application that requires DES encryption for Kerberos. It wasn’t working from our Windows 7 clients though, so we enabled the security group policy “Network security: Configure encryption types allowed for Kerberos” to allow DES. After that though, these Windows 7 clients stopped working in many other operations, with event log errors like:

Event ID: 4
Source: Kerberos
Type: Error
"The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/myserver.contoso.com. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (domain.com), and the client realm. Please contact your system administrator."

And “The target principal name is incorrect” or “The target account name is incorrect” errors connecting to network resources.

Answer

When you enable DES on Windows 7, you need to ensure you are not accidentally disabling the other encryption types. So don’t do this:

image

That means only DES is supported and you just disabled RC4, AES, etc.

Instead, do this:

image

If it exists at all and you want DES, you want this registry DWORD value to be 0x7fffffff on Windows 7 or Win2008 R2:

MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\
SupportedEncryptionTypes

If it’s set to 0x3, all heck will break loose. This security policy interface is admittedly tiresome in that it has no “enabled/disabled” toggle. Use GPRESULT /H or /Z to see how it’s applying if you’re not sure about the actual settings.
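To verify what actually landed on a client (assuming the policy wrote the value at all), you can query it directly:

reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters" /v SupportedEncryptionTypes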

Other Stuff

Windows 8 Consumer Preview releases February 29th, as if you didn’t already know it. Don’t ask me if this also means Windows Server 8 Beta the same exact day, I can’t say. But it definitely means the last 16 months of my life finally start showing some results. As will this blog…

Apparently we’ve been wrong about Han and Greedo since day one. I want to be wrong though. Thanks for passing this along Tony. And speaking of which, thanks to Ted O and the rest of the gang at LucasArts for the awesome tee!

This is a … creepily good music video? Definitely a nice find, Mark!


This is basically my home video collection

My new favorite site of the week? The Awesomer. Do not visit if you have to be somewhere in an hour.

Wait, no… my new favorite site is That’s Nerdaliscious. Do not read if hungry or dorky. 

Sick of everyone going on about Angry Birds? Love Chuck Norris? Go here now. There are a lot of these; don't miss Mortal Combat versus Donkey Kong.

Ah, there’s Waldo.

Likely the coolest advertisement for something that doesn’t yet exist that you will see this year.


I need to buy stock in SC Johnson. Can you imagine the Windex sales?!

Until next time.

- Ned “Generation X” Pyle with Jonathan “The Greatest Generation” Stephens


Friday Mail Sack: VROOM VROOM Edition


Hi folks, Jonathan here again. Ned’s a little busy right now trying to get items off the top shelf of the cabinet, so I thought I’d grab some responses he was working on off his desk and put this week’s Mail Sack together. Today we talk about:

Let me go get Ned a step stool, and then we’ll get started on the Q & A.

Question

If I use Auditing and remove a user’s group membership, I see Security Group Management events (4729, 4759, etc.). If I delete that user though, I only see “a user account was deleted” (4726) events. There’s no group membership event – is that normal?

Answer

[Carefully crafted by Ned in his little Treebicle.]

User deletion means that the System performs the group membership removal.  You will see the same behavior when you create a user – there is no audit event when they are added to the local Users group, for example. This lack of System update auditing is intentional; otherwise, the log would explode from useless information.

Question

I was reading documentation about the Account Operators group’s default behavior. I have found that despite what it says here, members of the account operators group can delete administrators. Is the documentation wrong or is this expected?

Answer

[Straight from the (tiny) desk of Ned.]

Let’s analyze what the article says versus what the author meant:

Members of this group can create, modify, and delete accounts for users, groups, and computers located in the Users or Computers containers and organizational units in the domain, except the Domain Controllers organizational unit.

Mostly true. If you look at the default permissions on the Users container, for example, you see they definitely have create and delete rights:

clip_image002

It will be similar for your custom OUs, because those OU objects inherit those default permissions from the AD schema upon creation.

clip_image004

If your administrative accounts live in the Users container or a custom OU where you have not modified the default permissions, members of the account operators group can delete those users with impunity. If you want to stop this behavior, place your administrative users in a custom OU where you remove the Account Operators group from the permissions.

Members of this group do not have permission to modify the Administrators or the Domain Admins groups, nor do they have permission to modify the accounts for members of those groups.

True, but sometimes it takes a bit. At first, every user created allows the Account Operators group full control – this comes from the default schema security. They cannot modify administrative users, change their passwords, remove their group memberships, or otherwise manipulate them once AdminSDHolder and SDProp have their way with the account. Moreover, the author did not mean “modification equals deletion”, even though you and I know as IT pros that it “is the ultimate modification”, of a sort. Modifying its existence. :) At no point can Account Operators modify the members of the high security groups like Domain Admins, regardless of SDProp timing. Otherwise an Account Operator could elevate a normal user to the highest privilege levels in the domain.

Members of this group can log on locally to domain controllers in the domain and shut them down.

True (and subsequent <Rodney Dangerfield collar pull>). If you are dead set on using the Account Operators, removing this right (stored in the Default Domain Controllers policy) is probably a good idea. These users can deny service to your entire network, by shutting down every DC at once.

Because this group has significant power in the domain, add users with caution.

True! The Account Operators group is one of those NT 4.0 legacies that date back to an operating system that didn’t have a hierarchical management structure like X.500/LDAP, and instead a big blob of goo called a SAM database. Using this group is not ideal; it has far too many privileges and based on SDProp timing, can have too much power over brand new admin users for brief periods of time. We spent countless millions of dollars creating AD and Delegation of Control so that our customers could abandon the legacy management systems. If the Account Operators group is awesome, why would we have bothered otherwise?

Question

There are a lot of articles that discuss CIFS interoperability, and they refer to LAN Manager Authentication Level, but there are very few that mention the registry parameter AllowLegacySrvCall. What does this setting actually do?

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
Value: AllowLegacySrvCall
Type: REG_DWORD
Default: 0x0

Answer

Maybe you should sit down.

When a client attempts to connect to an SMB server the server generates a challenge and sends it to the client. The idea is that the client manipulates the challenge using its secret knowledge (the password) and sends the result back to the server as the response. The Local Security Authority Subsystem Service (LSASS) on the server evaluates that response and determines if the user has been properly authenticated. This is standard NTLM challenge/response authentication mechanics. With extended security support, LSASS performs some other checks to evaluate if the response has been tampered with, and if it has, the user is denied access. Unfortunately, this introduced a bug that was discovered in Windows Vista and Windows Server 2008. Creating the registry value AllowLegacySrvCall was our way of resolving this bug.

If the client supports extended security, LanManServer goes back to LSASS to generate the challenge to send to the client. If the client does not support extended security for NTLMv2, then LanManServer optimizes by generating its own challenge to the authentication request. Unfortunately, this challenge wasn't created by LSASS so it is missing some information when LSASS later evaluates the response to the challenge. This causes the response to be considered invalid and so authentication fails.

AllowLegacySrvCall enables logic in LSASS such that it can detect that a particular response was created from a challenge generated by LanManServer (as opposed to LSASS). In this case, LSASS will omit the extended security checks on the response. The effect of this setting is that if you have older SMB clients that do not support extended security then your NTLMv2 security is slightly compromised because there is no way to detect tampering of the authentication response on the wire.

So when do you need to enable AllowLegacySrvCall?

  1. You are enforcing NTLMv2 for authentication.
  2. Your SMB client does not support extended security. This usually means older Mac OS X, jcifs, and Samba. Note that NT 4.0 would also be affected here.
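If you decide you need it, setting the value looks like this (a reboot afterward is the safe assumption):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v AllowLegacySrvCall /t REG_DWORD /d 1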

Question

I realize this is an old article but couldn't you get around the USN rollback issue by doing an authoritative restore on the DC you bring back from a snapshot? Don't actually restore a backup but just run NTDSUTIL to make the DC authoritative for all objects. That would push all that DC's USNs up by 100,000 and the objects would replicate out -- hence no USN rollback issue.

Answer

Not quite. Consider the scenario where an object is created or an attribute is set on DC1 after the snapshot is taken. This change propagates out to all replication partners with the originating change designated as being on DC1. Now you restore your snapshot and use NTDSUTIL to mark authoritative all the objects and attributes in the Active Directory on DC1. Those objects and attributes will indeed replicate out, but what about the objects (or attributes) on DC1's partners that actually originated on DC1? Those changes will not propagate back to DC1 because the partner must assume that DC1 is already aware of them, since DC1's invocation ID has not changed.

This is why the invocation ID changes when AD is restored using a supported method. A new invocation ID indicates to all partners that this is essentially a new database with some pre-synchronized data in it and it needs to be updated from all partners. It is not just the USN value itself that impacts the rollback status of a DC, but it is also the invocation ID that distinguishes DC1's restored database from its original database. With the new invocation ID, changes that originated on DC1 after the backup was taken will propagate back to DC1 because partners won't think the changes originated on the now restored DC. Restoring a snapshot does not change the invocation ID, and thus basically breaks AD's ability to properly recover from a restore operation.

Long story short…don't do it.

If you have further questions, I recommend Rich Peckham's blog post on the topic of authoritative restores.

Question

I have read the W32Time documentation and blogs but I do not understand one thing. What is the difference in flags 0x1 and 0x8 in the registry parameter below:

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters
Value: NtpServer
Type: REG_SZ
Example: time.windows.com,0x8

Answer

The flags value for NtpServer are briefly documented in the following KB article: Time synchronization may not succeed when you try to synchronize with a non-Windows NTP server in Windows Server 2003.

0x01 - use special poll interval SpecialInterval
0x02 – UseAsFallbackOnly
0x04 - send request as SymmetricActive mode
0x08 - send request as Client mode

You can find more detail about how these flags interact in the Microsoft Communications Protocol Program (MCPP) library on MSDN: [MS-SNTP]: Network Time Protocol (NTP) Authentication Extensions.
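As an aside, you normally set this value with w32tm rather than editing the registry directly; for example, to point at a manual peer using client mode:

w32tm /config /manualpeerlist:"time.windows.com,0x8" /syncfromflags:manual /update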

If you need to understand the difference between Symmetric Active mode and Client mode, then you should consult RFC 5905. It’ll put hair on your chest.

Other Stuff

Is this the greatest metal album cover of all time? I think so.


Plow tank with gatling guns on a skull road with hardcore driver demanding answers from an uncaring corporate world? Check.

Do you have a set of wheels appropriate for the Zombie Apocalypse? Why not skip the Marauder and buy American?

And here is some stuff that makes me wish I still had a dorm room to decorate.

Until next time, folks.

- Jonathan “Average Height and Build” Stephens with Ned “Not” Pyle.

Congrats Sean and Mark, the Newest Masters!


Hey all, Ned here again. You probably know our pals Sean Ivey and Mark Renoden from their AskDS blog contributions. Both of them were once Directory Services Support Engineers and are now Premier Field Engineers, traveling the globe to help solve your problems. Much like the A-Team. Or not.

Anyway, what you probably don't know is that yesterday they joined the elite fraternity of Microsoft Certified Masters along with nine of their new best friends. Having taught that certification since day 0, I can tell you it is a royal gentleman's fruit buster and to get it takes serious dedication and serious smarts; heck, after five years and fourteen rotations, MCM DS only finally crossed the 100 graduate mark! If you haven't explored the certification that will set you apart from everyone in the IT industry, I suggest you start. Make sure you bank some sleep first though, and don't forget to ask Ryan about the Banana Crown.

We're awful proud of our former DS support brothers. Congratulations fellas.

- Ned "and all your old buddies" Pyle

 

Windows Server “8” Beta announcements, availability (updated)


Hi all, Ned here. For those who spent the day in a coma, Windows Server “8” Beta and Windows 8 CP are out. Make sure you start by visiting Bill Laing’s announcement on the Windows Server Blog. This morning he formally announced the availability of Windows Server “8” Beta and outlined some of the design philosophies in a brief post.

Next, we have a new kind of document we call the “Understand and Troubleshoot” guides, which are designed to explain the inner workings of new features and how to troubleshoot them. You may recognize some of the authors (you know I hate link lists, but in this case I’ll make an exception).

There are also “Test Lab Guides” and TechNet docs that introduce and demonstrate features, as well as assist with deployment.

And a reminder - send all your IT Pro feedback to the links below. People are definitely listening.

I know some of you are looking forward to the typical in-depth and honest AskDS beta content you’ve read for the past five years - you’re IT professionals and chomping at the bit to start learning about all the new enterprise features. Well, we’re still muzzled here and not allowed to discuss anything. Hang in there; I’m hopeful it won’t be too much longer.

- Ned “the gimp” Pyle

Unresponsive Servers due to DST and an unsupported registry key


Hi, David here to tell you about a thorny little problem that a few of our customers have run into during their testing for the upcoming Daylight Saving Time changes. For reference, the US enters DST this weekend, and parts of Europe enter DST on March 25th. (For a list of all the various Daylight Saving Time changes, click here)

What you need to know

If you have the following registry key implemented on any Windows systems, and your system clock is running faster than your CMOS clock, that computer will become unresponsive at the DST change. This unresponsiveness will persist until the CMOS clock catches up with the DST changeover time. For example, if the CMOS clock is set to 3/11/2012 6:55 AM UTC and the OS time is set to 3/11/2012 1:59 AM EST, when the system clock reaches 2:00 AM EST, the CPU will spike to 100%, and will remain pegged for 4 minutes until the CMOS clock reaches 7:00 AM UTC.

Key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\TimeZoneInformation
Value: RealTimeIsUniversal
Type: REG_DWORD
Data: 0x1 (default: 0x0)

We recommend the following steps:

1. Don’t use the undocumented and unsupported RealTimeIsUniversal registry key! If you have it set, delete it and reboot that computer. Make sure it doesn’t return via automation, like Startup Scripts or Group Policy Preferences.

2. Check CMOS clocks on your systems and make sure that they are set to the correct time (yes, we know this requires a reboot).

See this KB article:

2687252 - System may be unresponsive around Daylight Saving Time (DST) change when RealTimeIsUniversal is Set

http://support.microsoft.com/default.aspx?scid=kb;EN-US;2687252

David “What’s a TARDIS?” Beach

The yuck that is "PC Recycle Day" at Microsoft


Hey all, Ned here again. Still no ETA on Win8 word, and we've already discussed everything else on Earth ( ;-P ) so now I will share with you some insider knowledge of working in Microsoft Charlotte: the quarterly "PC Recycle Day". Here's an example of what I just saw on my way to get some coffee.

A couple of these are fairly hard to identify unless you are as old as Jonathan. Take a stab at them in the Comments, if you dare to date yourself. If you've used them all, give yourself a pat on the back - you are really close to retirement.

Update: Woo, a particularly crusty late arrival from the Networking team! They may upset the perennial Setup team favorites here and win it all this year, folks.


Have a nice weekend,


- Ned "spring chicken" Pyle


Gimme Some Sugar


Hi all, Ned here again. Like Bruce Campbell, we’ve been away for a while, but you can always count on us to return for the sequel. Some of the Windows Server “8” Beta blogging rules have been relaxed and we’re ready to begin firing our boomstick. Look for the first one here in a few minutes.

Besides that, I’ve had plenty of inspiration in the past month from some of your questions and have some other non-8 posts in the quench tub that should be ready to go out soon; I’m thinking new USMT tricks, WMI filtering coolness, AD forest recovery gotchas, and some others. I might even find time for a Friday Mail Sack next week, who knows?

image
It’s a dirty job here, but someone has to get the backend of the pony.

Enough with metaphor mixing – on to the goods. The next post is a doozy: group policy management changes in Windows Server “8” Beta.

- Ned “Honey, you got reeeal ugly” Pyle

Group Policy Management Improvements in Windows Server "8" Beta


Hi all, Ned here again. If you've been supporting group policy for years, you’ve grown used to its behaviors. For something designed to manage an enterprise, its initial implementation wasn’t easy to manage itself. The Group Policy Management Console improved this greatly after Windows Server 2003, but there was room for enhancement.

Windows Server "8" Beta introduces a number of interesting Group Policy management changes to advance things. These include detecting overall replication consistency as well as remote policy refresh and easier resultant set of policy troubleshooting. Windows 8 Consumer Preview benefits from some of these changes as well.

Let's dig in.

Infrastructure Status

Once upon a time, someone wrote a Windows 2000 resource kit utility called gpotool.exe (no longer supported). It was supposed to tell you if the SYSVOL and AD portions of a group policy were synchronized on a given domain controller and between DCs in a domain. If it returned message "Policies OK", you were supposed to be golden.

Unfortunately, gpotool is not very bright or honest, which is why we do not recommend customers use it. It only checks the gpt.ini files in SYSVOL. Anyone who manages group policy knows that each GP GUID folder in SYSVOL contains many files critical to applying group policy. The gpt.ini existing is immaterial if the registry.pol does not exist or is some heinous stale version. Furthermore, gpotool bases everything on the gpt.ini version matching between AD and SYSVOL, alerting you if they don't. Except that version matching alone has not mattered since Windows 2000, and file consistency checking is super important.

Enter Windows Server "8" Beta. When you fire up GPMC from a server or RSAT, then navigate to a domain node, you now see a new Status tab (more properly called the Group Policy Infrastructure Status tool). GPMC sets the DC it connected to as a baseline source of comparison. By default, that would be the PDC emulator, which GPMC tries to connect to first.

image

If you click Detect Now, the computer running GPMC directly reaches out to all the domain controllers in that domain using the LDAP and SMB protocols. It compares all the SYSVOL group policy file hashes, file counts, ACLs, and GPT versions against the baseline server. It also checks each DC's AD group policy object count, versions, and ACLs against the baseline. If everything is copacetic, you get the good news right there in the UI.

image

If it's not, you don't:

image

Note how the report renders above. If the Active Directory and SYSVOL columns are blank, the versions match between the gpt.ini and AD, which means that the file hashes or security are out of sync (an indication of latency at the least); otherwise, you will see version messages. If the FRS or DFSR service isn't running on a DC other than the baseline, or SYSVOL is not shared, the SysVol message changes to Inaccessible. If you turn off a DC or the NTDS service, the Active Directory field changes to Inaccessible. If you just deleted or added a group policy, the Active Directory field changes to Number of GPOs for comparison. It's all straightforward.

This new tool doesn’t grant permission to turn off your brain, of course. It's perfectly normal for AD and SYSVOL to be latent and out of sync between DCs for periods of time. Don't assume that servers showing replication in progress indicate an error - that's why it specifically doesn't say “error” in GPMC. Finally, keep in mind that this new functionality in the public Beta is naturally a bit unstable; feel free to report issues in the Windows Server 8 Beta Forums along with detailed repro steps, and we can chat about whether your issue is already known. For example, stopping the DFSR service on the PDCE and then clicking Detect Now to use that DC as the baseline terminates the MMC. Don’t take it too hard - work in progress, right? We'd love your feedback.

Moving right along…

Remote Policy Refresh

You can now use GPMC to target an OU and force group policy refresh on all of its computers and their currently logged on users. Simply right click any organizational unit and click Group Policy Update. The update occurs within 10 minutes (randomized on each targeted computer) in order to prevent crushing some poor DC in a branch office.

image

image

image

Windows Server "8" Beta Group Policy also updates the GroupPolicy PowerShell module to include a new cmdlet named Invoke-GpUpdate. If you examine its help, you see that it is very much like the classic gpupdate.exe. If you -force using invoke-gpupdate, you do the same as /force in gpupdate.exe, for instance.

NAME

Invoke-GPUpdate

SYNTAX

Invoke-GPUpdate [[-Computer] <string>] [[-RandomDelayInMinutes] <int>] [-AsJob] [-Boot] [-Force] [-LogOff] [-Target <string>] [<CommonParameters>]

Obviously, this cmdlet gives you much more control over the remote policy refresh process than GPMC. For instance, you can target a particular computer:

Invoke-gpupdate -computer <some computer>

Moreover, unlike the "within 10 minutes" pseudo-random behavior of GPMC, you can make the policy refresh happen right now and force group policy to update regardless of version changes. I don't know about you, but if I am interactively invoking a policy update for a given computer, I am not interested in waiting!

image

Since this is PowerShell, you have a great deal of flexibility compared to a purpose-built graphical or command-line tool. For example, you can get a list of computers with an arbitrary description, then invoke against each one using a pipeline to foreach-object, regardless of OU:

image
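That sort of pipeline looks roughly like this - the description value is an arbitrary example of mine, not a magic string:

get-adcomputer -filter 'description -like "branch kiosk*"' | foreach-object { invoke-gpupdate -computer $_.name -randomdelayinminutes 0 }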

If you’re interested, this tool works by creating remote scheduled tasks. That's how it works for logged on users and with randomized refresh times. Another good reason to ensure the Task Scheduler service is running.

image

New RSOP Logging Data

I saved the best for last. The group policy resultant set of policy (RSOP) logs include a number of changes designed to make troubleshooting and policy analysis easier. Just like in the last few versions of Windows, you can still use GPMC Group Policy Results or GPRESULT /H to gather an html log file showing how and what policy applied to a user and computer.

When you open that resulting html file, you now see an updated Summary section that provides better "at a glance" information on whether policy applied and the network speed detected. Even better is the new Component Status area. This shows you the time each element of group policy processing took to complete.

image

It also stores the associated operational event log activity under View Log that used to require you running gplogview.exe. Rather than parsing the event log with an Activity ID for the computer and user portions of policy processing, you just click the link to see it all unfold before you.

image

Finally, there is a change to the HTML result file for the applied policies. After 12 years, we’ve reached a point where there are thousands of individual Administrative template entries; far more than anyone could possibly remember or reliably discern from their titles. To make this easier, the Windows 8 version of the report now includes explanatory hotlinks to each of those policy entries.

image

By clicking the links in the report, you get the full Explanation text included with that policy entry. Like in this case, the new Primary Computer policy for roaming profiles (which I’ll discuss in a future post).

image

Nifty.

Key Point

Remote RSOP logging and Group Policy refresh require that you open firewall ports on the targeted computers. This means allowing inbound communication for RPC, WMI/DCOM, event logs, and scheduled tasks. You can enable the built-in Windows Advanced Firewall inbound rules:

  • Remote Policy Update
    • Remote Scheduled Tasks Management (RPC)
    • Remote Scheduled Tasks Management (RPC-EPMAP)
    • Windows Management Instrumentation (WMI-in)
  • Remote Policy Logging
    • Remote Event Log Management (NP-in)
    • Remote Event Log Management (RPC)
    • Remote Event Log Management (RPC-EPMAP)
    • Windows Management Instrumentation (WMI-in)

These are part of the “Remote Scheduled Tasks Management”, “Remote Event Log Management”, and “Windows Management Instrumentation” groups. These are TCP RPC port 135, named pipe port 445, and the dynamic ports associated with the endpoint mapper, like always.
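If you want to script enabling those on the targets, netsh can flip whole rule groups - the group names here assume the English-language rules listed above:

netsh advfirewall firewall set rule group="Remote Scheduled Tasks Management" new enable=yes
netsh advfirewall firewall set rule group="Remote Event Log Management" new enable=yes
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes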

Feedback and Beta Reminder

The place to send issues is the IT Pro TechNet forums. That engages everyone from our side through our main conduits and makes your feedback noticeable. Not all developers are readers of this blog, naturally.

Furthermore, remember that this article references a pre-release product. Microsoft does not support Windows 8 Consumer Preview or Windows Server "8" Beta in production environments unless you have a special agreement with Microsoft. Read that EULA you accepted when installing!

Until next time,

Ned “I used a fancy arrow!” Pyle


Your 24 Month XP Warning


Hi all, Ned here again with a public service announcement:

On April 8th 2014, Windows XP support ends

For the temporally challenged, that’s exactly two years from today. Hopefully, some of you don’t care because you’ve already gotten off XP. After all, Windows 7 now has a 41% share of Windows desktop installations according to NetMarketShare.com. Here’s their March 2012 take:

image

What that number also means though is that roughly 51% of the remaining desktops are still on XP. Hundreds of millions of computers that, two years from today, will stop getting security updates and lose support from third party software vendors.

If you have not started migrating your Windows XP environment to Windows 7 and begun evaluating Windows 8 Consumer Preview, you are probably late. According to our own customer deployment data, enterprise desktop replacement projects average 18-32 months. As someone who writes a lot about USMT, I can say that a customized PC migration undertaking is no joke. There are loads of moving parts in mass PC replacements and every company is different, even within the common areas of desktop, mobile, and work-from-home machines. If you’re prudent, you’ll spend months planning and testing before you get anywhere near your first end user. That means if you’re a company with 50,000 XP desktops, you’ll have to average around 2,100 desktops migrated a month before support ends. If you use the more realistic assumption of 250 working days in a year, you must average 100 migrated computers per working day, starting this minute.

The fiscal year is drawing to a close and the 24 month clock is running. Do you know where your XP clients are?

Until next time,

- Ned “like the Cubs, it’s a rebuilding year” Pyle

 

PS: Oh, and Vista mainstream support ended April 10th (today, as I wrote this). That means now it only gets security updates for the next 5 years, no further QFEs or service packs.

Like you care.

New USMT 5.0 Features for Windows 8 Consumer Preview


Hi all, Ned here again. Frequent readers know that I’ve written many times about the User State Migration Tool; it’s surprising to some, but the Directory Services team owns supporting this tool within Microsoft. With Windows 8 Consumer Preview, we released the new tongue twisting Windows Assessment and Deployment Kit for Windows 8 Consumer Preview (Windows ADK), which replaces the old WAIK and contains the updated User State Migration Tool 5.0 (binary version 6.2.8250). The new tool brings a long sought capability to the toolset: corrupt store detection and extraction. There are also various incremental supportability improvements and bug fixes.

Store verification and recovery

USMT 4.0 introduced usmtutils.exe, a simple command line tool that was mainly used to delete hardlink folders in use by some application and no longer removable through normal measures. The new usmtutils.exe now includes two new command-line arguments:

/verify[:reportType] <filePath> [/l:logFile] [/decrypt[:<AlgID>]] [/key:keyString] [/keyfile:fileName]

/extract <filePath> <destinationPath> [/i:<includePattern>] [/e:<excludePattern>] [/l:logFile] [/decrypt[:<AlgID>]] [/key:keyString | /keyfile:fileName] [/o]

You use the /verify option after gathering a scanstate compressed store. This checks the store file’s consistency and whether it contains corrupted files or a corrupted catalog. It’s just a reporting tool, and it has options for the verbosity of the report as well as the optional encryption key info used to secure a compressed store. In Microsoft's experience, hardware issues typically cause corrupt compressed stores, especially when errors are not reported back from USB devices.

image

You use the /extract option if you want to simply restore certain files, or cannot restore a compressed store with loadstate. For example, you’d use it if the store was later partially corrupted after validation, if loadstate cannot operate normally on a destination computer, or if a user deleted a file shortly after loadstate restoration but before their own backups were run. This new capability can restore files based on patterns (both include and exclude). It doesn’t restore setting or registry data, just files.

image
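Roughly, usage looks like this; the paths and the .pst pattern are my own placeholders, so check usmtutils /? for the exact pattern syntax:

usmtutils /verify D:\mig\USMT\USMT.MIG /l:verify.log
usmtutils /extract D:\mig\USMT\USMT.MIG C:\recovered /i:*.pst /l:extract.log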

Changes in capabilities

USMT also now includes a number of other less sexy - but still important - changes. Here are the high points:

  • Warnings and logging – Scanstate and loadstate now warn you at the console with "…manifests is not present" if they cannot find the replacement and downlevel manifest folders:

image

USMT also warns about the risks of using the /C option (rather than /VSC combined with ensuring applications are not locking files), and how many units were not migrated:

image

Remember: you cannot use /vsc with /hardlink migrations. Either you continue to use /C or you figure out why files are in use and stop the underlying issue.

To that point, the log contains line items for each /C skipped file as well as a summary error report at the bottom:

----------------------------------- USMT ERROR SUMMARY -----------------------------------
* One or more errors were encountered in migration (ordered by first occurence)
+-----------------------------------------------------------------------------------------
| Error Code | Caused Abort | Recurrence | First Occurrence
| 33         | No           | 18         | Read error 33 for D:\foo [bar.pst]. Windows error 33 description: The process cannot access the file because another process has locked a portion of the file.[gle=0x00000012]
+-----------------------------------------------------------------------------------------
18 migration errors would have been fatal if not for /c. See the log for more information

  • Profile scalability – USMT 4.0 can fail to migrate if there are too many profiles and not enough memory. It takes a perfect storm but it’s possible and you would see error: “Close programs to prevent information loss. Your computer is low on memory” during loadstate. USMT 5.0 now honors an environmental variable of:

    MIG_CATALOG_PRESERVE_MEMORY=1

When set, loadstate trims its memory usage much more aggressively. The consequence of this is slower restoration, so don’t use this switch willy-nilly (there’s a quick sketch of its use after this list).

  • Built-in Variables - USMT now supports all of the KNOWNFOLDERID types now. Previously some (such as FOLDERID_Links) were not and required some hacking.

  • Command-line switches – the legacy /ALL switch was removed. The ALL argument was implicit and therefore pointless; it mainly caused issues when people tried to combine it with other arguments. 

  • /SF Works - the undocumented /SF switch that used to break things no longer breaks things. 
     
  • Scanstate Administrator requirements – Previously, loadstate required your membership in the Administrators group, but bizarrely, scanstate did not. This was pointless and confusing, as migration does not work correctly without administrative rights. Now they both require it.

  • "Bad" data handling - Certain unexpected file data formats used to lead to errors like "Windows error 4317 description: The operation identifier is not valid". Files with certain strings in alternate data streams would fail with "Windows error 31 description: A device attached to the system is not functioning". USMT handles these scenarios now.

  • NTUSER.DAT load handling - The NTUSER.DAT last modified date no longer changes after you run scanstate, meaning that /UEL now works correctly with repeated migrations.

  • Manifests and UNC paths - Previously, USMT failed to find its manifest folders if you ran scanstate or loadstate through a UNC path. Now it looks in the same folder as the running executable, regardless of that path's form.

  • Orphaned profiles - When USMT cannot load a user profile as described here, it tries 19 more times (waiting 6 seconds between tries) just like USMT 4.0. However, USMT skips any subsequent profiles that fail to load after one attempt. Therefore, no matter how many incorrectly removed profile entries exist, the most delay you can see is 2 minutes.

  • UEL and UE - In USMT 4.0, a /UEL exclusion rule would override the processing of a /UE exclusion rule, even though it was likely that if you were setting UE because you had specific need. USMT now returns to the USMT 3.01 behavior of UE overriding UEL.
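As promised in the profile scalability item above, here is a minimal sketch of a restore batch file using the memory-trimming variable - the store path and XML file names are placeholders:

set MIG_CATALOG_PRESERVE_MEMORY=1
loadstate.exe D:\store /i:migdocs.xml /i:migapp.xml /c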

USMT 5.0 still works with Windows XP through Windows 7, and adds Windows 8 x86 and AMD64 support as well. All of the old rules around CPU architecture and application migration are unchanged in the beta version (USMT 6.2.8250).

Feedback and Reminder about the Windows 8 Consumer Preview

The place to send issues is the IT Pro TechNet forums. That engages everyone from our side through our main conduits and makes your feedback noticeable. Not all developers are readers of this blog, naturally.

Furthermore, Windows 8 Consumer Preview is a pre-release product and is not officially supported by Microsoft. In general, it is not recommended pre-release products be used in production environments. For more information on the Windows 8 Consumer Preview, read this blog post from the Windows Experience Blog.

Until next time,

Ned “there are lots of new manifests too, but I just couldn’t be bothered” Pyle

Saturday Mail Sack: Because it turns out, Friday night was alright for fighting


Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

Today we answer your questions around:

One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don’t spend cycles answering something you already figured out later, but if you’re still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of south pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:

clip_image002

On to the goods!

Question

Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized server guests.

Answer

Totally possible for Hyper-V virtual machines: You can use the WMI class Win32_ComputerSystem with a property of Model like “Virtual Machine” and property Manufacturer of “Microsoft Corporation”. You can also use class Win32_BaseBoard for the Product property, which will be “Virtual Machine” and property Manufacturer that will be “Microsoft Corporation”.

image

Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to see, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMWare, Citrix Xen, KVM, Oracle Virtual Box, etc. you’ll have to see what shows for Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!
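In WQL terms, a Hyper-V guest filter would look something like this single query:

SELECT * FROM Win32_ComputerSystem WHERE Model = "Virtual Machine" AND Manufacturer = "Microsoft Corporation"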

Which reminds me - Tad is back:

image

Question

The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

But when I run Install-ADDSDomainController -DomainName corp.contoso.com -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

If Microsoft recommends that all domain controllers provide DNS, why do I need to specify -InstallDNS argument?

Answer

The output of DNS Server: No is a cosmetic issue with the output of -whatif. It should say YES, but doesn't unless you specifically use the $true parameter. You don't have to specify -installdns; the cmdlet will always automatically install DNS server unless you specify -installdns:$false.

Question

How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?

Answer

The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

1. Create a text file that contains all your DCs in the forest, in a line-separated list:

2008r2-01
2008r2-02

2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled yes -s %i

For instance:

image

You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

For instance:

image

If you weren't strictly opposed to AD replication (short circuiting it like this isn't going to stop eventual replication traffic) you can always disable the user on one DC and then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta Sync-ADObject cmdlet.

image
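Here is a minimal sketch of the Sync-ADObject approach, assuming Windows Server "8" Beta cmdlets; the DC name and user DN are hypothetical:

# Disable on one DC, then push just that object to every other DC
$dn = "CN=Some User,OU=Staff,DC=corp,DC=contoso,DC=com"
Disable-ADAccount -Identity $dn -Server "DC01.corp.contoso.com"
Get-ADDomainController -Filter * |
    Where-Object { $_.HostName -ne "DC01.corp.contoso.com" } |
    ForEach-Object { Sync-ADObject -Object $dn -Source "DC01.corp.contoso.com" -Destination $_.HostName }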

 The Internet also has many further thoughts on this. It's a very opinionated place.

Question

We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?

Answer

Not the way you are doing it. DFSR has to replicate changes and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" - where you modify perms on lower level folders first - in some sort of staged fashion to minimize the amount of replication that has to occur, but it just sounds like a recipe to get things wrong or end up replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and next time, treat this as a lesson learned and plan your security design better so that all of your user base fits into the model using groups.

However…

It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk. It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.

image

Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.

Question

[Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:

image

It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.

Answer

The enumeration contexts do have a finite hardcoded lifetime, and you will get an error if they expire. You might see this error when executing searches that grovel a huge quantity of data using few indexed attributes and return a small result set. If the query hits a DC that is not very busy, it runs faster and may have enough time to complete for a big dataset like this one. Server hardware is also a factor here. You can also try starting the search at a deeper level, or tweak the attribute indexes - although obviously not in this case.
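The original command lives in the screenshot above, but here is one hedged shape such a query can take - filter server-side on an indexed attribute, request only the properties you need, and do the date comparison client-side. The attribute values and file names here are illustrative:

# Filter server-side on OperatingSystem, compare the timestamp client-side
$cutoff = (Get-Date "2/15/2012").ToFileTimeUtc()
Get-ADComputer -Filter 'OperatingSystem -like "Windows 8*"' `
    -Properties OperatingSystem, lastLogonTimestamp -ResultPageSize 256 |
    Where-Object { $_.lastLogonTimestamp -gt $cutoff } |
    Export-Csv .\win8-active.csv -NoTypeInformation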

[For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]

Question

Is there any chance that DFSR could lock a file while it is replicating outbound and prevent users from accessing their data?

Answer

DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

For a file under 64KB, DFSR simply replicates it without staging, and the operation of making a copy and handing it to RPC is so fast that there's no reasonable window for anyone to ever see an issue. I certainly have never seen it, and I should have by now after six years.

Question

Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?

Answer

Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's just a by design limitation of USMT and not a bug or anything. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).

Question

One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

Windows Server 2008

C:\ProgramData\Microsoft\Crypto\DSS\MachineKeys

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

Windows Server 2003

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

1. Can we remove the "Everyone" group and give permissions to another group like - Authenticated users for example?

2. Will replacing that default cause issues?

3. Why is this set like this by default?

Answer

[Courtesy of:

image

]

These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

1. Swapping Everyone out for Authenticated Users probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow are the ability to create a file, read and write data. The permissions do not allow any process except System to launch any executable code.
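If you want to see those defaults for yourself without changing anything, a read-only look is safe enough; a quick sketch with PowerShell:

# Inspect (don't modify!) the folder-only ACEs on MachineKeys
Get-Acl "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys" |
    Select-Object -ExpandProperty Access |
    Format-Table IdentityReference, FileSystemRights, InheritanceFlags, PropagationFlags -AutoSize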

Question

If I set a USMT 4.0 config.xml child node to prevent migration, I still see those settings migrate. But if I set the parent node instead, the settings do not migrate - at the cost of blocking every child node, which I do not want.

For example, on XP the Dot3Svc service is set to Manual startup.  On Win7, I want the Dot3Svc service set to Automatic startup.  If I use this config.xml on the loadstate, the service is set to manual like the XP machine and my "no" setting is ignored:

<component displayname="Networking Connections" migrate="yes" ID="network_and_internet\networking_connections">

  <component displayname="Microsoft-Windows-Wlansvc" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-VWiFi" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-RasConnectionManager" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-RasApi" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-PeerToPeerCollab" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-Native-80211" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-MPR" migrate="yes" ID="<snip>"/>

  <component displayname="Microsoft-Windows-Dot3svc" migrate="no" ID="<snip>"/>

</component>

Answer

Two different configurations can cause this symptom:

1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

It's a best practice to use a config.xml with scanstate, as described in http://support.microsoft.com/kb/2481190, when going from x86 to x64; otherwise, you end up with damaged COM settings. Beyond that, you only need to generate per-OS config.xml files if you plan to change default behavior. All the manifests run by default if there is a config.xml with no modifications, or no config.xml at all.

Besides being required for XP to block settings, you should also definitely lean towards using config.xml on the scanstate rather than the loadstate. If going from Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I'd still recommend sticking with the scanstate; it's typically better to block settings from entering the store in the first place, as the migration will be faster and leaner.
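As a rough sketch of that workflow (paths and store location are examples): generate the config.xml with the USMT binaries that match the source OS, flip the components you want to block to migrate="no", and then feed it to scanstate:

# Run on the source computer with the USMT 4.0 binaries that match the source OS
.\scanstate.exe /genconfig:config.xml /i:migapp.xml /i:migdocs.xml
# After setting migrate="no" where needed in config.xml, capture the store with it
.\scanstate.exe \\server\migstore /config:config.xml /i:migapp.xml /i:migdocs.xml /o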

Other Stuff

[Many courtesy of our pal Mark Morowczynski -Ned]

Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

Speaking of Chicago…

All the new MCSE and certification news reminded me of the other side to that coin.

Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.

image

If you call yourself an IT Pro, you owe it to yourself to visit moviecarposters.com right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

clip_image002[6] clip_image004 clip_image006
Sigh - there is never going to be another Firefly

And finally…

I started reading Terry Pratchett again, picking up from where I left off as a kid. Hooked again. Damn you English writers, with your understated awesomeness!

Ok, maybe not all English Writers…

image

Until next time,

- Ned "Jonathan is seriously going to kill me" Pyle

Exclusive! Shocking New Windows Names Revealed!!!


Ok, that might have been a slightly inflammatory and somewhat misleading title.

  • Windows 8 is now officially called... Windows 8. The full set of edition names are Windows 8, Windows 8 Pro, Windows RT, and Windows 8 Enterprise. Brandon LeBlanc has the full breakout.
  • Windows Server "8" is now officially called... Windows Server 2012. You can read more about the strategy from Brad Anderson here. Editions to follow at a later time.

That server name also tells you two things: One, if you had bet against that name in the office pool, you are a born loser. Two, that we may make radical changes in OS capabilities, but when it comes to server branding, we are more conservative than a prom chaperon. Who is also a nun. And voted libertarian. In Switzerland.

Back to work, you!

 - Ned "Ned Pyle" Pyle

How to NOT Use Win32_Product in Group Policy Filtering


Hi all, Ned here again. I have worked many slow boot and slow logon cases over my career. The Directory Services support team here at Microsoft owns a sizable portion of those operations - user credentials, user profiles, logon and startup scripts, and of course, group policy processing. If I had to pick the component customers point the finger at first, it's GP. Perhaps it's because group policy is the least well-understood part of the process, or maybe because it's the one with the most administrative fingers in the pie. When it comes down to reality though, group policy is more often not the culprit. Our new changes in Windows 8 will help you make that determination much quicker now.

Today I am going to talk about one of those times that GPO is the villain. Well, sort of... he's at least an enabler. More appropriately, the optional WMI Filtering portion of group policy using the Win32_Product class. Win32_Product has been around for many years and is both an inventory and administrative tool. It allows you to see all the installed MSI packages on a computer, install new ones, reinstall them, remove them, and configure them. When used correctly, it's a valuable option for scripters and Windows PowerShell junkies.

Unfortunately, Win32_Product also has some unpleasant behaviors. It uses a provider DLL that validates the consistency of every installed MSI package on the computer - or off of it, if using a remote administrative install point. That makes it very, very slow.

Where people usually trip up is group policy WMI filters. Perhaps the customer wants to apply managed Internet Explorer policy based on the IE version. Maybe they want to set AppLocker or Software Restriction policies only if the client has a certain program installed. Perhaps even use - yuck - Software Installation policy in a more controlled fashion.

Today I'll talk about some different options. Mike didn't write this, but he had some good thoughts when we talked about it offline, so he gets some credit here too. A little bit. Tiny amount, really. Hardly worth mentioning.

If you have no idea what group policy WMI filters are, start here:

Back? Great, let's get to it.

Don’t use Win32_Product

The Win32_Product WMI class is part of the CIMV2 namespace and implements the MSI provider (msiprov.dll and associated msi.mof) to list and validate installed MSI packages. You will see MsiInstaller event 1035 in the Application log for each application queried by the class:

Source: MsiInstaller
Event ID: 1035
Description:
Windows Installer reconfigured the product. Product Name: <ProductName>. Product Version: <VersionNumber>. Product Language: <languageID>. Reconfiguration success or error status: 0.

And constantly repeated System events:

Event Source: Service Control Manager
Event ID: 7035
Description: The Windows Installer service was successfully sent a start control.

Event Type: Information
Event Source: Service Control Manager
Event ID: 7036
Description: The Windows Installer service entered the running state.

That validation piece is the real speed killer. So much, in fact, that it can lead to group policy processing taking many extra minutes in Windows XP when you use this class in a WMI filter - or even cause processing to time out and fail altogether. This is even more likely when:

  • The client contains many installed applications
  • Installation packages are sourced from remote file servers
  • Installation packages use certificate validation and the user cannot access the certificate revocation list for the package
  • Your client hardware is… crusty.

Furthermore, Windows Vista and later Windows versions cap WMI filter execution time at 30 seconds; if a filter fails to complete by then, it is treated as FALSE. On those OS versions, it will often appear that Win32_Product simply doesn't work at all.

image

What are your alternatives?

Group Policy Preferences, maybe

Depending on what you are trying to accomplish, Group Policy Preferences could be the solution. GPP includes item-level targeting that offers fast, efficient filtering on just about any criteria you can imagine. If you are trying to set computer-based settings that a user cannot change, and don't mind preferences instead of managed policy settings, GPP is the way to go. As with all software, make sure you evaluate our latest patches to ensure it works as desired. As of this writing, those are:

For instance, let's say you have a plotting printer that Marketing cannot correctly use without special Contoso client software. Rather than using managed computer policy to control client printer installation and settings, you can use GPP Registry or Printer settings to modify the values needed.

image

Then you can use Item Level Targeting to control the installation based on the specialty software's presence and version.

image

image

Alternatively, you can use the registry and file system for your criteria, which works even if the software doesn't install via MSI packages:

image

An alternative to Win32_Product

But what do you do if you really, really need a WMI filter that detects installed MSI product names and versions? If you look around the Internet, you will find a couple of older proposed solutions that - to be frank - will not work for most customers.

  1. Use the Win32reg_AddRemovePrograms class instead.
  2. Use a custom class (like described here and frequently copied/pasted on the Interwebz).

The Win32reg_AddRemovePrograms class is not present on most client systems though; it is a legacy class, first delivered by the old SMS 2003 management WMI system. I suspect one of the reasons the System Center folks discarded it years ago in their own native inventory system is the same reason the custom class in #2 doesn't work: it doesn't return 32-bit software installed on 64-bit computers. The class has not been updated since its initial release 10 years ago.

#2 had the right idea though, at least as a valid customer workaround to avoid using Win32_Product: by creating your own WMI class that uses the generic registry provider to examine just the MSI uninstall registry keys, you get a fast, simple query that reasonably detects installed software. Armed with the "how", you can extend this to any kind of registry query you need, without risk of tanking group policy processing. To do this, you just need notepad.exe and a little understanding of WMI.

Roll Your Own Class

Windows Management Instrumentation uses Managed Object Format (MOF) files to describe the Common Information Model (CIM) classes. You can create your own MOF files and compile them into the CIM repository using a simple command-line tool called mofcomp.exe.

You need to be careful here. This means that once you write your MOF you should validate it by using the mofcomp.exe -check argument on your standard client and server images. It also means that you should test this on those same machines using the -class:createonly argument (and not setting the -autorecover argument or #PRAGMA AUTORECOVER pre-processor) to ensure it doesn't already exist. The last thing you want to do is break some other class.
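A sketch of those two validation steps at an elevated prompt, using the file name from the sample below:

# Syntax-check only; does not load anything into the repository
mofcomp.exe -check sampleproductslist.mof
# Compile, but fail rather than touch classes that already exist
mofcomp.exe -class:createonly sampleproductslist.mof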

When done testing, you're ready to give it a go. Here is a sample MOF, wrapped for readability. Note the ClassContext and propertycontext sections, which describe what the MOF examines and what the group policy WMI filter can use as query criteria. Unlike the oft-copied sample, this one understands both the normal native-architecture registry path and the Wow6432node path that covers 32-bit applications installed on a 64-bit system.

Start copy below =======>

// "AS-IS" sample MOF file for returning the two uninstall registry subkeys

// Unsupported, provided purely as a sample

// Requires compilation. Example: mofcomp.exe sampleproductslist.mof

// Implements sample classes: "SampleProductList" and "SampleProductlist32"

//   (for 64-bit systems with 32-bit software)

 

#PRAGMA AUTORECOVER

 

[dynamic, provider("RegProv"),

ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"),ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]

class SampleProductsList {

[key] string KeyName;

[read, propertycontext("DisplayName")] string DisplayName;

[read, propertycontext("DisplayVersion")] string DisplayVersion;

};

 

[dynamic, provider("RegProv"),

ProviderClsid("{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"),ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432node\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]

class SampleProductsList32 {

[key] string KeyName;

[read, propertycontext("DisplayName")] string DisplayName;

[read, propertycontext("DisplayVersion")] string DisplayVersion;

};

<======= End copy above

Examining this should also give you interesting ideas about other registry-to-WMI possibilities, I imagine.

Test Your Sample

Copy this sample to a text file named with a MOF extension, store it in the %systemroot%\system32\wbem folder on a test machine, and then compile it from an administrator-elevated CMD prompt using mofcomp.exe filename. For example:

image

To test if the sample is working you can use WMIC.EXE to list the installed MSI packages. For example, here I am on a Windows 7 x64 computer with Office 2010 installed; that suite contains both 64 and 32-bit software so I can use both of my custom classes to list out all the installed software:

image
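In text form, the two queries look like this:

wmic /namespace:\\root\default path SampleProductsList get DisplayName,DisplayVersion
wmic /namespace:\\root\default path SampleProductsList32 get DisplayName,DisplayVersion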

Note that I did not specify a namespace in the sample MOF, which means it updates the \\root\default namespace instead of the more commonly used \\root\cimv2 namespace. This is intentional: the Windows XP implementation of the registry provider is registered in the Default namespace, so this makes your MOF OS-agnostic. It will work perfectly well on XP, 2003, 2008, Vista, 7, or even the Windows 8 family. Moreover, I don't like updating the CIMv2 namespace if I can avoid it - it already has enough classes and is a bit of a dumping ground.

Deploy Your Sample

Now I need a way to get this MOF file to all my computers. The easiest way is to return to Group Policy Preferences; create a GPP policy that copies the file and creates a scheduled task to run MOFCOMP at every boot up (you can change this scheduling later or even turn it off, once you are confident all your computers have the new classes).

image

image

image

image

You can also install and compile the MOF manually, use psexec.exe, make it part of your standard OS image, deploy it using a software distribution system, or whatever. The example above is just that - an example.

Now that all your computers know about your new WMI class, you can create a group policy WMI filter that uses it. Here are a couple examples; note that I remembered to change the namespace from CIMv2 to DEFAULT!

image

image

image
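If you want to sanity-check a filter's WQL before pasting it into the GP editor, the same query runs from PowerShell; the product name here is hypothetical:

# Remember the namespace is root\default, not the usual root\cimv2
Get-WmiObject -Namespace "root\default" -Query 'SELECT * FROM SampleProductsList WHERE DisplayName LIKE "%Contoso Plotter%"'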

You're in business with a system that, while not optimal, is certainly far better than Win32_Product. It's fast and lightweight, relatively easy to manage, and like all adequate solutions, designed not to make things worse in its efforts to make things different.

An aside

Software Installation policy is not designed to be an enterprise software management solution and neither are individual application self-update systems. SI works fine in a small business network as a "no frills" solution but doesn’t offer real monitoring or remediation, and requires too much of the administrator to manage. If you are using these because of the old "we only fix IT when it's broken" answer, one argument you might take to management is that you are broken and operating at great risk: you have no way to deploy non-Microsoft updates in a timely and reliable fashion.

Even though the free Windows Update and Windows Server Update Services support Windows, Office, SQL, and Exchange patching, that's probably not enough; anyone with more than five minutes in the IT industry knows that all of your software should be receiving periodic security updates. Does anyone here still think it's safe to run Adobe, Oracle, or thousands of other vendors' products without controlled, monitored, and managed patching? If your network doesn't have a real software patching system, it's like a building with no sprinklers or emergency exits: nothing to worry about… until there's a fire. You wouldn't run computers without anti-virus protection, yet the number of customers I speak to that have zero security patching strategy is very worrying.

It's not 1998 anymore, folks. A software and patch management system isn't optional anymore if your business has more than a hundred computers; those days are done for everyone. Even for Apple, although they haven't realized it yet. We make System Center, but there are other vendors out there too, and I'd rather you bought a competing product than have no patch management at all.

Until next time,

- Ned "pragma-tism" Pyle
