
1. What is White Space in AD?

During ordinary operation, administrators routinely delete objects from Active Directory.

When an object is deleted, white space (unused space) is created in the database. On a regular basis, the database consolidates this white space through a process called online defragmentation, and the space is reused when new objects are added (without adding any size to the file itself). This automatic online defragmentation redistributes and retains white space for use by the database, but does not release it to the file system. Therefore, the database file does not shrink, even though objects are deleted. In cases where the amount of data decreases significantly, such as when the global catalog is removed from a domain controller, white space is not automatically returned to the file system. Although this condition does not affect database operation, it does result in large amounts of white space in the database.

You can use offline defragmentation to decrease the size of the database file by returning white space from the database file to the file system. Managing the Active Directory database also allows you to upgrade or replace the disk on which the database or log files are stored, or to move the files to a different location, either permanently or temporarily.

2. FSMO roles?

There are five different FSMO roles, and each plays a different part in making Active Directory work:

PDC Emulator - This role is the most heavily used of all FSMO roles and has the widest range of functions. The domain controller that holds the PDC Emulator role is crucial in a mixed environment where Windows NT 4.0 BDCs are still present, because the PDC Emulator role emulates the functions of a Windows NT 4.0 PDC. But even if you've migrated all your Windows NT 4.0 domain controllers to Windows 2000 or Windows Server 2003, the domain controller that holds the PDC Emulator role still has a lot to do.

For example, the PDC Emulator is the root time server for synchronizing the clocks of all Windows computers in your forest. It's critically important that computer clocks are synchronized across your forest, because if they're out by too much then Kerberos authentication can fail and users won't be able to log on to the network. Another function of the PDC Emulator is that it is the domain controller to which all changes to Group Policy are initially made. For example, if you create a new Group Policy Object (GPO), it is first created in the directory database and within the SYSVOL share on the PDC Emulator, and from there the GPO is replicated to all other domain controllers in the domain. Finally, all password changes and account lockout issues are handled by the PDC Emulator to ensure that password changes are replicated properly and account lockout policy is effective.

So even though the PDC Emulator emulates an NT PDC (which is why this role is called PDC Emulator), it also does a whole lot of other stuff. In fact, the PDC Emulator role is the most heavily utilized FSMO role, so you should make sure that the domain controller that holds this role has sufficiently beefy hardware to handle the load. Similarly, if the PDC Emulator fails then it can potentially cause the most problems, so the hardware it runs on should be fault tolerant and reliable.
Finally, every domain has its own PDC Emulator role, so if you have N domains in your forest then you will have N domain controllers with the PDC Emulator role as well.

RID Master - This is another domain-specific FSMO role; that is, every domain in your forest has exactly one domain controller holding the RID Master role. The purpose of this role is to replenish the pool of unused relative IDs (RIDs) for the domain and prevent this pool from becoming exhausted. RIDs are used up whenever you create a new security principal (user or computer account), because the SID for the new security principal is constructed by combining the domain SID with a unique RID taken from the pool. So if you run out of RIDs, you won't be able to create any new user or computer accounts, and to prevent

this from happening, the RID Master monitors the RID pool and generates new RIDs to replenish it when it falls beneath a certain level.

Infrastructure Master - This is another domain-specific role, and its purpose is to ensure that cross-domain object references are correctly handled. For example, if you add a user from one domain to a security group from a different domain, the Infrastructure Master makes sure this is done properly. As you can guess, however, if your Active Directory deployment has only a single domain, then the Infrastructure Master role does no work at all, and even in a multi-domain environment it is rarely used except when complex user administration tasks are performed, so the machine holding this role doesn't need much horsepower at all.

Schema Master - While the first three FSMO roles described above are domain-specific, the Schema Master role and the one following are forest-specific and are found only in the forest root domain (the first domain you create when you create a new forest). This means there is one and only one Schema Master in a forest. The purpose of this role is to control all updates to the schema, which are then replicated to all other domain controllers in the forest. Since the schema of Active Directory is rarely changed, however, the Schema Master role will rarely do any work. Typical scenarios where this role is used would be when you deploy Exchange Server onto your network, or when you upgrade domain controllers from Windows 2000 to Windows Server 2003, as these situations both involve making changes to the Active Directory schema.

Domain Naming Master - The other forest-specific FSMO role is the Domain Naming Master, and this role also resides in the forest root domain.
The Domain Naming Master role processes all changes to the domain namespace; for example, adding the child domain vancouver.mycompany.com to the forest root domain mycompany.com requires that this role be available. So if you can't add a new child domain or a new domain tree, check to make sure this role is running properly.

To summarize then, the Schema Master and Domain Naming Master roles are found only in the forest root domain, while the remaining roles are found in each domain of your forest. Now let's look at best practices for assigning these roles to different domain controllers in your forest or domain.
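As a side note, the SID construction described above for the RID Master (a domain SID combined with a unique RID from the pool) can be sketched in a few lines of Python. The domain SID value, the pool size, and the class names here are illustrative assumptions, not Windows internals:

```python
# Toy model of SID construction: a new security principal's SID is the
# domain SID plus a unique RID drawn from the DC's local RID pool,
# which the RID Master replenishes. Names and numbers are made up.

DOMAIN_SID = "S-1-5-21-3623811015-3361044348-30300820"  # example domain SID

class RidPool:
    """Illustrative per-DC RID pool."""
    def __init__(self, start, size):
        self.next_rid = start
        self.end = start + size

    def allocate(self):
        if self.next_rid >= self.end:
            raise RuntimeError("RID pool exhausted - request more from the RID Master")
        rid = self.next_rid
        self.next_rid += 1
        return rid

def new_principal_sid(pool):
    # The new user/computer SID = domain SID + unique RID.
    return f"{DOMAIN_SID}-{pool.allocate()}"

pool = RidPool(start=1100, size=500)
sid = new_principal_sid(pool)
print(sid)  # the RID 1100 is appended to the domain SID
```

If the pool runs dry and the RID Master is unavailable, allocation fails, which mirrors why you cannot create new accounts once the domain's RIDs are exhausted.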

What if a FSMO server fails?

Schema Master - No updates to the Active Directory schema will be possible. Since schema updates are rare (usually made by certain applications, or possibly by an administrator adding an attribute to an object), the malfunction of the server holding the Schema Master role will not pose a critical problem.

Domain Naming Master - The Domain Naming Master must be available when adding or removing a domain from the forest (i.e. running DCPROMO). If it is not, the domain cannot be added or removed. It is also needed when promoting or demoting a server to/from a domain controller. Like the Schema Master, this functionality is only used on occasion and is not critical unless you are modifying your domain or forest structure.

PDC Emulator - The server holding the PDC Emulator role will cause the most problems if it is unavailable. This would be most noticeable in a mixed-mode domain where you are still running NT 4 BDCs and are using downlevel clients (NT and Win9x). Since the PDC Emulator acts as an NT 4 PDC, any actions that depend on the PDC would be affected (User Manager for Domains, Server Manager, changing passwords, browsing, and BDC replication). In a native-mode domain the failure of the PDC Emulator isn't as critical, because other domain controllers can assume most of its responsibilities.

RID Master - The RID Master provides RIDs for security principals (users, groups, computer accounts). The failure of this FSMO server would have little impact unless you are adding a very large number of users or groups. Each DC in the domain already has a pool of RIDs, and a problem would occur only if the DC on which you are adding the users/groups ran out of RIDs.

Infrastructure Master - This FSMO server is only relevant in a multi-domain environment. If you have only one domain, the Infrastructure Master is irrelevant. Failure of this server in a multi-domain environment would be a problem if you are trying to add objects from one domain to another.

Placing FSMO Server Roles

So where are these FSMO server roles found? Is there a one to one relationship between the server roles and the number of servers that house them?

The first domain controller that is installed in a Windows 2000 domain holds, by default, all five of the FSMO server roles. Then, as more domain controllers are added to the domain, the FSMO roles can be moved to other domain controllers. Moving a FSMO server role is a manual process; it does not happen automatically. But what if you have only one domain controller in your domain? That is fine: you then have one forest, one domain, and of course one domain controller, and all five FSMO server roles will exist on that DC. There is no rule that says you have to have one server for each FSMO server role.

However, it is always a good idea to have more than one domain controller in a domain for a number of reasons. Assuming you do have multiple domain controllers in your domain, there are some best practices to follow for placing FSMO server roles.

The Schema Master and Domain Naming Master should reside on the same server, and that machine should be a Global Catalog server. Since all three are, by default, on the first domain controller installed in a forest, you can leave them as they are.

Note: According to MS, the Domain Naming master needs to be on a Global Catalog Server. If you are going to separate the Domain Naming master and Schema master, just make sure they are both on Global Catalog servers.

The Infrastructure Master should not be on the same server that acts as a Global Catalog server. The reason is that the Global Catalog contains information about every object in the forest. When the Infrastructure Master, which is responsible for updating Active Directory information about cross-domain object changes, needs information about objects not in its domain, it contacts the Global Catalog server for this information. If they both reside on the same server, the Infrastructure Master will never think there are changes to objects that reside in other domains, because the Global Catalog will keep it constantly updated. This would result in the Infrastructure Master never replicating changes to other domain controllers in its domain. Note: In a single-domain environment this is not an issue.

Microsoft also recommends that the PDC Emulator and RID Master be on the same server. This is not mandatory, as it is with the Infrastructure Master and the Global Catalog server above, but it is recommended. Also, since the PDC Emulator will receive more traffic than any other FSMO role holder, it should be on a server that can handle the load.

It is also recommended that all FSMO role holders be direct replication partners and that they have high-bandwidth connections to one another as well as to a Global Catalog server.

3. BOOT.INI switches - how many?

The following are the different switches that can be added to the Boot.ini file:

/3GB Enables user-mode programs to access 3 GB of memory instead of the usual 2 GB that Windows NT normally allocates to user-mode programs. It moves the starting point of kernel memory to 3 GB. This switch is used only in the Windows NT Server Enterprise Edition of Windows NT with Service Pack 3.

For additional information, see the following Microsoft Knowledge Base article: 171793 Information on Application Use of 4GT RAM Tuning

/BASEVIDEO

The /basevideo switch forces the system into standard 640x480 16-color VGA mode. It is used to enable the system to load if the wrong video resolution or refresh rate has been selected.

For more information, please see the following Microsoft Knowledge Base article: 126690 Windows NT 4.0 Setup Troubleshooting Guide

/BAUDRATE=nnnn Sets the baud rate of the debug port. If you do not set the baud rate, the default is 19,200; 9,600 is the normal rate for remote debugging over a modem. This switch also enables the /debug switch. For example, /BAUDRATE=9600

For more information on modem configuration, please see the following Microsoft Knowledge Base article: 148954 How to Set Up a Remote Debug Session Using a Modem
For more information on null-modem configuration, please see the following Microsoft Knowledge Base article: 151981 How to Set Up a Remote Debug Session Using a Null Modem Cable

/CRASHDEBUG Enables the COM port for debugging in the event that Windows NT crashes. This enables you to use the COM port for normal operations while Windows NT is running, but converts the port to a debug port if Windows NT crashes (to enable remote debugging).

For more information, please see the following Microsoft Knowledge Base article: 151981 How to Set Up a Remote Debug Session Using a Null Modem Cable

/DEBUG Enables the kernel debugger. This enables live remote debugging of a Windows NT system through the COM ports. Unlike /crashdebug, /debug uses the COM port whether or not you are debugging.

For more information on remote debugging, please see the following Microsoft Knowledge Base article: 121543 Setting Up for Remote Debugging

/DEBUGPORT=comx Selects a COM port for the debug port (com1, com2, com3, and so on). DEBUGPORT defaults to COM2 if it exists; otherwise it uses COM1. This switch also enables the /debug switch. For example, /DEBUGPORT=COMx, where x is the COM port number.

For more information, please see the following Microsoft Knowledge Base article: 151981 How to Set Up a Remote Debug Session Using a Null Modem Cable

/HAL=filename Enables you to define the actual hardware abstraction layer (HAL) to be loaded at startup. This switch is useful for trying out a different HAL before renaming it to hal.dll, and for switching between multiprocessor and single-processor mode when used in conjunction with the /kernel switch. For example, /HAL=halmps.dll loads Halmps.dll from the System32 directory.

/KERNEL=filename Enables you to define the actual kernel to be loaded at startup. This is useful for switching between a debug-enabled kernel full of debugging code and a regular kernel, and for forcing Windows NT to load a specific kernel. For example, /KERNEL=ntkrnlmp.exe loads Ntkrnlmp.exe from the System32 directory.

/MAXMEM=nn Selects the amount of memory Windows NT detects and can use at startup. This setting should never be set to less than 12. This option is good for checking for bad memory chips. For example, /MAXMEM=12

For more information, please see the following Microsoft Knowledge Base article: 108393 MAXMEM Option in Windows NT Boot.ini File

/NODEBUG Disables the kernel debugger. This can cause a blue screen if a piece of code has a hardcoded debug breakpoint in its software.

/NOSERIALMICE:comx Disables the mouse-port check for the given COM port. For example, /noserialmice:comx, where x is the number of the serial port. Ports may be separated with commas to disable more than one port. If no serial port is given, all ports are disabled for mouse devices.

This is used with an uninterruptible power supply (UPS), such as those from American Power Conversion (APC), that connects to a serial port. If this switch is not used when Windows NT starts and Windows NT tries to detect a mouse on that port, the UPS may accidentally start its shutdown mode.

For more information, please see the following Microsoft Knowledge Base article: 131976 How to Disable Detection of Devices on Serial Ports

/NUMPROC= Sets the number of processors that Windows NT will use at startup. This can help in testing for performance problems and defective CPUs. For example, /NUMPROC=3

/PCILOCK Prevents the HAL from moving anything on the PCI bus. The I/O and memory resources are left exactly as they were set by the BIOS.

For additional information, see the following Microsoft Knowledge Base article: 148501 Preventing PCI Resource Conflicts on Intel-Based Computers

/SOS

The /sos switch causes the loader to print the names of loaded modules. When Windows NT starts, instead of displaying dots while the devices load, it shows the actual names of the drivers as they load.

For more information, please see the following Microsoft Knowledge Base article: 99743 Purpose of the BOOT.INI File in Windows 2000 or Windows NT

/ONECPU This switch is part of Compaq's HAL. It tells Windows NT to use only one CPU at startup, enabling you to run a single CPU in a multiprocessor configuration. For example, /ONECPU.

For more information on other Boot.ini switches that do not relate to Windows NT, please see the following Microsoft Knowledge Base article: 157992 How to Triple Boot to Windows NT, Windows 95/98, and MS-DOS

/WIN95 The /win95 switch loads bootsec.dos.

/WIN95DOS The /win95dos switch loads bootsec.w40.

4. Bridgehead Server overview and its topology?

When domain controllers for the same domain are located in different sites, at least one bridgehead server per directory partition and per transport (IP or SMTP) replicates changes from one site to a bridgehead server in another site. A single bridgehead server can serve multiple partitions per transport and multiple transports. Replication within the site allows updates to flow between the bridgehead servers and the other domain controllers in the site. Bridgehead servers help to ensure that the data replicated across WAN links is not stale or redundant.

The KCC's selection of bridgehead servers guarantees that the chosen bridgehead servers are capable of replicating all directory partitions that are needed in the site, including partial global catalog partitions. By default, bridgehead servers are selected automatically by the KCC on the domain controller that holds the ISTG role in each site. If you want to identify the domain controllers that can

act as bridgehead servers, you can designate preferred bridgehead servers, from which the ISTG selects all bridgehead servers. Alternatively, if the ISTG is not used to generate the intersite topology, you can create manual intersite connection objects on domain controllers to designate bridgehead servers. In sites that have at least one domain controller running Windows Server 2003, the ISTG can select bridgehead servers from all eligible domain controllers for each directory partition that is represented in the site.

5. Active Directory Database overview

The Active Directory database is stored in the %SystemRoot%\NTDS folder. The main database file is called ntds.dit. Along with this file, other files are also present in this folder; they are created when you run dcpromo. The files and their uses are listed below:

1. ntds.dit : This is the main database file for Active Directory.
2. edb.log : When a transaction is performed against the AD database, such as writing some data, the data is first written to this file and only afterwards sent to the database. System performance therefore depends on how quickly data from the edb.log file is written to ntds.dit. Multiple .log files are created as required.
3. res1.log : Used as reserved space for the case when the drive runs low on space. It is 10 MB in size and is created when you run dcpromo.
4. res2.log : Same as res1.log; it is also 10 MB in size and serves the same purpose.
5. edb.chk : This file records which transactions have been committed to the AD database. During shutdown, a shutdown statement is written to this file. If it is not found when the system reboots, the AD database checks edb.log for the updated information.
6. Drop folder : Contains details related to SMTP.

6. RAID Technology? RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams

by Ramesh Natarajan on August 10, 2010

RAID stands for Redundant Array of Inexpensive (Independent) Disks.

In most situations you will be using one of the following four RAID levels:

RAID 0
RAID 1
RAID 5
RAID 10 (also known as RAID 1+0)

This article explains the main differences between these RAID levels, along with easy-to-understand diagrams.

In all the diagrams mentioned below:

A, B, C, D, E and F represent blocks; p1, p2, and p3 represent parity.

RAID LEVEL 0

Following are the key points to remember for RAID level 0.

Minimum 2 disks.

Excellent performance (as blocks are striped). No redundancy (no mirror, no parity). Don't use this for any critical system.

RAID LEVEL 1

Following are the key points to remember for RAID level 1.

Minimum 2 disks. Good performance (no striping, no parity). Excellent redundancy (as blocks are mirrored).

RAID LEVEL 5

Following are the key points to remember for RAID level 5.

Minimum 3 disks. Good performance (as blocks are striped). Good redundancy (distributed parity). Best cost-effective option providing both performance and redundancy; use this for databases that are heavily read-oriented. Write operations will be slow.

RAID LEVEL 10

Following are the key points to remember for RAID level 10.

Minimum 4 disks. This is also called a "stripe of mirrors". Excellent redundancy (as blocks are mirrored). Excellent performance (as blocks are striped). If you can afford the disks, this is the best option for any mission-critical application.
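The distributed-parity idea behind RAID 5 described above can be sketched in a few lines of Python: parity is the XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity. The block contents here are made up, and real controllers work on much larger stripes, but the XOR arithmetic is the same.

```python
# Minimal RAID 5 parity sketch: parity = A xor B xor C (byte by byte),
# so losing any one block leaves enough information to rebuild it.
from functools import reduce

def parity(blocks):
    """XOR the corresponding bytes of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks on three disks (contents are arbitrary examples).
A = b"\x01\x02\x03\x04"
B = b"\x10\x20\x30\x40"
C = b"\x0f\x0e\x0d\x0c"
p = parity([A, B, C])  # the parity block stored on a fourth disk

# Simulate losing the disk that held B: rebuild it from A, C and parity.
rebuilt = parity([A, C, p])
print(rebuilt == B)  # True
```

This also shows why RAID 5 writes are slow: every write must update both the data block and the recomputed parity block.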

7. Cluster server cannot start if the quorum disk space is full?

You may be unable to start Windows Clustering. Also, any combination of the following event messages may be listed in the System event log:

* Event 1021: There is insufficient disk space remaining on the Quorum device. Please free up some space on the Quorum device. If there is no space on the disk for the Quorum Log files, then changes to the cluster registry will be prevented.

To resolve this issue, follow these steps:

1. Turn off one of the nodes. For example, turn off Node1.
2. On the remaining node (Node2), click Start, point to Programs, point to Administrative Tools, and then click Computer Management.
3. Click Device Manager, and then click Show Hidden Devices on the View menu.
4. In the right pane, expand the non-Plug and Play drivers, and then double-click the Clusdisk driver.
5. On the Driver tab, change the Startup type option from System to Disabled.
6. In the right pane, double-click the Cluster service, and then click Disabled in the Startup type box.
7. On Node2, click Services in Control Panel, and set the ClusSvc service to Disabled.

8. Restart Node2. Because the Cluster Disk device driver does not start, the disk drives on the shared SCSI bus appear as regular disk drives to the computer.
9. Delete the files on the disk that contains the Quorum log to free space on the disk drive. Do not access any files that might be required by the Cluster service or by shared applications unless it is absolutely necessary.
10. Set the startup value of the ClusDisk device back to "System" and the ClusSvc service back to "Automatic".
11. Restart the computer. The Cluster service should start, and the shared SCSI disk drives and all other resources should be "owned" by the node that is turned on (Node2).
12. Restart the other node (Node1). If necessary, you can now move the Quorum log to a disk that has more available free space.

8. Recovering from a lost or corrupted quorum log?

The Cluster service may not start if a hardware failure occurs or power is lost to both nodes of a cluster and to the storage device, known as the quorum, on the shared device bus. In such cases, an error may occur when you attempt to start the Cluster service on the forming node of the server cluster.

If you have a backup of the system state on one of the computers made after the last changes to the cluster, you can restore the quorum by restoring this information. For more information about backing up and restoring cluster configuration information, see the following Microsoft Knowledge Base article: 248998 (http://support.microsoft.com/kb/248998/ ) How to properly restore cluster information

If you do not have a backup of the quorum log file, re-create a new quorum log file based on the cluster configuration information in the local system's cluster hive by starting the Cluster service with the -ResetQuorumLog switch. To do this, follow these steps:

1. Start the Services snap-in. (Click Start, point to Programs, click Administrative Tools, and then click Services.)
2. Right-click the Cluster service and open its properties.
3. In the Start Parameters box, type: -resetquorumlog

Then click the Start button.

9. How does the Cluster service reserve a disk and bring a disk online?

The following procedure describes how a server cluster starts and gains control of the shared disks. This scenario assumes that only one node is being turned on at a time:

When the computer is started, the Cluster Disk Driver (Clusdisk.sys) reads the following local registry key to obtain a list of the signatures of the shared disks under cluster management: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters\Signatures. After the list is obtained, the Cluster service attempts to scan all of the devices on the shared SCSI bus to find matching disk signatures.

When the first node in the cluster starts, the cluster disk driver first marks all LUNs (LUN: logical unit number, a unique identifier used on a SCSI bus to distinguish between devices that share the same bus) matching the Signatures key as offline volumes. Note that this is not the same as taking a cluster resource offline; the volume is marked offline to prevent multiple nodes from having write access to the volumes simultaneously. If the cluster is a shared-disk cluster, one of the disks is designated as the quorum disk by the Cluster service. The quorum disk is the first resource brought online when the Cluster service attempts to form a cluster.

When the Cluster service on the forming node starts, it first tries to bring online the physical device designated as the quorum disk. It executes the disk arbitration algorithm on the quorum disk to gain ownership. On successful arbitration, the Cluster service sends a request to Clusdisk to start sending periodic reserves to the disk (to maintain ownership). Then the Cluster service sends a request to Clusdisk to unblock access to the quorum disk and mounts the volumes on the disk. Successful mounting of the volume(s) completes the online procedure, and the Cluster service then continues with the cluster-form process. The request is passed from the cluster disk driver to the Microsoft storage driver stack and finally to the driver specific to the HBA that communicates with the disks. It may also be passed to any multipath software running in the storage stack. After the storage controller/device driver reports that the device has been successfully reserved, the Cluster service ensures that the drive can be read from and written to. Once the disk has passed all of these tests, the disk resource is marked as online, and the Cluster service then continues to bring all other resources online.

Each node in the cluster renews the reservations for any LUNs it owns every three seconds. If the nodes of a cluster lose network communication with each other (for example, if there is no communication over the private or public network), the nodes begin a process known as arbitration to determine ownership of the quorum disk. The node that wins ownership of the quorum disk in a total communication loss between cluster nodes will remain functional. Any node that cannot communicate and cannot maintain or acquire ownership of the quorum disk will terminate the Cluster service, and any resources that node was hosting will be moved to another node in the cluster.

1. The node that currently owns the quorum disk is the defending node. The defender assumes that it is defending against any cluster nodes that it cannot communicate with and for which it did not receive a shutdown notification. The defender continually renews its reservation of the quorum by requesting that a SCSI reserve be placed on the LUN every three seconds.
2. All other nodes (nodes that do not own the quorum disk and cannot communicate with the node that owns the quorum resource) become challenging nodes.
3. When a challenger detects the loss of all communications, it immediately requests a bus-wide SCSI reset to break any existing reservations.
4. Seven seconds after the SCSI reset is requested, the challenger tries to reserve the quorum disk. If the defender node is online and functioning, it will have already reserved the quorum disk, as it typically does every three seconds. The challenger detects that it cannot reserve the quorum and terminates its Cluster service. If the defender is not functioning properly, the challenger can successfully reserve the quorum disk; after ten seconds, the challenger brings the quorum online and takes ownership of all resources in the cluster. If the defending node loses ownership of the quorum device, the Cluster service on the defending node terminates immediately.
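The defender/challenger timing above can be sketched as a toy simulation. The 3-second renewal interval and the 7-second challenger wait come from the text; the function and names are our own illustrative model, not cluster internals.

```python
# Toy model of quorum arbitration after a bus-wide SCSI reset at t=0.
# A healthy defender re-reserves the LUN every 3 seconds (so at t=3 and
# t=6), which is why the challenger's attempt at t=7 normally fails.

RENEW_INTERVAL = 3   # defender renews its SCSI reserve every 3 seconds
CHALLENGE_WAIT = 7   # challenger waits 7 seconds after the bus-wide reset

def arbitrate(defender_alive):
    """Return who owns the quorum when the challenger tries at t=7."""
    renewals = (
        [t for t in range(1, CHALLENGE_WAIT) if t % RENEW_INTERVAL == 0]
        if defender_alive else []
    )
    # If the defender renewed at least once before t=7, the LUN is
    # reserved again and the challenge fails.
    return "defender" if renewals else "challenger"

print(arbitrate(True))   # defender
print(arbitrate(False))  # challenger
```

The key property the simulation shows is that the renewal interval is deliberately shorter than the challenge wait, so a live defender always wins.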

When a cluster node takes a disk resource offline, it requests that the SCSI reserve be released and then the drive will once again be unavailable to the operating system. Anytime a disk resource is offline in a cluster, the volume that the resource points to (the disk with the matching signature) will be inaccessible to the operating system on any of the cluster nodes.

10. Universal Groups, Global Groups and Domain Local Groups?

* Universal group: can contain users and groups (global and universal) from any domain in the forest. Universal groups do not care about trust. Universal groups can be a member of domain local groups or other universal groups, but NOT global groups.
* Global group: can contain users, computers and groups from the same domain, but NOT universal groups. Can be a member of global groups of the same domain, or of domain local groups or universal groups of any domain in the forest or trusted domains.
* Domain local group: can contain users, computers, global groups and universal groups from any domain in the forest and any trusted domain, as well as domain local groups from the same domain. Can be a member of any domain local group in the same domain.

The short answer is that domain local groups are the only groups that can have members from outside the forest. And use global groups if you have trust, universal groups if you don't care about trust.
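As a reading aid, the nesting rules above can be captured in a small Python lookup table. This assumes a single forest in native mode, paraphrases the text, and is not a Windows API:

```python
# Group-scope nesting rules from the summary above, as plain data.
# can_contain[scope] -> kinds of members that scope may hold.
can_contain = {
    "universal": {"users", "global groups", "universal groups"},
    "global": {"users", "computers", "global groups (same domain)"},
    "domain local": {"users", "computers", "global groups",
                     "universal groups", "domain local groups (same domain)"},
}

# can_be_member_of[scope] -> scopes of groups it may join.
can_be_member_of = {
    "universal": {"domain local", "universal"},      # never global groups
    "global": {"global (same domain)", "domain local", "universal"},
    "domain local": {"domain local (same domain)"},
}

# Per the text: a universal group can never sit inside a global group,
# and only domain local groups accept every other scope as a member.
assert "global" not in can_be_member_of["universal"]
print(sorted(can_contain["domain local"]))
```

A table like this makes the asymmetry easy to see at a glance: membership flows "downhill" from global, through universal, into domain local, never the other way.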

Disclaimer: this might still be wrong. But nobody has disputed it yet; thankfully, with good comments, I can always edit the blog post if I have something wrong.

Complete memory dump

A complete memory dump records all the contents of system memory when your computer stops unexpectedly. A complete memory dump may contain data from processes that were running when the memory dump was collected. If you select the Complete memory dump option, you must have a paging file on the boot volume that is sufficient to hold all the physical RAM plus 1 megabyte (MB). If a second problem occurs and another complete memory dump (or kernel memory dump) file is created, the previous file is overwritten.

Kernel memory dump

A kernel memory dump records only the kernel memory. This speeds up the process of recording information in a log when your computer stops unexpectedly. You must have a pagefile large enough to accommodate your kernel memory. For 32-bit systems, kernel memory is usually between 150 MB and 2 GB. Additionally, on Windows 2003 and Windows XP, the page file must be on the boot volume; otherwise, a memory dump cannot be created. This dump file does not include unallocated memory or any memory that is allocated to user-mode programs. It includes only memory that is allocated to the kernel and hardware abstraction layer (HAL) in Windows 2000 and later, and memory allocated to kernel-mode drivers and other kernel-mode programs. For most purposes, this dump file is the most useful. It is significantly smaller than the complete memory dump file, but it omits only those parts of memory that are unlikely to have been involved in the problem. If a second problem occurs and another kernel memory dump file (or a complete memory dump file) is created, the previous file is overwritten when the 'Overwrite any existing file' setting is checked.

Small memory dump

A small memory dump records the smallest set of useful information that may help identify why your computer stopped unexpectedly. This option requires a paging file of at least 2 MB on the boot volume and specifies that Windows 2000 and later create a new file every time your computer stops unexpectedly.
A history of these files is stored in a folder. This dump file type includes the following information:

* The Stop message and its parameters and other data
* A list of loaded drivers
* The processor context (PRCB) for the processor that stopped
* The process information and kernel context (EPROCESS) for the process that stopped
* The thread information and kernel context (ETHREAD) for the thread that stopped
* The kernel-mode call stack for the thread that stopped

This kind of dump file can be useful when space is limited. However, because of the limited information included, errors that were not directly caused by the thread that was running at the time of the problem may not be discovered by an analysis of this file. If a second problem occurs and a second small memory dump file is created, the previous file is preserved; each additional file is given a distinct name, with the date encoded in the file name. For example, Mini022900-01.dmp is the first memory dump generated on February 29, 2000. A list of all small memory dump files is kept in the %SystemRoot%\Minidump folder.

Tools to read the small memory dump file

You can load small memory dump files by using the Dump Check Utility (Dumpchk.exe). You can also use Dumpchk.exe to verify that a memory dump file has been created correctly. The Dump Check Utility does not require access to debugging symbols, and it is included with the Microsoft Windows 2000 Support Tools and the Microsoft Windows XP Support Tools. For additional information about how to use the Dump Check Utility in Windows 2000, Windows NT, or Windows XP, see the Microsoft Knowledge Base article "How to use Dumpchk.exe to check a memory dump file".

Note: The Dump Check Utility is not included in the Microsoft Windows Server 2003 Support Tools. To obtain it on Microsoft Windows Server 2003, download and install the Debugging Tools for Windows package from the Microsoft Web site. You can also read small memory dump files by using the WinDbg tool or the KD.exe tool; both are included with the latest version of the Debugging Tools for Windows package. The same Web page also provides access to the downloadable symbol packages for Windows.
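As a rough illustration, the paging-file requirements described above for the three dump types, and the date encoded in a small-dump file name such as Mini022900-01.dmp (MMDDYY-NN), can both be sketched in a short Python helper. This is a hypothetical sketch for clarity only; the function names and the two-digit-year pivot are assumptions, not part of Windows itself.

```python
import re
from datetime import date

def min_pagefile_mb(dump_type: str, physical_ram_mb: int, kernel_mem_mb: int = 0) -> int:
    """Minimum boot-volume paging file (MB) per the rules described above."""
    if dump_type == "complete":
        return physical_ram_mb + 1   # all physical RAM plus 1 MB
    if dump_type == "kernel":
        return kernel_mem_mb         # must hold kernel memory (often 150 MB - 2 GB on 32-bit)
    if dump_type == "small":
        return 2                     # at least 2 MB on the boot volume
    raise ValueError(f"unknown dump type: {dump_type}")

def parse_minidump_name(filename: str):
    """Decode the date and sequence number from a MiniMMDDYY-NN.dmp file name."""
    m = re.fullmatch(r"Mini(\d{2})(\d{2})(\d{2})-(\d{2})\.dmp", filename)
    if not m:
        raise ValueError(f"not a small-memory-dump file name: {filename}")
    month, day, yy, seq = (int(g) for g in m.groups())
    year = 2000 + yy if yy < 70 else 1900 + yy  # assumed pivot for the two-digit year
    return date(year, month, day), seq

print(min_pagefile_mb("complete", 4096))            # 4097
print(parse_minidump_name("Mini022900-01.dmp"))     # (datetime.date(2000, 2, 29), 1)
```

So the example file from the text, Mini022900-01.dmp, decodes to February 29, 2000, sequence number 1.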
To use these resources, create a folder on the disk drive where the downloaded local symbols, or the symbol cache for symbol-server use, will reside; for example, use C:\Symbols.

Note: To use the Microsoft Symbol Server, make sure you have installed the latest version of Debugging Tools for Windows. When you start a debugging session, set the debugger symbol path as follows, substituting your own folder for c:\symbols:

SRV*c:\symbols*http://msdl.microsoft.com/download/symbols

You can use this symbol path with all the commands that are described in this article. For additional information about the dump file options in Windows, see "Overview of memory dump file options for Windows 2000, for Windows XP, and for Windows Server 2003".

Install the debugging tools

To download and install the Windows debugging tools, visit "Debugging Tools and Symbols: Getting Started" and select the Typical installation. By default, the installer installs the debugging tools in the following folder:

C:\Program Files\Debugging Tools for Windows

Open the dump file

To open the dump file after the installation is complete, follow these steps:

1. Click Start, click Run, type cmd, and then click OK.
2. Change to the Debugging Tools for Windows folder. To do this, type the following at the command prompt, and then press ENTER:

cd c:\program files\debugging tools for windows

3. To load the dump file into a debugger, type one of the following commands, and then press ENTER:

windbg -y SymbolPath -i ImagePath -z DumpFilePath
kd -y SymbolPath -i ImagePath -z DumpFilePath
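The debugger command lines above can also be assembled programmatically, for example when scripting dump analysis across many machines. The following is a hypothetical Python sketch under the assumption that the tool binaries are on the PATH; the paths used are illustrative and the command is only built, not executed:

```python
def debugger_command(symbol_path: str, image_path: str, dump_path: str, tool: str = "windbg"):
    """Build a WinDbg/KD command line of the form: tool -y SymbolPath -i ImagePath -z DumpFilePath."""
    return [tool, "-y", symbol_path, "-i", image_path, "-z", dump_path]

cmd = debugger_command(
    r"SRV*c:\symbols*http://msdl.microsoft.com/download/symbols",  # symbol path from the note above
    r"c:\windows\i386",                                            # illustrative image path
    r"c:\windows\minidump\Mini022900-01.dmp",                      # illustrative dump file
)
print(" ".join(cmd))
```

The same list works for KD by passing `tool="kd"`; a list of arguments (rather than one joined string) is the form `subprocess.run` would expect if you chose to launch the debugger from the script.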