Memory dump analysis when BSoD occurs
================================================================================

When a BSoD occurs on an endpoint, you can narrow down the cause by following the procedure below.

If an Insights-related driver file name is shown on the BSoD screen
--------------------------------------------------------------------------------------------------------------------------

#. Collect %windir%\memory.dmp and the full agent log, and request an analysis. (A sketch for confirming that a full memory dump is configured appears at the end of this section.)
#. Conversely, if another product's driver name is displayed on the BSoD screen, request a cause analysis from that product's developer.

If the driver file name cannot be confirmed on the BSoD screen
-------------------------------------------------------------------------------------------------------------------

Install the analysis tools below to determine which driver file is causing the problem.

1. **windbg installation:** windbg can be installed as part of the `Windows SDK <https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools>`_.
   (When installing the Windows SDK, select only "Debugging Tools for Windows" and uncheck the rest of the components.)

2. **Open the dump file:** After installing windbg on a PC with internet access, open %windir%\memory.dmp in windbg.
   (After granting read permission on memory.dmp, drag it into the windbg window.)

3. **Symbol path settings:** When the dump is opened, execute the following commands. ::

    .symfix+
    .reload

4. **Run the automatic analysis:** Carefully read the analysis report produced by running !analyze -v. If the result contains a MODULE_NAME entry like the one below, that driver is most likely the cause of the problem. If the problem driver is identified, request a cause analysis from the driver's developer. ::

    6: kd> !analyze -v
    *******************************************************************************
    *                                                                             *
    *                        Bugcheck Analysis                                    *
    *                                                                             *
    *******************************************************************************

    DPC_WATCHDOG_VIOLATION (133)
    The DPC watchdog detected a prolonged run time at an IRQL of DISPATCH_LEVEL
    or above.
    Arguments:
    Arg1: 0000000000000001, The system cumulatively spent an extended period of time at
        DISPATCH_LEVEL or above. The offending component can usually be
        identified with a stack trace.
    ...
    MODULE_NAME: check64    <<< This part!!
    IMAGE_NAME:  check64.sys
    ...

5. **Check the driver information:** If you have identified a suspicious driver name but cannot determine exactly what it is, find the driver file on the PC and verify its information. For example, if the resolved driver name is f_ih.sys, you can check the driver location (Image path) with the lmvm command. You can then infer the developer by locating the driver file and checking its file information. ::

    6: kd> lmvm f_ih
    Browse full module list
    start             end                 module name
    fffff805`37030000 fffff805`3703a000   f_ih       (deferred)
        Image path: \??\C:\windows\SYSTEM32\DRIVERS\f_ih.sys
        Image name: f_ih.sys
        Browse all global symbols  functions  data
        Timestamp:        Tue Oct 18 09:43:46 2016 (58057042)
        CheckSum:         0001256B
        ImageSize:        0000A000
        Translations:     0000.04b0 0000.04e4 0409.04b0 0409.04e4
        Information from resource tables:

6. **Read the call stack:** If the !analyze -v result does not identify the suspect driver, look carefully at the call stack (STACK_TEXT) section.
::

    STACK_TEXT:
    ffff9881`99fe5b08 : nt!KeBugCheckEx
    ffff9881`99fe5b10 : nt!KeAccumulateTicks+0x181641
    ffff9881`99fe5b70 : nt!KeClockInterruptNotify+0x98c
    ffff9881`99fe5f30 : hal!HalpTimerClockInterrupt+0xf7
    ffff9881`99fe5f60 : nt!KiCallInterruptServiceRoutine+0xa5
    ffff9881`99fe5fb0 : nt!KiInterruptSubDispatchNoLockNoEtw+0xfa
    ffffbd05`dfd57660 : nt!KiInterruptDispatchNoLockNoEtw+0x37
    ffffbd05`dfd577f0 : nt!KxWaitForSpinLockAndAcquire+0x30
    ffffbd05`dfd57820 : nt!KeAcquireSpinLockRaiseToDpc+0x87
    ffffbd05`dfd57850 : check64!test::Lock+0x30 [c:\test.cpp @ 205]
    ffffbd05`dfd57880 : check64!test::EnumElement+0x65 [c:\test.cpp @ 277]
    ffffbd05`dfd578d0 : check64!testanalyze::fileinfo+0x10f [c:\testanalyze.cpp @ 1786]
    ffffbd05`dfd57950 : check64!testcheckInfo+0x100 [c:\testcheck.cpp @ 7214]
    ffffbd05`dfd579f0 : check64!testcheckCallback+0x1ca [c:\testcheck.cpp @ 3189]
    ffffbd05`dfd57a50 : check64!stest::memoryQueue+0x9e [c:\stest.cpp @ 118]
    ffffbd05`dfd57a90 : check64!stest::checkFunc+0x9d [c:\stest.cpp @ 145]
    ffffbd05`dfd57b10 : nt!PspSystemThreadStartup+0x55
    ffffbd05`dfd57b60 : nt!KiStartSystemThread+0x28

Each entry in the call stack has the following format: ::

    [address / arguments, in hexadecimal] : [module name] ! [symbol in the module] + [offset]

For example, the entry below refers to the address KiStartSystemThread+0x28 inside the nt kernel module. ::

    ffffbd05`dfd57b60 : nt!KiStartSystemThread+0x28

In a call stack, the function called first is at the bottom and the function called most recently is at the top. Read the call stack from top to bottom, starting with the most recently called module. The first module that is not a Windows component is most likely the one causing the problem (a small script that automates this check is sketched at the end of this section).

For example, in the call stack above, modules appear in the following order: ::

    nt >> hal >> nt >> check64 >> nt

Among these, nt and hal are Windows components, so check64, the first non-Windows module to appear, is most likely the module that caused the problem.

Windows module names that appear frequently in call stacks are as follows.

.. list-table:: Windows module names
   :widths: 40 60
   :align: left
   :header-rows: 1

   * - Module name
     - Role
   * - nt
     - Windows kernel
   * - hal
     - Hardware Abstraction Layer (HAL)
   * - io
     - I/O manager
   * - netio
     - Network I/O subsystem
   * - fltmgr
     - Filter manager
   * - ob
     - Object manager

If you find a suspicious module, check the file path with the lmvm command, then review information such as the details shown in the file's properties or its digital signature.

If the identified suspicious module is an Insights-related module or a Windows module, collect %windir%\memory.dmp and the agent logs and request a cause analysis.
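The module-by-module review described in step 6 (read STACK_TEXT from top to bottom and stop at the first module that is not a Windows component) can also be scripted. The sketch below is a minimal illustration in Python, not part of the official procedure; the script name, the command-line handling, and the short list of Windows module names (taken from the table above) are assumptions, so extend the list for your environment. ::

    # first_suspect.py -- minimal sketch, not an official tool.
    # Scans a WinDbg STACK_TEXT block from top to bottom and reports the first
    # module that is not in the (intentionally short) list of Windows modules.
    import re
    import sys

    # Windows module names from the table in this section; extend as needed.
    WINDOWS_MODULES = {"nt", "hal", "io", "netio", "fltmgr", "ob"}

    # Matches entries such as:
    #   ffffbd05`dfd57850 : check64!test::Lock+0x30 [c:\test.cpp @ 205]
    ENTRY_RE = re.compile(r":\s+(?P<module>[A-Za-z0-9_]+)!")

    def first_suspect(stack_text):
        """Return the first non-Windows module, top to bottom, or None."""
        for line in stack_text.splitlines():
            match = ENTRY_RE.search(line)
            if not match:
                continue  # skip the STACK_TEXT: header, blank lines, etc.
            module = match.group("module")
            if module.lower() not in WINDOWS_MODULES:
                return module
        return None

    if __name__ == "__main__":
        # Usage: python first_suspect.py stack.txt
        # where stack.txt contains the STACK_TEXT section copied from windbg.
        with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
            suspect = first_suspect(f.read())
        if suspect:
            print("First non-Windows module in the stack:", suspect)
        else:
            print("Only known Windows modules found; review the stack manually.")

Running this against the STACK_TEXT example above prints check64, which matches the manual reading of the call stack. Treat the output only as a starting point, and always confirm the module with lmvm and the file's properties as described above.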
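The first procedure in this section assumes that %windir%\memory.dmp exists on the endpoint. Windows only writes that file when a complete, kernel, or automatic memory dump is configured, which is controlled by the documented CrashDumpEnabled value under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl. The sketch below is an illustrative assumption, not part of the official procedure; it reads the current setting on the endpoint so you can confirm the dump type before trying to reproduce the crash. ::

    # check_dump_config.py -- minimal sketch; run on the endpoint itself.
    # Reads the documented CrashControl registry values to confirm which kind
    # of crash dump Windows will write (and where) on the next BSoD.
    import winreg

    DUMP_TYPES = {
        0: "None",
        1: "Complete memory dump",
        2: "Kernel memory dump",
        3: "Small memory dump (minidump)",
        7: "Automatic memory dump",
    }

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SYSTEM\CurrentControlSet\Control\CrashControl") as key:
        enabled, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")
        try:
            dump_file, _ = winreg.QueryValueEx(key, "DumpFile")
        except FileNotFoundError:
            dump_file = r"%SystemRoot%\MEMORY.DMP"  # default location

    print("CrashDumpEnabled =", enabled, "(" + DUMP_TYPES.get(enabled, "unknown") + ")")
    print("Dump file        =", dump_file)

If CrashDumpEnabled is 3, only a minidump is kept under %SystemRoot%\Minidump and the full %windir%\memory.dmp requested above will not be available, so the dump type has to be changed before the crash is reproduced.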