Oct 2018 build (package)

Distribution Contents

This package has been updated as of 20 Oct 2018.

To use the tools, one needs to already have a valid license with an active maintenance subscription. The license file needs to be placed in the same directory where the tools are located (in the /bin directory). If you experience problems with this distribution, contact info@tzworks.net for assistance.


Release Highlights (20 Oct 2018)


Maintenance Updates

No new features were added in this build; however, various bugs and boundary-condition issues were fixed, primarily across the string handling routines. A few were Unicode related; others were boundary conditions that can be triggered when the artifact data is corrupted (usually from carved data). Since some of these routines are used throughout all the tools, a complete rebuild of the tool set was done.


Release Highlights (22 Sept 2018)


New Tool Added - wpn

wpn is short for Windows Push Notification database parser. The Windows Push Notification Services (WNS) allows applications to send notifications to the user, either as a popup message, a sound that is played, or as an image overlay on an icon/tile present on the status bar or desktop. This service was added by Microsoft starting with Windows 8. Fast forwarding to Windows 10 build 1607, WNS changed the format used to store its records from a Microsoft proprietary database to the widely used SQLite database format. This design has carried forward into the later Windows 10 builds as well. From an artifact parsing standpoint, a database that can be queried with SQL statements is easier to handle from an analyst or developer point of view.

The initial version of the wpn tool targets this newer SQLite format. The tool doesn't require one to understand Structured Query Language (SQL), and it offers the ability to recover records that have been discarded and/or partially overwritten. From the empirical testing done thus far, the number of records recovered from slack/invalid pages usually exceeds the number of valid records; the conclusion is that without looking into the slack/invalid pages, one could leave plenty of artifact data unanalyzed. While some of these recovered records are duplicates, many are not, and consequently they offer additional data over and above the normal valid records. For those interested in more information, see the tool's readme file and/or user's guide.
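
As a quick illustration of where those discarded records live, a SQLite database tracks its freed pages on a freelist recorded in the 100-byte file header, and deleted rows can also linger in the unused portions of active pages. The minimal Python sketch below only reads the header counters; it is not how wpn performs its recovery, and the wpndatabase.db path shown is just a typical location.

    # Minimal sketch: inspect a SQLite database header to see how many pages sit
    # on the freelist (pages whose old record data may still be recoverable).
    # The wpndatabase.db path below is a typical location and may differ per system.
    import struct

    db_path = r"C:\Users\<user>\AppData\Local\Microsoft\Windows\Notifications\wpndatabase.db"

    with open(db_path, "rb") as f:
        header = f.read(100)                      # SQLite file header is 100 bytes

    page_size = struct.unpack(">H", header[16:18])[0]
    if page_size == 1:                            # value 1 means 65536 per the SQLite spec
        page_size = 65536
    first_freelist_page = struct.unpack(">I", header[32:36])[0]
    freelist_page_count = struct.unpack(">I", header[36:40])[0]

    print("page size:           ", page_size)
    print("first freelist page: ", first_freelist_page)
    print("freelist page count: ", freelist_page_count)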

tac Updates

The tac tool has been updated to include two more parsing options. These options focus strictly on carving records from the Activity and ActivityOperations tables in cases where the database has been corrupted or only a partial copy of it can be recovered. See the tool's readme file and/or user's guide for more information.

pescan Updates

There have been a number of folks using the pescan tool to identify suspicious portable executable (PE) files (e.g. exe, dll, sys and similar binaries) using the -anomalies option. For those unfamiliar with this option, it tells the tool to analyze a PE file's internals and determine whether the composition of the PE file looks abnormal. While what is normal or abnormal can be argued as highly subjective, this option does a pretty good job of finding those apps, libraries or driver files that try to do something different from a normal Windows build of a tool.

The tool works fine for PE files that are not missing chunks of data and are not severely corrupted. However, when carving PE files out of unused space on an image with a 3rd party tool, the data can be severely corrupted and/or missing large chunks, which results in the reconstructed PE file containing garbage. This can cause pescan to read that garbage, and if it is interpreted as a reference that is not backed by file data, the tool could crash. Therefore, the updates to this tool improve the robustness of handling PE files that have some sort of corruption or missing data.
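
TZWorks does not publish pescan's internal checks, but the kind of sanity test involved can be sketched with the third-party pefile module: catch files that no longer parse as a PE and flag sections whose raw data claims to extend past the end of the carved file. This is only an illustrative sketch, not pescan's implementation.

    # Minimal sketch (not pescan's actual logic): sanity-check a carved PE file
    # with the third-party 'pefile' module, flagging sections whose raw data
    # extends past the end of the recovered file.
    import os
    import sys
    import pefile

    path = sys.argv[1]
    file_size = os.path.getsize(path)

    try:
        pe = pefile.PE(path, fast_load=True)
    except pefile.PEFormatError as err:
        print(f"{path}: does not parse as a PE ({err})")
        sys.exit(1)

    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        end = section.PointerToRawData + section.SizeOfRawData
        if end > file_size:
            print(f"{name}: raw data runs past end of file ({end} > {file_size})")
        else:
            print(f"{name}: ok")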


Release Highlights (1 Aug 2018)


New Tool Added - tac

tac is short for Timeline ActivitiesCache parser; ActivitiesCache here refers to the ActivitiesCache.db database associated with the new Timeline application, which was released as part of the April 2018 Win10 update. The Timeline application advertises that one can go back in time to find items previously worked on; it keeps a history ranging from the most recent tasks back to about a month ago. Whether returning to a previous Internet search or continuing with a document that was being read or edited, this functionality is built into the Timeline application.

For obvious reasons, this type of user activity history is useful to the forensic analyst. The tac tool targets this database and extracts any user activity recorded. The database relies on the SQLite architecture to store the user activity. With the initial version of tac, one can not only parse out the valid SQLite records, but also recover past records that are in invalid pages and/or slack space. The recovery is done on a best-effort basis and should be considered experimental in nature. See the readme file and/or user's guide for more information about this tool.
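
For readers who want to poke at the database by hand, the valid (non-deleted) rows can be pulled with Python's built-in sqlite3 module. The path, table and column names below come from public descriptions of ActivitiesCache.db and may vary by Windows 10 build; this is not how tac performs its parsing or slack-space recovery.

    # Minimal sketch: query the live (non-deleted) rows of the Timeline database.
    # Path, table and column names are based on public descriptions of
    # ActivitiesCache.db and may differ by Windows 10 build.
    import sqlite3
    from datetime import datetime, timezone

    db_path = r"C:\Users\<user>\AppData\Local\ConnectedDevicesPlatform\<profile>\ActivitiesCache.db"

    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT AppId, StartTime, EndTime FROM Activity ORDER BY StartTime"
    )
    for app_id, start, end in rows:
        start_utc = datetime.fromtimestamp(start, tz=timezone.utc) if start else None
        end_utc = datetime.fromtimestamp(end, tz=timezone.utc) if end else None
        print(start_utc, end_utc, app_id)
    con.close()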

Additional Registry Artifacts parsed in cafae and yaru

A number of improvements were made on the registry parsing front. Listed in no particular order, these include: reporting on additional timestamps embedded in Task Scheduler entries; updated Background Activity Monitor (bam) parsing; a translation of the Windows DigitalProductId to retrieve the activation key used for the OS; parsing for Terminal Server Client entries; and a few others. Various other improvements were made in the form of bug fixes.
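
For reference, the DigitalProductId translation mentioned above is commonly implemented as a base-24 decode of part of the binary value stored under SOFTWARE\Microsoft\Windows NT\CurrentVersion. The sketch below shows the widely documented decoding used for older (pre-Windows 8) product keys; it is illustrative only and not necessarily the exact routine cafae/yaru use.

    # Minimal sketch of the widely documented base-24 decoding of the
    # DigitalProductId value (SOFTWARE\Microsoft\Windows NT\CurrentVersion).
    # This reflects the older (pre-Windows 8) key encoding and is not
    # necessarily the exact translation cafae/yaru perform.
    def decode_product_key(digital_product_id: bytes) -> str:
        chars = "BCDFGHJKMPQRTVWXY2346789"          # base-24 alphabet used by the key
        raw = list(digital_product_id[52:67])        # 15 bytes holding the encoded key
        out = []
        for _ in range(25):                          # product key is 25 characters
            value = 0
            for j in range(14, -1, -1):              # long division of raw by 24
                value = (value << 8) + raw[j]
                raw[j] = value // 24
                value %= 24
            out.append(chars[value])
        key = "".join(reversed(out))
        return "-".join(key[i:i + 5] for i in range(0, 25, 5))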

ntfswalk Zone Information option

For NTFS filesystems, when a file is downloaded from the Internet, a hidden alternate data stream (ADS) is attached to the file. The ADS is named "Zone.Identifier" and originally recorded just which security zone the data was downloaded from. Presently this ADS is also used to store additional information, such as the URL the file was downloaded from. Other information may also be present, depending on the browser used.

Starting with version 0.74, an option was added to extract the "Zone.Identifier" data and include it in the final output. For more information about this option, refer to the readme or user's guide documentation.
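
On a live Windows system, the contents of this stream are easy to inspect by appending the stream name to the file path, as in the short sketch below. This is just an illustration of what the ADS holds; ntfswalk itself reads the stream via raw cluster access rather than the live filesystem API.

    # Minimal sketch (live Windows system only, not ntfswalk's raw-cluster method):
    # read a file's "Zone.Identifier" alternate data stream by appending the
    # stream name to the path.  Newer Windows builds may also include ReferrerUrl
    # and HostUrl lines written by the browser.
    path = r"C:\Users\<user>\Downloads\example.exe"

    try:
        with open(path + ":Zone.Identifier", "r", errors="replace") as ads:
            print(ads.read())          # e.g. "[ZoneTransfer]\nZoneId=3\nHostUrl=..."
    except FileNotFoundError:
        print("no Zone.Identifier stream present")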

Various additions and bug fixes

See the tool change log sections in the individual readme files for more information.


Release Highlights (23 Apr 2018)


New Tool Added - minx

minx is short for Modular Inspection Network Xfer (minx). It acts as a programmable agent to gather forensic data from one or more endpoints and send the data across the network to a central collection point. For this tool, the collection point is one of the older TZWorks tools called nx, operating in server mode.

The network relationship between the minx client and the nx service uses peer-to-peer communication. With peer-to-peer communication, no domain credentials need to be set up when deploying in an enterprise network. As long as minx can reach the nx service's IP address/port without being impacted by firewalls or other network devices that can block IP traffic, the communication should be seamless.

The functionality put into minx includes: (a) an integrated NTFS engine to allow minx to copy any file from a host Windows computer by accessing the file data via raw cluster reads, (b) an ability to scan all drives attached to a Windows computer, (c) an ability to image or copy any number of bytes from a specific drive or volume, (d) an internal directory enumerator with filtering to target specific files within one or a group of subdirectories, (e) the ability to spawn other applications and act on their output, (f) the ability to pull common artifacts from all the volume shadow copies, and (g) an internal scripting engine that allows minx to receive instructions from the nx service and act on them.

More information about minx can be found here.

Handling of extremely fragmented/large NTFS Master File Table ($MFT)

Our internal NTFS library has been updated to handle very large and fragmented $MFT data. All the tools that examine the $MFT as part of their collection or parsing of data have been updated with this change.

As background, the NTFS Master File Table (or $MFT) keeps track of all the cluster runs for the FILE records that identify all the files and folders on the volume. Typically the cluster runs for the $MFT are small enough, from a data storage standpoint, to fit within the $MFT's own resident FILE record. This is because when the NTFS volume is initialized, it tries to reserve a set of clusters that are contiguous (one cluster run if successful) to handle the $MFT data. This is the normal behavior of most systems. However, on servers or systems where an application or service generates hundreds of thousands of tiny files in a short period of time, the $MFT can grow well beyond its initial reserved cluster allocation and consequently becomes fragmented. As new cluster runs are added to the $MFT during this process, the run data eventually grows beyond the space available in the resident record, and the filesystem resorts to allocating new FILE records whose sole purpose is to store the new cluster run data. This situation doesn't happen very often, but when it does, this case is now handled by our tools.
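
To make the term 'cluster run' concrete, the sketch below decodes the NTFS run-list encoding that a non-resident attribute (such as the $MFT's $DATA attribute) uses to describe its runs: a header byte whose nibbles give the sizes of the length and offset fields, followed by those fields, with offsets relative to the previous run. This is a generic illustration of the on-disk format, not our library's code.

    # Minimal sketch: decode an NTFS run list (the on-disk encoding of an
    # attribute's cluster runs).  Each run starts with a header byte whose low
    # nibble is the size of the length field and whose high nibble is the size
    # of the (signed, relative) offset field; a 0x00 byte terminates the list.
    def decode_run_list(data: bytes):
        runs, pos, lcn = [], 0, 0
        while pos < len(data) and data[pos] != 0x00:
            header = data[pos]
            len_size, off_size = header & 0x0F, header >> 4
            pos += 1
            run_len = int.from_bytes(data[pos:pos + len_size], "little")
            pos += len_size
            delta = int.from_bytes(data[pos:pos + off_size], "little", signed=True)
            pos += off_size
            lcn += delta                      # offsets are relative to the previous run
            runs.append((lcn, run_len))       # (starting cluster, length in clusters)
        return runs

    # Example: 0x21 = 1-byte length, 2-byte offset -> 0x18 clusters starting at LCN 0x0534
    print(decode_run_list(bytes([0x21, 0x18, 0x34, 0x05, 0x00])))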


Release Highlights (20 Feb 2018)


USB Artifact Analysis Improvements (evtwalk, evtx_view, cafae, and yaru)

We finally added event log analysis to the usp tool. Event logs used here are the .evtx type logs and not the WinXP .evt type logs. For USB type artifacts, the tool looks at the following logs: (a) System.evtx, (b) Microsoft-Windows-DriverFrameworks-UserMode%4Operational.evtx, (c) Microsoft-Windows-Kernel-PnP%4Configuration.evtx, and (d) Microsoft-Windows-Partition%4Diagnostic.evtx.

The challenge in using event logs for USB report generation is that an event log, in general, can contain numerous events for one USB action, so filtering and translation need to be done to highlight the significant USB actions. A second issue for a tool designer is how to effectively merge the filtered event log transactions with the artifacts obtained from other sources (e.g. the various registry hives and SetupAPI logs). A typical strategy is to record all the transactions in log2timeline output. This merges the temporal sequence of transactions that occurred on the system into a linear timeline. Unfortunately, merging sequences of transactions doesn't necessarily preserve the association between a transaction and a specific USB device, so the analyst must use other techniques to group which transactions belong to which device. To compound the problem, for plug-n-play events there may be a single event ID covering both insertion and removal of a device; this means the event log parsing tool needs to go further and look at the plug-n-play function number to determine whether the device is being 'started' (i.e. inserted) or 'stopped' (i.e. removed).

With this new version of usp, all of the above issues are addressed. The sequence of transactions is preserved, and the event log transactions are merged with the other artifacts so that the association with the various devices on the system is maintained. The tool also performs the deeper event log parsing needed to categorize the event log transactions into 3 basic areas: (a) when the USB device was inserted, (b) when the USB device was removed, and (c) when the USB device driver/service was deleted. As a bonus, additional data is extracted from the Microsoft-Windows-Partition%4Diagnostic.evtx type log, such as the partition table and volume boot record of the USB device. To see the combined grouping of USB devices and the sequence of timelined actions, the log2timeline output (via -csvl2t) has been modified to capture this and retain traceability back to each event record. Please refer to the usp user guide for more details.
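
For analysts who want a quick look at what one of these logs contains before running usp, the sketch below tallies event IDs in the Partition/Diagnostic log using the third-party python-evtx module. It deliberately avoids hardcoding which IDs matter, since the mapping of IDs to insert/remove actions is what usp handles internally.

    # Minimal sketch (not usp's internals): tally event IDs in one of the USB
    # related logs using the third-party 'python-evtx' module, as a first step
    # before filtering/merging records into a timeline.
    import re
    from collections import Counter
    from Evtx.Evtx import Evtx

    log_path = r"C:\Windows\System32\winevt\Logs\Microsoft-Windows-Partition%4Diagnostic.evtx"

    counts = Counter()
    with Evtx(log_path) as log:
        for record in log.records():
            xml = record.xml()
            match = re.search(r"<EventID[^>]*>(\d+)</EventID>", xml)
            if match:
                counts[match.group(1)] += 1

    for event_id, count in counts.most_common():
        print(event_id, count)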

Debug symbols - discussion (sf, pescan, pe_view)

This area is not so much for forensics types, but for those who spend a fair amount of time debugging and reverse engineering. Having the proper debug symbols makes the job go from difficult (without symbols) to much easier. In most cases, if you need a set of debug symbols for the Windows OS, your debugger (WinDBG, for example) goes out to the Microsoft symbol server and requests them without any user interaction. The Microsoft symbol server is a public service that allows debugging tools to access Microsoft's repository of public symbols. Recently, however, the Microsoft symbol server went from allowing tools to request compressed downloads, which was the norm since the inception of the service, to only allowing uncompressed downloads. This broke our sf (symbol fetch) tool, so we modified sf to handle either case (compressed or uncompressed) transparently; it should work either way without the user doing anything different. For those not familiar with this tool, it allows one to enumerate any portable executable (PE) file on a system, pull out the debug GUID signature/metadata, and query the Microsoft symbol server for the proper symbols that go with that PE file. The use-case for this tool is to collect various symbol files from a box with an Internet connection, and sneaker-net the collected symbol files to a closed system where development and analysis is done.

Related to this topic, and probably of more interest to the forensics analyst who deals with malware, is the enhancement made to the pescan report generation. At the request of one of our clients, we added (amongst other data) the debug directory data containing the path/filename where the symbol file was generated. As background for those unfamiliar with this topic, each PE file can have a debug directory as part of its composition. This is the primary way a debugger knows how to find the debug symbols related to that executable, library or device driver file. Thus, when a developer creates a new tool (or builds an update to that tool), the compiler defaults to embedding certain debug metadata, so the resulting PE file contains the filename (and path, if applicable) of where the symbol file was generated. This compiler setting would need to be explicitly turned off by the developer if they didn't want this behavior. What this means, in general, is that most PE files in the wild have the symbol filename embedded in the PE file. This happens to be the same PE directory area where the debug GUID and associated metadata live, and consequently, where the sf tool obtains its data as well.

So, why is the path/file of the symbol file of interest to some malware analysts? In short, if the path is present, it can give you clues about the source of the local development tree. For example, when a developer builds a tool, they are usually working under some parent folder that is common to where other tools are being (or have been) developed. Therefore, if the path is correlated across other PE files, one can deduce whether the author of the tools is the same person or, in some cases, the same organization. Going further, this debug section also has a timestamp embedded in its data, and if that date/time doesn't closely match the compile timestamp, it implies someone explicitly changed the compile timestamp after the build, which is abnormal. In these cases, you are relying on many of the malware writers understanding what the compile timestamp is, but not necessarily the intricacies of the PE internals and the other, less known areas where these additional timestamps are located. Some of the more sophisticated malware authors remove the debug section from the PE file for this reason; many of the less sophisticated authors do not. For those wanting to study this in more depth, one can look at the internals using any number of PE viewer tools, including our pe_view tool.
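
A rough equivalent of this check can be sketched with the third-party pefile module: read the compile timestamp from the file header, then walk the debug directory for the embedded symbol path and its timestamp. Whether the PdbFileName attribute is exposed depends on the pefile version and on the entry being a CodeView record, so treat this as an illustration rather than pescan's implementation.

    # Minimal sketch (not pescan's implementation): pull the embedded symbol path
    # and compare the debug directory timestamp with the PE compile timestamp,
    # using the third-party 'pefile' module.  The PdbFileName attribute depends
    # on the pefile version and on the debug entry being a CodeView record.
    import sys
    import pefile

    pe = pefile.PE(sys.argv[1])
    compile_ts = pe.FILE_HEADER.TimeDateStamp
    print("compile timestamp:", compile_ts)

    for dbg in getattr(pe, "DIRECTORY_ENTRY_DEBUG", []):
        debug_ts = dbg.struct.TimeDateStamp
        pdb_name = getattr(dbg.entry, "PdbFileName", None) if dbg.entry else None
        if pdb_name:
            print("symbol path:", pdb_name.rstrip(b"\x00").decode(errors="replace"))
        if debug_ts and debug_ts != compile_ts:
            print("debug timestamp differs from compile timestamp:", debug_ts)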

gena update

When looking at a VMWare volume, gena was hardcoded to assume the volume started at offset 0. If it didn't find an NTFS volume at offset 0, it would fail. This has now been changed so gena will examine the VMDK (i.e. the disk) and display the various volume offsets it finds, allowing you to select the proper NTFS volume to analyze.


Release Highlights (15 Dec 2017)


Win10 Compatibility Fixes

This release focused on various bug fixes and Win10 compatibility issues. See the individual readme files for the details.


Release Highlights (27 Oct 2017)


cafae and yaru updates

With the October release of Windows 10 Fall Creators Update, the number of Amcache artifacts has grown. In addition, the Amcache format was changed on some existing artifacts. Therefore, both cafae and yaru have been updated to accommodate the additions and changes. The tools will automatically sense which version of the Amcache hive is present during the parsing process, making the operation transparent to the user. The report generated, however, will either have the same artifacts as before or more, depending on the version of the Amcache processed.

Also updated with these tools is the user hive registry parsing to include the HKCU\Software\Microsoft\Windows\CurrentVersion\Search\RecentApps artifact in the output report.

usp updates

Since the newer version of the Amcache hive has device plug-n-play data, there was an attempt to integrate portions of this data into usp's reporting. This enhancement is still beta, and as such, to invoke this new option, one is explicitly required to use a separate command. This was done to ensure the older usp processing is minimally impacted with the change in the codebase, while more testing and analysis is done. Information on how to use this new option is in the readme or user's guide for the tool.

At the suggestion of one of our clients, we enhanced the output of usp's -csvl2t option, by adding the LastRemoval and LastArrival times to the MACB output. Previously, it was only included in the overflow field (or field labeled 'extra') in the log2timeline output. This change allows the additional data to be integrated into one's timeline analysis.

The final enhancement is the processing of some additional setupapi log files introduced with Windows 10. Previously usp targeted the setupapi.dev.log and its related archived versions (e.g. setupapi.dev.yyyymmdd_hhmmss.log), since this log records installations (and de-installations) of USB devices. This new version of usp will now also look at other variants of setupapi log files, such as setupapi.upgrade.log, setupapi.setup.log, etc., since these other logs also contain information about USB devices.

csvdx update

A new option was added to csvdx to allow one to take a mixed-artifact CSV report and group each unique artifact type into separate CSV files. As background, some of the TZWorks tools (like cafae, evtwalk and others) will process raw artifact files and produce a combined CSV report. This allows them to operate in a batch processing mode and process many files in one session. The other benefits are faster processing times while minimizing the footprint of new files generated on the target box (since our tools are designed for live collection/processing). The challenge with merging differing artifact types into one report is separating those same artifacts out later when one wants to put them into a database. That is the objective of this new option: to take any of the reports generated by cafae or evtwalk and group each unique artifact type together in its own CSV file.

While still beta, this option contains some other nice things, like: (a) handling interspersed artifacts and grouping them appropriately, (b) pulling out all banner information from the original CSV file, and (c) allowing one to continuously process other CSV reports and merge their artifacts into previous files generated. More information about this option is in csvdx's user's guide and/or readme file.
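
The grouping idea itself is straightforward and can be sketched in a few lines of Python: read the mixed report, key each row on its artifact type, and stream rows into per-type CSV files. The column name "artifact" below is a hypothetical placeholder; the actual column in a cafae/evtwalk report depends on the options used to generate it, and csvdx layers the banner handling and merge logic described above on top of this.

    # Minimal sketch (not csvdx itself): split a mixed-artifact CSV report into
    # one CSV file per artifact type.  The column name "artifact" is a
    # hypothetical placeholder for whatever column identifies the artifact type.
    import csv
    import sys

    writers, handles = {}, {}
    with open(sys.argv[1], newline="") as src:
        reader = csv.DictReader(src)
        for row in reader:
            kind = row.get("artifact", "unknown")
            if kind not in writers:
                handles[kind] = open(f"{kind}.csv", "w", newline="")
                writers[kind] = csv.DictWriter(handles[kind], fieldnames=reader.fieldnames)
                writers[kind].writeheader()
            writers[kind].writerow(row)

    for handle in handles.values():
        handle.close()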

Release Highlights (30 Aug 2017)


New Tool Added - tela

tela is short for Trace Event Log & Analysis. It was designed to parse ETL (Event Trace Log) files, which have been common in Windows for some years now. As background, Windows incorporates a built-in framework for logging and diagnostics that goes beyond the standard event logs (.evt or .evtx files). From a forensics standpoint, these trace logs (.etl files) contain information that could be useful to the DFIR analyst, since they record timestamps for the events, which user SID/process ID was responsible for the action, and formatted messages provided by the application. The information is similar to that in the normal event logs, but the trace log provides much more information from a frequency standpoint; the time interval between records is much shorter. While great for performance tuning and debugging, these logs can also be used by attackers to gain information about a system. They can be turned on and off easily, and the ETL framework allows easy remoting of the log data to another machine.

To date, there are only a few tools available that parse ETL files reliably, and most of the good ones are from Microsoft. The goal with tela was to have a portable version (to work on Linux, OSX as well as Windows) that could parse ETL data across the various types of providers. The other goal was to break out the various disjointed provider data into common fields to make it easier to export the parsed data into a separate database. Even though tela is still in the prototype/experimental phase, it does a good job of parsing out much of the metadata contained in these files across the various providers. The current version only authenticates for clients that have an enterprise license. More information about this tool and its capabilities can be found here.

Registry Stats and Entropy

This subject affects both cafae and yaru. Based on suggestions from a client, the functionality to scan for 'very large values' as well as 'high entropy values' was added into these tools. The term 'high entropy' means close to random data, which occurs when something is encrypted (highly random) or uses a compression algorithm (mostly random). Both tools now have this capability. See the respective readme or user guide to get the details about how to use these options, if interested.
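
The 'high entropy' notion can be made concrete with the Shannon entropy of a value's byte frequencies, measured in bits per byte: values approaching 8.0 indicate nearly random content, which is typical of encrypted or compressed data. The sketch below shows the calculation; the 7.5 cutoff is only illustrative and is not necessarily the threshold cafae/yaru apply.

    # Shannon entropy of a registry value's raw bytes, in bits per byte.
    # Values approaching 8.0 suggest encrypted or compressed content; the 7.5
    # threshold below is only an illustrative cutoff, not what cafae/yaru use.
    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    blob = bytes(range(256)) * 4        # stand-in for a registry value's data
    if shannon_entropy(blob) > 7.5:
        print("high entropy value (possibly encrypted or compressed)")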

TypedURLs & TypedURLsTimes

These are subkeys in the ntuser.dat registry hive. One provides the browser URLs and the other provides the timestamp associated with each URL entry. Since they are separate subkeys, the reporting was disjointed. With this update, the data from both subkeys is merged into one report.
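
The merge itself amounts to pairing each url<N> value in TypedURLs with the 64-bit FILETIME of the same name in the companion timestamp subkey. The sketch below does this with the third-party python-registry module; the "TypedURLsTime" subkey name and FILETIME layout follow common public documentation of the ntuser.dat hive and are assumptions here, not a description of cafae/yaru's code.

    # Minimal sketch (not cafae/yaru's code): merge TypedURLs entries with their
    # FILETIME timestamps using the third-party 'python-registry' module.  The
    # "TypedURLsTime" subkey name and FILETIME layout reflect common public
    # documentation of the ntuser.dat hive and are assumptions here.
    import struct
    from datetime import datetime, timedelta
    from Registry import Registry

    reg = Registry.Registry("NTUSER.DAT")
    urls = reg.open("Software\\Microsoft\\Internet Explorer\\TypedURLs")
    times = reg.open("Software\\Microsoft\\Internet Explorer\\TypedURLsTime")
    stamps = {v.name(): v.value() for v in times.values()}

    for v in urls.values():                      # value names are url1, url2, ...
        raw = stamps.get(v.name())
        when = None
        if raw:
            filetime = struct.unpack("<Q", raw)[0]
            when = datetime(1601, 1, 1) + timedelta(microseconds=filetime / 10)
        print(v.name(), when, v.value())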

Message Table extraction

This is something that relates to certain Portable Executable (PE) files. Message Tables, if present in a PE file, are embedded in the resource section. Message Tables are used to store event log templates and ETL (tracelog) templates. Both pescan and pe_view had (and still have) this parsing ability. This updated version of the tools, however, required us to 'beef up' the parsing in this area to accommodate reversing the ETL internal structures for the tela tool discussed above. Secondly, we incorporated the ability for tela to invoke pescan, if desired, to pull out the Message Table resource quickly and display it, to assist in analyzing differing provider template data.

USB history

A couple of bug fixes were added to usp as they relate to the WinXP version of the SetupAPI log. If you are using an older version of usp and are parsing USB artifacts on a WinXP image or box, then upgrade to this newer version.


Release Highlights (04 May 2017)


Eventlog Tool Updates

Enhancements were added to the eventlog parsing engine to handle those cases where EVTX logs don't contain template references. As background, normal EVTX logs have embedded templates to identify the context of the binary XML data. These templates provide a more compact way of storing the complete log information. Without the template, each record in the log needs to include the context of the data, which in turn makes for a larger log file. However, doing it this way makes it easy to forward events from a client machine to another machine acting as a log collector, since all the state/context information is encapsulated in the record itself. These logs are referred to as forwarded event logs and, for the reasons just stated, usually do not contain any template references. These logs also contain other nuances, such as 2 timestamps and 2 record identifiers per record, since one record id and timestamp comes from the client machine and the other from the collector. These types of logs can now be handled with version 0.38 of evtwalk and version 0.94 of evtx_view.

A second enhancement was made to evtwalk to allow one to create a new EVTX log that is a subset of an existing EVTX log. Occasionally, it is necessary to strip out specific records and/or certain events from a very large log and create a separate log that is much more manageable. This is usually done to aid in debugging problem logs. Since this may also be useful for clients in other situations, it was made available as a new option [-createlog], which allows one to create a subset log based on either event identifier(s) or a record number/range.

Shimcache data for Win10 Creators Update

The Shimcache data structure was extended with the latest update for Win10, which Microsoft has named the "Creators Update". Changes were made to the wacu, cafae and yaru tools to allow them to handle the new format.

Registry Parsing of Corrupted Hives

There are cases when one will come across corrupted hives, or will only be able to partially reconstruct a hive with another tool. One situation where this happens frequently is the reconstruction of hives from a memory capture (ref: the Volatility plugin dumpregistry). In some cases, the desired hive(s) can be reconstructed completely. In other cases, the desired hive(s) may have had portions of the data paged out (meaning the needed data was not in physical memory) at the time of the memory capture. In the latter case, the reconstruction of the hive(s) is incomplete, leaving holes in the dataset that can cause any number of registry parsers to crash. Our tools were no exception and were susceptible to this issue as well. So we decided to beef up the error checking in our registry parsing engine to account for these types of corrupted hives. This new update makes the registry parsing engine more robust, and our preliminary tests show it can withstand most hives with holes in them. Keep in mind, this is an on-going process, and there will be some situations where the tool may not handle the corruption. Therefore, if you encounter a hive that causes any of our tools a problem, please contact us.

Improvements to dup

Modified dup to be more robust for drives with bad sectors. In addition, updated the -pull_evtlogs command to also extract Event Trace Logs (ETL) from the system directory.


Release Highlights (23 Mar 2017)


Additional improvements to jp

In addition to the improvements discussed earlier in the month, we had a new request to see if we could address USNJRNL artifacts in slack space. So with this latest version, there is a new option called -include_slack_space. It will traverse all the MFT records, scanning slack space for change log journal records and, if found, add their parsed content to the report. Also, to help with manual verification of the data, whether it be from unallocated data, Volume Shadow clusters, or just slack space, we improved the -show_offset option, which annotates the drive offset of where the USNJRNL artifact was found.

Improvements to lp

We have been getting requests to extend the functionality of our LNK parser. So with this update, we added our NTFS engine into the parser. This allows the tool to specifically target the MFT records, unallocated clusters and Volume Shadow clusters. When combining these options together, the tool yields more results than the previous -rawscan option, which just looked at sector signatures to locate and parse LNK metadata. For those curious why the new option yields more results: the tool is now NTFS aware, so it can reconstruct fragmented files and thus more completely parse LNK data. While normal LNK data is most likely small and usually fits within a cluster (meaning it isn't fragmented), Jump List data is typically larger since it holds a collection of LNK data, and therefore can be fragmented. So using the new -ntfs_scan option with the additional sub-options allows lp to additionally pull out the internal LNK data from Jump Lists, thus yielding more results.

dup - bug fixes

We uncovered various bugs with imaging certain volume types using the -copyvolume option. This occurred with certain GPTs (GUID Partition Tables) when extracting the partition information, or lack thereof, using some older APIs. With this updated version, this is fixed and the tool should handle various partition tables, whether MBR or GPT. Other bugs were also found in the compression/decompression routines, and those were fixed as well.


Release Highlights (2 Mar 2017)


New Tool Added - dup

dup is short for Disk Utility and Packer. It was designed for clients with an enterprise license to assist their incident responders in collecting artifacts from live endpoints. Later, after all the raw artifact data is collected, they can process and analyze those artifacts on a forensic workstation.

Still in the prototype/testing stages, the tool can: (a) generate disk stats, (b) do simple master boot record analysis, (c) image a drive, volume or a specific set of clusters, (d) target the volume shadows and (e) copy files or folders. dup offers an internal scripting capability to automate collection, as well as a few utilities to assist in merging and combining files. More information about this tool and its capabilities can be found here.

Improvements to jp

We expanded the functionality of jp to target Volume Shadow clusters whether the Volume Shadow Snapshots are mounted or not, via the new option -include_vss_clusters. This option can be used in combination with targeting unallocated clusters (via -include_unalloc_clusters). Using both of these options together can produce extensive change log journal entries.

Also, we added the ability for jp to recognize multiple change log journal formats within a single log file. Specifically, these newer formats are designated USN v3 and USN v4 and can be used with the newer Windows operating systems. For those not familiar with the versions, the normal version for a Windows operating system is USN v2. These other formats are not enabled by default; they can, however, be enabled by an administrator. The details of these other records are defined in the Microsoft SDK (software development kit), and their internal layouts are shown in jp's readme file. The newer formats allow for 128-bit inode values as opposed to 64-bit, range tracking, and other enhancements.
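
For reference, the baseline USN v2 record layout from the Microsoft SDK can be parsed with a few lines of Python, as sketched below; the v3/v4 variants extend this layout (128-bit file references, range tracking, etc.). This is a generic illustration, not jp's parser.

    # Minimal sketch (not jp's code): parse one USN v2 record from a raw
    # $UsnJrnl:$J buffer, following the USN_RECORD_V2 layout in the Microsoft SDK.
    import struct
    from datetime import datetime, timedelta

    def parse_usn_v2(buf: bytes, offset: int = 0):
        (rec_len, major, minor, file_ref, parent_ref, usn,
         timestamp, reason, source_info, security_id, attrs,
         name_len, name_off) = struct.unpack_from("<IHHQQQQIIIIHH", buf, offset)
        if major != 2:
            raise ValueError("not a USN v2 record")
        name = buf[offset + name_off: offset + name_off + name_len].decode("utf-16-le")
        when = datetime(1601, 1, 1) + timedelta(microseconds=timestamp / 10)
        return {"usn": usn, "name": name, "timestamp": when,
                "reason": reason, "record_length": rec_len}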

Improved handling of files with NTFS native compression

We beefed up the NTFS native compression library robustness, to improve handling of files that are compressed by NTFS on the filesystem. This improvement affects all the tools that access the raw NTFS clusters, including: ntfswalk, ntfscopy, ntfsdir, gena, wisp, sbag, cafae, wacu, yaru, usp, elmo, tia.


Release Highlights (Nov 2016)


New Tool Added - tia

At the request of one of our clients, we created this new tool. tia is short for Trash Inspection & Analysis, and as its name suggests, parses Windows trash (or Recycle Bin) artifacts. The tool is designed to work with the different versions of these artifact formats from WinXP to Win10. The tool has the ability to scan various locations looking for current or deleted trash artifacts, including: (a) the MFT table, (b) the volume shadow snapshots, and/or (c) unallocated clusters in a volume. More information about the tool can be found here.
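
As a small illustration of the per-version formats involved, the sketch below parses a single Recycle Bin $I metadata file, which records the original path, size and deletion time of a trashed file. The layout shown follows the commonly documented version-2 (Windows 10) and version-1 (Vista through Win8) $I structures and is not tia's code, which also scans the MFT, shadow copies and unallocated clusters.

    # Minimal sketch (not tia's code): parse a Recycle Bin $I file, which records
    # the original path, size, and deletion time of a trashed file.  The layout
    # reflects the commonly documented version-2 (Win10) and version-1
    # (Vista - Win8) formats.
    import struct
    import sys
    from datetime import datetime, timedelta

    with open(sys.argv[1], "rb") as f:
        data = f.read()

    version, size, filetime = struct.unpack_from("<QQQ", data, 0)
    if version == 2:                                   # Win10 style
        name_len = struct.unpack_from("<I", data, 24)[0]
        name = data[28:28 + name_len * 2].decode("utf-16-le").rstrip("\x00")
    else:                                              # Vista - Win8 style (version 1)
        name = data[24:24 + 520].decode("utf-16-le").rstrip("\x00")

    deleted = datetime(1601, 1, 1) + timedelta(microseconds=filetime / 10)
    print("original path :", name)
    print("size          :", size)
    print("deleted (UTC) :", deleted)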


Expanded functionality added to usp

We updated the parsing engine for usp to be more flexible in how it parses USB artifact data. Specifically, the older engine initially looked for the data from the System hive and then used what it found to proceed to the other artifacts (such as the Software hive, SetupAPI logs, etc). While this approach is fine and works, with the advent of USB cleaners and USB device removal software built into the new versions of Windows, the USB artifact data in the System hive can be mostly removed. Therefore, to account for this, we changed the usp algorithm to not rely on the System hive data. While the tool still uses the data in the System hive and integrates it into the reporting, one can run usp without using the System hive.

In addition, we added a number of new enhancements to the tool: (a) the ability to parse multiple System hives (or Software hives) in one session and merge the results into the reporting, (b) the ability of the tool to ingest all the artifact files via standard input, using the -pipe option (which makes it much easier to pass artifact data into usp), (c) an option to show additional metadata (and timestamps) in the report, and (d) an option to go after both the primary and RegBack (or backup) hives in one session. More information about the tool/updates can be found here.


Downloads

             32-bit Version            64-bit Version
Windows:     2018.10.20.win32.zip      2018.10.20.win64.zip      md5/sha1
Linux:       Not Available             2018.10.20.lin64.zip      md5/sha1
Mac OS X:    Not Available             2018.10.20.osx.zip        md5/sha1
*32-bit apps can run in a 64-bit Linux distribution if "ia32-libs" (and dependencies) are present.