*.h text diff=cpp
*.c text diff=cpp
*.cpp text diff=cpp
*.java text diff=java
*.txt text
*.xml text
*.properties-MERGED text
*.html text diff=html
*.dox text
*.am text
*.ac text
*.m4 text
*.pro text
*.in text
*.1 text
Makefile text
Doxyfile text
*.py text diff=python
*.pl text diff=perl
*.pm text diff=perl
*.base text diff=perl
*.vcproj text eol=crlf
*.vcxproj* text eol=crlf
*.sln text eol=crlf
*.bat text eol=crlf
.gitignore text
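How Git resolves these attribute rules for a given path can be verified with `git check-attr`; a self-contained sketch using a scratch repository and an illustrative file name:

```shell
# Scratch repo demonstrating how Git applies a rule like the ones above
# to a Visual Studio solution file (file name is illustrative).
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '*.sln text eol=crlf\n' > .gitattributes
git check-attr text eol -- tsk-win.sln
# prints:
#   tsk-win.sln: text: set
#   tsk-win.sln: eol: crlf
```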
# NetBeans user-specific settings
/bindings/java/nbproject/private/
# Bindings dependencies and build folders
/bindings/java/lib/
/bindings/java/build/
/bindings/java/dist
/bindings/java/doxygen/tskjni_doxygen.tag
/bindings/java/test/output/results
/bindings/java/test/output/gold/dummy
/bindings/java/test/output/gold/*_BU.txt
/bindings/java/test/output/gold/*_CPP.txt
/bindings/java/test/output/gold/*_CPP_SRT.txt
/bindings/java/test/input
/bindings/java/nbproject/genfiles.properties
/bindings/java/nbproject/nbjdk.properties
/bindings/java/nbproject/jdk.xml
/bindings/java/nbproject/nbjdk.xml
/bindings/java/libts*
*~
*.class
/bindings/java/build/
/bindings/java/dist/
/bindings/java/nbproject/*
!/bindings/java/nbproject/project.xml
!/bindings/java/nbproject/project.properties
# Nuget packages
/win32/packages/
# CASE-UCO build and release folder
/case-uco/java/build/
/case-uco/java/dist/
/case-uco/java/nbproject/private/
/case-uco/java/nbproject/genfiles.properties
# Windows build folders
/win32/Debug_NoLibs/
/win32/*/Debug_NoLibs/
/win32/Debug/
/win32/Debug_PostgreSQL/
/win32/*/Debug/
/win32/*/Debug_PostgreSQL/
/win32/Release/
/win32/Release_PostgreSQL/
/win32/*/Release/
/win32/*/Release_PostgreSQL/
/win32/Release_NoLibs/
/win32/*/Release_NoLibs/
/win32/*/x64/
/win32/x64/
/win32/*/*.user
win32/ipch
win32/BuildErrors.txt
win32/BuildErrors-64bit.txt
win32/.vs
win32/tsk-win.VC.VC.opendb
win32/tsk-win.VC.opendb
win32/tsk-win.VC.db
framework/msvcpp/framework/Debug/
framework/msvcpp/framework/Release/
framework/msvcpp/*/*.user
framework/msvcpp/*/Debug/
framework/msvcpp/*/Release/
framework/msvcpp/BuildLog.txt
framework/msvcpp/*/ipch
framework/runtime/
framework/SampleConfig/to_install/
framework/modules/*/win32/Debug/
framework/modules/*/win32/Release/
framework/modules/*/win32/*.user
framework/modules/c_InterestingFilesModule/tsk
framework/config.h
framework/tools/tsk_analyzeimg/tsk_analyzeimg
framework/tools/tsk_validatepipeline/tsk_validatepipeline
rejistry++/msvcpp/*/Debug
rejistry++/msvcpp/*/Release
rejistry++/msvcpp/*/Release_NoLibs
rejistry++/msvcpp/*/x64
rejistry++/msvcpp/*/*.user
rejistry++/msvcpp/rejistry++/ipch
# Release files
release/sleuthkit-*
release/clone
# IntelliSense data
/win32/*.ncb
/win32/*.sdf
framework/msvcpp/framework/*.ncb
framework/msvcpp/framework/*sdf
rejistry++/msvcpp/rejistry++/*.ncb
rejistry++/msvcpp/rejistry++/*sdf
# Visual Studio user options
/win32/tsk-win.suo
framework/msvcpp/framework/*.suo
rejistry++/msvcpp/rejistry++/*suo
*.sln.cache
win32/tsk-win.opensdf
# Make crud
*.o
*.lo
*.la
*.jar
Makefile
.deps
.libs
*.swp
#javadoc generated
/bindings/java/javadoc
# Files generated by running configure
*.in
stamp-h1
tsk/tsk_config.h
tsk/tsk_incs.h
tsk/tsk.pc
aclocal.m4
autom4te.cache
config.log
config.status
configure
libtool
m4/libtool.m4
m4/lt*.m4
config/*
# Executables
samples/callback_cpp_style
samples/callback_style
samples/posix_cpp_style
samples/posix_style
samples/*.exe
tests/*.exe
tests/*.log
tests/*.trs
tests/fs_attrlist_apis
tests/fs_fname_apis
tests/fs_thread_test
tests/read_apis
tools/autotools/tsk_comparedir
tools/autotools/tsk_gettimes
tools/autotools/tsk_imageinfo
tools/autotools/tsk_loaddb
tools/autotools/tsk_recover
tools/fiwalk/plugins/jpeg_extract
tools/fiwalk/src/fiwalk
tools/fiwalk/src/test_arff
tools/fstools/blkcat
tools/fstools/blkcalc
tools/fstools/blkls
tools/fstools/blkstat
tools/fstools/fcat
tools/fstools/ffind
tools/fstools/fls
tools/fstools/fsstat
tools/fstools/icat
tools/fstools/ifind
tools/fstools/ils
tools/fstools/istat
tools/fstools/jcat
tools/fstools/jls
tools/fstools/usnjls
tools/hashtools/hfind
tools/imgtools/img_cat
tools/imgtools/img_stat
tools/pooltools/pstat
tools/sorter/sorter
tools/srchtools/sigfind
tools/srchtools/srch_strings
tools/timeline/mactime
tools/vstools/mmcat
tools/vstools/mmls
tools/vstools/mmstat
tools/*/*.exe
tools/*/*/*.exe
unit_tests/base/*.log
unit_tests/base/*.trs
unit_tests/base/test_base
# EMACS backup files
*~
# Mac Junk
.DS_Store
# Test images
*.img
*.vhd
*.E01
*.vmdk
sleuthkit-*.tar.gz
#Test data folder
tests/data
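Whether a given path is covered by these ignore rules can be checked with `git check-ignore`; a self-contained sketch using a scratch repository and a trimmed rule set:

```shell
# Scratch repo with two of the patterns above, showing how Git matches them.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '*.o\n/win32/Debug/\n' > .gitignore
mkdir -p win32/Debug
touch main.o win32/Debug/tsk.obj
# -v reports which .gitignore line matched each path
git check-ignore -v main.o win32/Debug/tsk.obj
```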
language: cpp
matrix:
include:
- compiler: clang
os: linux
dist: bionic
sudo: required
group: edge
- compiler: gcc
os: linux
dist: bionic
sudo: required
group: edge
- compiler: clang
os: osx
- compiler: gcc
os: osx
addons:
apt:
update: true
packages:
- libafflib-dev
- libewf-dev
- libpq-dev
- autopoint
- libsqlite3-dev
- ant
- ant-optional
- libcppunit-dev
- wget
- openjdk-8-jdk
homebrew:
update: true
packages:
- ant
- wget
- libewf
- gettext
- cppunit
- afflib
taps: homebrew/cask-versions
casks: adoptopenjdk8
python:
- "2.7"
install:
- ./travis_install_libs.sh
before_script:
- if [ $TRAVIS_OS_NAME = linux ]; then
sudo update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java;
sudo update-alternatives --set javac /usr/lib/jvm/java-8-openjdk-amd64/bin/javac;
export PATH=/usr/bin:$PATH;
unset JAVA_HOME;
fi
- if [ $TRAVIS_OS_NAME = "osx" ]; then
export PATH=${PATH}:/usr/local/opt/gettext/bin;
brew uninstall java --force;
brew cask uninstall java --force;
fi
script:
- javac -version
- ./bootstrap && ./configure --prefix=/usr && make
- pushd bindings/java/ && ant -q dist
# don't run tests on osx; libtsk not present due to SIP on osx: VIK-6971
- if test ${TRAVIS_OS_NAME} != "osx"; then
ant -q test;
fi
- popd
- pushd case-uco/java/ && ant -q && popd
- make check && if [ -f "tests/test-suite.log" ];then cat tests/test-suite.log; fi ; if [ -f "unit_tests/base/test-suite.log" ];then cat unit_tests/base/test-suite.log; fi
- if test ${TRAVIS_OS_NAME} = "linux"; then
pushd release && ./release-unix.pl ci && popd;
fi
Changes to make once we are ready to do a backwards-incompatible change.
- TSK_SERVICE_ACCOUNT to TSK_ACCOUNT
- HashDB to use new TSK_BASE_HASHDB enum instead of its own ENUM
- Java SleuthkitCase.addArtifactType should return a different result if the artifact already exists, or getArtifactId should....
- Java SleuthkitCase.findFilesWhere should return AbstractFile like findFiles
- getUniquePath() should not throw exception.
- findFilesInImage should return an enum like TskDB methods differentiating if any data was found or not.
- remove addImageInfo in db_Sqlite that does not take MD5, and/or make it take IMG_INFO as argument
This program does not distribute an official ChangeLog file. You
can, however, generate one from the Subversion repository using the
following command:
svn log http://svn.sleuthkit.org/repos/sleuthkit/
For a specific release, try something like:
svn log http://svn.sleuthkit.org/repos/sleuthkit/tags/sleuthkit-3.0.0
and replace 3.0.0 with the version you are interested in.
The Sleuth Kit
http://www.sleuthkit.org/sleuthkit
Installation Instructions
Last Modified: Oct 2022
REQUIREMENTS
=============================================================================
Tested Platform:
- FreeBSD 2-6.*
- Linux 2.*
- OpenBSD 2-3.*
- Mac OS X
- SunOS 4-5.*
- Windows
Build System (to compile from a source distribution):
- C/C++ compiler (C++14 required)
- GNU Make
- Java compiler / JDK (if you want the java bindings)
Development System (to extend TSK or compile from the repository):
- GNU autoconf, automake, and libtool
- Plus the build system requirements
Optional Programs:
- Autopsy: Provides a graphical HTML-based interface to The
Sleuth Kit (which makes it much easier to use). Install this AFTER
installing The Sleuth Kit.
Available at: http://www.sleuthkit.org/autopsy
Optional Libraries:
There are optional features that TSK can use if you have installed
them before you build and install TSK.
- AFFLIB: Allows you to process disk images that are stored in the
AFF format. Version 3.3.6 has been tested to compile and work with this
release.
Available at: http://www.afflib.org
- LibEWF: Allows you to process disk images that are stored in the
Expert Witness format (EnCase Format). Version 20130128 has been
tested to compile and work with this release. It is the last
stable release of libewf and therefore the only one that we
currently support. You can download it from:
https://github.com/sleuthkit/libewf_64bit
The official repository is available here, but there is not
a package of the last stable release:
https://github.com/libyal/libewf-legacy
Available at: http://sourceforge.net/projects/libewf/
- Libvhdi: Allows you to process disk images that are stored in the
Virtual Hard Disk format (VHD).
The official repository is available here:
https://github.com/libyal/libvhdi
- Libvmdk: Allows you to process disk images that are stored in the
VMware Virtual Disk format (VMDK).
The official repository is available here:
https://github.com/libyal/libvmdk
- Libvslvm: Allows you to access the Linux Logical Volume Manager (LVM) format
that is stored on a disk image. A stand-alone version of libbfio is needed
to allow libvslvm to directly read from a TSK_IMAGE.
The official repository is available here:
https://github.com/libyal/libvslvm
https://github.com/libyal/libbfio
INSTALLATION
=============================================================================
Refer to the README_win32.txt file for details on Windows.
The Sleuth Kit uses the GNU autotools for building and installation.
There are a few steps to this process. First, run the 'configure'
script in the root TSK directory. See the CONFIGURE OPTIONS section
for useful arguments that can be given to 'configure'.
$ ./configure
If there were no errors, then run 'make'. If you do not have a
'configure' script, then it is probably because you cloned the
source code repository. If so, you will need to have automake,
autoconf, and libtool installed and you can create the configure
script using the 'bootstrap' script in the root directory.
$ make
The 'make' process will take a while and will build the TSK tools.
When this process is complete, the libraries and executables will
be located in the TSK sub-directories. To install them, type
'make install'.
$ make install
By default, this will copy everything in to the /usr/local/ structure.
So, the executables will be in '/usr/local/bin'. This directory will
need to be in your PATH if you want to run the TSK commands without
specifying '/usr/local/bin' every time.
If you get an error like:
libtool: Version mismatch error. This is libtool 2.2.10, but the
libtool: definition of this LT_INIT comes from libtool 2.2.4.
libtool: You should recreate aclocal.m4 with macros from libtool 2.2.10
libtool: and run autoconf again.
Run:
./bootstrap
and then go back to running configure and make. To run 'bootstrap',
you'll need to have the autotools installed (see the list at the
top of this page).
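Putting the steps above together, a typical build from a repository checkout can be sketched as follows (wrapped in a function so nothing runs until it is invoked from the TSK source root):

```shell
# Sketch of the full autotools build sequence described above.
# 'bootstrap' is only needed for repository checkouts that lack 'configure'.
build_tsk() {
  ./bootstrap &&                      # regenerate 'configure' (needs autotools)
  ./configure --prefix=/usr/local &&
  make &&
  make install                        # may require root for /usr/local
}
# Run from the TSK source root with: build_tsk
```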
CONFIGURE OPTIONS
-----------------------------------------------------------------------------
There are some arguments to 'configure' that you can supply to
customize the setup. Currently, they focus on the optional disk
image format libraries.
--without-afflib: Supply this if you want TSK to ignore AFFLIB even
if it is installed.
--with-afflib=dir: Supply this if you want TSK to look in 'dir' for
the AFFLIB installation (the directory should have 'lib' and 'include'
directories in it).
--without-ewf: Supply this if you want TSK to ignore libewf even
if it is installed.
--with-libewf=dir: Supply this if you want TSK to look in 'dir' for
the libewf installation (the directory should have 'lib' and 'include'
directories in it).
--without-libvhdi: Supply this if you want TSK to ignore libvhdi even
if it is installed.
--with-libvhdi=dir: Supply this if you want TSK to look in 'dir' for
the libvhdi installation (the directory should have 'lib' and 'include'
directories in it).
--without-libvmdk: Supply this if you want TSK to ignore libvmdk even
if it is installed.
--with-libvmdk=dir: Supply this if you want TSK to look in 'dir' for
the libvmdk installation (the directory should have 'lib' and 'include'
directories in it).
--without-libvslvm: Supply this if you want TSK to ignore libvslvm even
if it is installed.
--with-libvslvm=dir: Supply this if you want TSK to look in 'dir' for
the libvslvm installation (the directory should have 'lib' and 'include'
directories in it).
--without-libbfio: Supply this if you want TSK to ignore libbfio even
if it is installed.
--with-libbfio=dir: Supply this if you want TSK to look in 'dir' for
the libbfio installation (the directory should have 'lib' and 'include'
directories in it).
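These flags can be combined in a single 'configure' invocation; a hypothetical example (the prefix /opt/libewf is illustrative), printed rather than executed so the sketch is runnable anywhere:

```shell
# Hypothetical invocation: take libewf from a custom prefix (the directory
# must contain 'lib' and 'include') and ignore AFFLIB even if installed.
cmd="./configure --with-libewf=/opt/libewf --without-afflib"
echo "$cmd"
```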
-----------------------------------------------------------------------------
Brian Carrier
carrier <at> sleuthkit <dot> org
# File that we want to include in the dist
EXTRA_DIST = README_win32.txt README.md INSTALL.txt ChangeLog.txt NEWS.txt API-CHANGES.txt \
licenses/README.md licenses/GNUv2-COPYING licenses/GNUv3-COPYING licenses/IBM-LICENSE \
licenses/Apache-LICENSE-2.0.txt licenses/cpl1.0.txt licenses/bsd.txt licenses/mit.txt \
m4/*.m4 \
docs/README.txt \
packages/sleuthkit.spec \
win32/BUILDING.txt \
win32/*/*.vcxproj \
win32/tsk-win.sln \
win32/NugetPackages.props \
win32/docs/* \
bindings/java/README.txt \
bindings/java/*.xml \
bindings/java/doxygen/Doxyfile \
bindings/java/doxygen/*.dox \
bindings/java/doxygen/*.html \
bindings/java/nbproject/project.xml \
bindings/java/src/org/sleuthkit/datamodel/*.java \
bindings/java/src/org/sleuthkit/datamodel/*.html \
bindings/java/src/org/sleuthkit/datamodel/*.properties \
bindings/java/src/org/sleuthkit/datamodel/blackboardutils/*.java \
bindings/java/src/org/sleuthkit/datamodel/blackboardutils/attributes/*.java \
bindings/java/src/org/sleuthkit/datamodel/Examples/*.java \
bindings/java/src/*.html \
case-uco/java/*.xml \
case-uco/java/*.md \
case-uco/java/nbproject/*.xml \
case-uco/java/nbproject/*.properties \
case-uco/java/src/org/sleuthkit/caseuco/*.java \
case-uco/java/test/org/sleuthkit/caseuco/*.java
ACLOCAL_AMFLAGS = -I m4
# directories to compile
if CPPUNIT
UNIT_TESTS=unit_tests
endif
# Compile java bindings if all of the dependencies existed
if X_JNI
JAVA_BINDINGS=bindings/java
JAVA_CASEUCO=case-uco/java
else
JAVA_BINDINGS=
JAVA_CASEUCO=
endif
SUBDIRS = tsk tools tests samples man $(UNIT_TESTS) $(JAVA_BINDINGS) $(JAVA_CASEUCO)
nobase_include_HEADERS = tsk/libtsk.h tsk/tsk_incs.h \
tsk/base/tsk_base.h tsk/base/tsk_os.h \
tsk/img/tsk_img.h tsk/vs/tsk_vs.h tsk/img/pool.hpp tsk/img/logical_img.h \
tsk/vs/tsk_bsd.h tsk/vs/tsk_dos.h tsk/vs/tsk_gpt.h \
tsk/vs/tsk_mac.h tsk/vs/tsk_sun.h \
tsk/fs/tsk_fs.h tsk/fs/tsk_ffs.h tsk/fs/tsk_ext2fs.h tsk/fs/tsk_fatfs.h \
tsk/fs/tsk_ntfs.h tsk/fs/tsk_iso9660.h tsk/fs/tsk_hfs.h tsk/fs/tsk_yaffs.h tsk/fs/tsk_logical_fs.h \
tsk/fs/tsk_apfs.h tsk/fs/tsk_apfs.hpp tsk/fs/apfs_fs.h tsk/fs/apfs_fs.hpp tsk/fs/apfs_compat.hpp \
tsk/fs/decmpfs.h tsk/fs/tsk_exfatfs.h tsk/fs/tsk_fatxxfs.h tsk/fs/tsk_xfs.h \
tsk/hashdb/tsk_hashdb.h tsk/auto/tsk_auto.h \
tsk/auto/tsk_is_image_supported.h tsk/auto/guid.h \
tsk/pool/tsk_pool.h tsk/pool/tsk_pool.hpp tsk/pool/tsk_apfs.h tsk/pool/tsk_apfs.hpp \
tsk/pool/pool_compat.hpp tsk/pool/apfs_pool_compat.hpp \
tsk/pool/lvm_pool_compat.hpp \
tsk/util/crypto.hpp tsk/util/lw_shared_ptr.hpp tsk/util/span.hpp \
tsk/util/detect_encryption.h tsk/util/file_system_utils.h
nobase_dist_data_DATA = tsk/sorter/default.sort tsk/sorter/freebsd.sort \
tsk/sorter/images.sort tsk/sorter/linux.sort tsk/sorter/openbsd.sort \
tsk/sorter/solaris.sort tsk/sorter/windows.sort
api-docs:
doxygen tsk/docs/Doxyfile
cd bindings/java/doxygen; doxygen Doxyfile
man-html:
cd man;build-html
---------------- VERSION 4.12.1 --------------
C/C++:
- Bug fixes from Luis Nassif and Joachim Metz
- Added check to stop processing very large folders to prevent memory exhaustion
Java:
- Added File Repository concept for files to be stored in another location
- Schema updated to 9.4
- Fixed OS Account merge bug and now fire events when accounts are merged
---------------- VERSION 4.12.0 --------------
- There was a 1-year gap since 4.11.1 and the git log has 441 commits in that timeframe.
- Many of them are small fixes.
- This set of release notes is therefore more of an overview than the notes for other releases.
What's New:
- LVM Support (non-Windows) from Joachim Metz
- Logical File System support (a folder structure is parsed by TSK libraries) from Ann Priestman (Basis)
What's Changed:
- Lots of bug fixes from the Basis team and Joachim Metz
- Additional fixes from Eran-YT, msuhanov, Joel Uckelman, Aleks L, dschoemantruter
- General themes of C/C++ bounds checks and Java improvements to OS Accounts, Ingest jobs, CaseDbAccessManager, and much more.
---------------- VERSION 4.11.1 --------------
C/C++:
- Several fixes from Joachim Metz
- NTFS Decompression bug fix from Kim Stone and Joel Uckelman
Java:
- Fixed connection leak when making OS Accounts in bridge
- OsAccount updates for instance types and special Windows SIDs
- Fixed issue with duplicate value in Japanese timeline translation
---------------- VERSION 4.11.0 --------------
C/C++:
- Added checks at various layers to detect encrypted file systems and disks to give more useful error messages.
- Added checks to detect file formats that are not supported (such as AD1, ZIP, etc.) to give more useful error messages.
- Added tsk_imageinfo tool that detects if an image is supported by TSK and if it is encrypted.
- Add numerous bound checks from Joachim Metz.
- Clarified licenses as pointed out by Joachim Metz.
Java:
- Updated from Schema 8.6 to 9.1.
- Added tables and classes for OS Accounts and Realms (Domains).
- Added tables and classes for Host Addresses (IP, MAC, etc.).
- Added tables and classes for Analysis Results vs Data Artifacts by adding onto BlackboardArtifacts.
- Added tables and classes for Host and Person to make it easier to group data sources.
- Added static types for standard artifact types.
- Added File Attribute table to allow custom information to be stored for each file.
- Made ordering of getting lock and connection consistent.
- Made the findFile methods more efficient by using extension (which is indexed).
---------------- VERSION 4.10.2 --------------
C/C++
- Added support for Ext4 inline data
Java
- New Blackboard Artifacts for ALEAPP/ILEAPP, Yara, Geo Area, etc.
- Upgraded to PostgreSQL JDBC Driver 42.2.18
- Added SHA256 to files table in DB and added utility calculation methods.
- Changed TimelineManager to make events for any artifact with a time stamp
- Added Japanese translations
- Fixed synchronization bug in getUniquePath
---------------- VERSION 4.10.1 --------------
C/C++:
- Changed Windows build to use Nuget for libewf, libvmdk, libvhdi.
- Fixed compiler warnings
- Clarified licenses and added Apache license to distribution
- Improved error handling for out of memory issues
- Rejistry++ memory leak fixes
Java:
- Localized for Japanese
---------------- VERSION 4.10.0 --------------
C/C++:
- Removed PostgreSQL code (that was used only by Java code)
- Added Java callback support so that database inserts are done in Java.
Java:
- Added methods and callbacks as required to allow database population to happen in Java instead of C/C++.
- Added support to allow Autopsy streaming ingest where files are added in batches.
- Added TaggingManager class and concept of a TagSet to support ProjectVic categories.
- Fixed changes to normalization and validation of emails and phone numbers.
- Added a CASE/UCO JAR file that creates JSON-LD based on TSK objects.
---------------- VERSION 4.9.0 --------------
C/C++
- Removed framework project. Use Autopsy instead if you need an analysis framework.
- Various fixes from Google-based fuzzing.
- Ensure all reads (even big ones) are sector aligned when reading from Windows device.
- Ensure all command line tools support new pool command line arguments.
- Create virtual files for APFS unallocated space
- HFS fix to display type
Java:
- More artifact helper methods
- More artifacts and attributes for drones and GPS coordinates
- Updated TimelineManager to insert GPS artifacts into events table
---------------- VERSION 4.8.0 --------------
C/C++
- Pool layer was added to support APFS. NOTE: API is likely to change.
- Limited APFS support added in libtsk and some of the command line tools.
-- Encryption support is not complete.
-- Blackbag Technologies submitted the initial PR. Basis Technology
did some minor refactoring.
- Refactoring and minor fixes to logical imager
- Various bug fixes from Google fuzzing efforts and Jonathan B from Afarsec
- Fixed infinite NTFS loop from cyclical attribute lists. Reported by X.
- File system bug fixes from uckelman-sf on github
Database:
- DB schema was updated to support pools
- Added concept of JSON in Blackboard Attributes
- Schema supports cascading deletes to enable data source deletion
Java:
- Added Pool class and associated infrastructure
- Added methods to support deleting data sources from database
- Removed JavaFX as a dependency by refactoring the recently
introduced timeline filtering classes.
- Added attachment support to the blackboard helper package.
---------------- VERSION 4.7.0 --------------
C/C++:
- DB schema was expanded to store tsk_events and related tables.
Time-based data is automatically added when files and artifacts are
created. Used by Autopsy timeline.
- Logical Imager can save files as individual files instead of in
VHD (saves space).
- Logical imager produces log of results
- Logical Imager refactor
- Removed PRIuOFF and other macros that caused problems with
signed/unsigned printing. For example, TSK_OFF_T is a signed value
and PRIuOFF would cause problems as it printed a negative number
as a big positive number.
Java
- Travis and Debian package use OpenJDK instead of OracleJDK
- New Blackboard Helper packages (blackboardutils) to make it easier
to make artifacts.
- Blackboard scope was expanded, including the new postArtifact() method
that adds event data to database and broadcasts an event to listeners.
- SleuthkitCase now has an EventBus for database-related events.
- New TimelineManager and associated filter classes to support new events
table
---------------- VERSION 4.6.7 --------------
C/C++ Code:
- First release of new logical imager tool
- VHD image writer fixes for out of space scenarios
Java:
- Expand Communications Manager API
- Performance improvement for SleuthkitCase.addLocalFile()
---------------- VERSION 4.6.6 --------------
C/C++ Code:
- Acquisition details are set in DB for E01 files
- Fix NTFS decompression issue (from Joe Sylve)
- Image reading fix when cache fails (Joe Sylve)
- Fix HFS+ issue with large catalog files (Joe Sylve)
- Fix free memory issue in srch_strings (Derrick Karpo)
Java:
- Fix so that local files can be relative
- More Blackboard artifacts and attributes for web data
- Added methods to CaseDbManager to enable checking for and modifying tables.
- APIs to get and set acquisition details
- Added methods to add volume and file systems to database
- Added method to add LayoutFile for allocated files
- Changed handling of JNI handles to better support multiple cases
---------------- VERSION 4.6.5 --------------
C/C++ Code:
- HFS boundary check fix
- New fields for hash values and acquisition details in case database
- Store "created schema version" in case database
Java Code:
- New artifacts and attributes defined
- Fixed bug in SleuthkitCase.getContentById() for data sources
- Fixed bug in LayoutFile.read() that could allow reading past end of file
---------------- VERSION 4.6.4 --------------
Java Code:
- Increase max statements in database to prevent errors under load
- Have a max timeout for SQLite retries
---------------- VERSION 4.6.3 --------------
C/C++ Code:
- Hashdb bug fixes for corrupt indexes and 0 hashes
- New code for testing power of number in ExtX code
Java Code:
- New class that allows generic database access
- New methods that check for duplicate artifacts
- Added caches for frequently used content
Database Schema:
- Added Examiner table
- Tags are now associated with Examiners
- Changed parent_path for logical files to be consistent with FS files.
---------------- VERSION 4.6.2 --------------
C/C++ Code:
- Various compiler warning fixes
- Added small delay into image writer to not starve other threads
Java:
- Added more locking to ensure that handles were not closed while other threads were using them.
- Added APIs to support more queries by data source
- Added memory-based caching when detecting if an object has children or not.
---------------- VERSION 4.6.1 --------------
C/C++ Code:
- Lots of bounds checking fixes from Google's fuzzing tests. Thanks Google.
- Cleanup and fixes from uckelman-sf and others
- PostgreSQL, libvhdi, & libvmdk are supported for Linux / OS X
- Fixed display of NTFS GUID in istat - report from Eric Zimmerman.
- NTFS istat shows details about all FILE_NAME attributes, not just the first. Report from Eric Zimmerman.
Java:
- Reports can be URLs
- Reports are Content
- Added APIs for graph view of communications
- JNI library is extracted to name with user name in it to avoid conflicts
Database:
- Version upgraded to 8.0 because Reports are now Content
---------------- VERSION 4.6.0 --------------
New Features
- New Communications related Java classes and database tables.
- Java build updates for Autopsy Linux build
- Blackboard artifacts are now Content objects in Java and part of tsk_objects table in database.
- Increased cache sizes.
- Lots of bounds checking fixes from Google's fuzzing tests. Thanks Google.
- HFS fix from uckelman-sf.
---------------- VERSION 4.5.0 --------------
New Features:
- Support for LZVN compressed HFS files (from Joel Uckelman)
- Use sector size from E01 (helps with 4k sector sizes)
- More specific version number of DB schema
- New Local Directory type in DB to differentiate with Virtual Directories
- All blackboard artifacts in DB are now 'content'. Attachments can now
be children of their parent message.
- Added extension as a column in tsk_files table.
Bug Fixes:
- Faster resolving of HFS hard links
- Lots of fixes from Google Fuzzing efforts.
---------------- VERSION 4.4.2 --------------
New Features:
- usnjls tool for NTFS USN log (from noxdafox)
- Added index to mime type column in DB
- Use local SQLite3 if it exists (from uckelman-sf)
- Blackboard Artifacts have a shortDescription method
Bug Fixes:
- Fix for highest HFS+ inum lookup (from uckelman-sf)
- Fix ISO9660 crash
- various performance fixes and added thread safety checks
---------------- VERSION 4.4.1 --------------
- New Features:
-- Can create a sparse VHD file when reading a local drive with new
IMAGE_WRITER structure. Currently being used by Autopsy, but no TSK
command line tools.
- Bug fixes:
-- Lots of cleanup and fixes. Including:
-- memory leaks
-- UTF8 and UTF16 cleanup
-- Missing NTFS files (in fairly rare cases)
-- Really long folder structures and database inserts
---------------- VERSION 4.4.0 --------------
- Compiling in Windows now uses Visual Studio 2015
- tsk_loaddb now adds new files for slack space and JNI was upgraded
accordingly.
---------------- VERSION 4.3.1 --------------
- NTFS works on 4k sectors
- Added support in Java to store local files in encoded form (XORed)
- Added Java Account object into datamodel
- Added notion of a review status to blackboard artifacts
- Upgraded version of PostgreSQL
- Various minor bug fixes
---------------- VERSION 4.3.0 --------------
- PostgreSQL support (Windows only)
- New Release_NoLibs Visual Studio target
- Support for virtual machine formats via libvmdk and libvhdi (Windows only)
- Schema updates (data sources table, mime type, attributes store type)
- tsk_img_open can take externally created TSK_IMG_INFO
- Various minor bug fixes
---------------- VERSION 4.2.0 --------------
- ExFAT support added
- New database schema
- New Sqlite hash database
- Various bug fixes
- NTFS pays more attention to sequence and loads metadata only
if it matches.
- Added secondary hash database index
---------------- VERSION 4.1.3 --------------
- fixed bug that could crash UFS/ExtX in inode_lookup.
- More bounds checking in ISO9660 code
- Image layer bounds checking
- Update version of SQLITE-JDBC
- changed how Java loads native libraries
- Config file for YAFFS2 spare area
- New method in image layer to return names
- Yaffs2 cleanup.
- Escape all strings in SQLite database
- SQLite code uses NTFS sequence number to match parent IDs
---------------- VERSION 4.1.2 --------------
Core:
- Fixed more visual studio projects to work on 64-bit
- TskAutoDB considers not finding a VS/FS a critical error.
Java:
- added method to Image to perform sanity check on image sizes.
fiwalk:
- Fixed compile error on Linux etc.
---------------- VERSION 4.1.1 --------------
Core:
- Added FILE_SHARE_WRITE to all windows open calls.
- removed unused methods in CRC code that caused compile errors.
- Added NTFS FNAME times to time2 struct in TSK_FS_META to make them
easier to access -- should have done this a long time ago!
- fls -m and tsk_gettimes output NTFS FNAME times to output for timelines.
- hfind with EnCase hashsets works when DB is specified (and not only index)
- TskAuto now goes into UNALLOC partitions by default too.
- Added support to automatically find all Cellebrite raw dump files given
the name of the first image.
- Added 64-bit windows targets to VisualStudio files.
- Added NTFS sequence to parent address in directory and directory itself.
- Updated SQLite code to use sequence when finding parent object ID.
Java:
- Java bindings JAR files now have native libraries in them.
- Logical files are added with a transaction
---------------- VERSION 4.1.0 --------------
Core:
- Added YAFFS2 support (patch from viaForensics).
- Added Ext4 support (patch from kfairbanks)
- changed all include paths to be 'tsk' instead of 'tsk3'
-- IMPORTANT FOR ALL DEVELOPERS!
Framework:
- Added Linux and MAC support.
- Added L01 support.
- Added APIs to find files by name, path and extension.
- Removed deprecated TskFile::getAttributes methods.
- moved code around for AutoBuild tool support.
Java Bindings:
- added DerivedFile datamodel support
- added a public method to Content to add ability to close() its tsk handle before the object is gc'd
- added faster skip() and random seek support to ReadContentInputStream
- refactored datamodel by pushing common methods up to AbstractFile
- fixed minor memory leaks
- improved regression testing framework for java bindings datamodel
---------------- VERSION 4.0.2 --------------
Core:
New Features:
- Added fiwalk tool from Simson. Not supported in Visual Studio yet.
Bug Fixes:
- Fixed fcat to work on NTFS files (still doesn't support ADS though).
- Fixed HFS+ support in tsk_loaddb / SQLite -- root directory was not added.
- NTFS code now looks at all MFT entries when listing directory contents. It used to only look at unallocated entries for orphan files. This fixes an image that had allocated files missing from the directory b-tree.
- NTFS code uses sequence number when searching MFT entries for all files.
- Libewf detection code change to support v2 API more reliably (ID: 3596212).
- NTFS $SII code could crash in rare cases if $SDS was multiple of block size.
Framework:
- Added new API to TskImgDB that returns the base name of an image.
- Numerous performance improvements to framework.
- Removed requirement in framework to specify module extension in pipeline configuration file.
- Added blackboard artifacts to represent both operating system and network service user accounts.
Java Bindings:
- added more APIs to find files by name, path and where clause
- added API to get currently processed dir when image is being added
- added API to return specific types of children of image, volume system, volume, file system.
- moved more common methods up to Content interface
- deprecated context of blackboard attributes
- deprecated SleuthkitCase.runQuery() and SleuthkitCase.closeRunQuery()
- fixed ReadContentInputStream bugs (ignoring offset into a buffer, implementing available() )
- methods that are lazy loading are now thread safe
- Hash class is now thread-safe
- use more PreparedStatements to improve performance
- changed source level from java 1.6 to 1.7
- Throw exceptions from C++ side better
---------------- VERSION 4.0.1 --------------
New Features:
- Can open raw Windows devices with write mode sharing.
- More DOS partition types are displayed.
- Added fcat tool that takes in file name and exports content (equivalent to using ifind and icat together).
- Added new API to TskImgDB that returns hash value associated with carved files.
- performance improvements with FAT code (maps and dir_add)
- performance improvements with NTFS code (maps)
- added AONLY flag to block_walk
- Updated blkls and blkcalc to use AONLY flag -- MUCH faster.
Bug Fixes:
- Fixed mactime issue where it could choose the wrong timezone that did
not follow daylight saving time.
- Fixed file size of alternate data streams in framework.
- Incorporated memory leak fixes and raw device fixes from ADF Solutions.
---------------- VERSION 4.0.0 --------------
New Features:
- Added multithreaded support
- Added C++ wrapper classes
- Added JNI bindings / Java data model classes
- 3314047: Added utf8-specific versions of 'toid' methods for img, vs, fs types
- 3184429: More consistent printing of unset times (all zeros instead of 1970)
- New database design that allows for multiple images in the same database
- GPT volume system tries other sector sizes if first attempt fails.
- Added hash calculation and lookup to AutoDB and JNI.
- Upgraded SQLite to 3.7.9.
- Added Framework in (windows-only)
- EnCase hash support
- Libewf v2 support (it is now non-beta)
- First file in a raw split or E01 can be specified and the rest of the files
are found.
- mactime displays times as 0 if the time is not set (instead of 1970)
- Changed behavior of 'mactime -y' to use ISO8601 format.
- Updated HFS+ code from ATC-NY.
- FAT orphan file improvements to reduce false positives.
- TskAuto better reports errors.
- Upgrade build projects from Visual Studio 2008 to 2010.
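Two of the mactime items in the list above (unset times display as 0, and '-y' switches to ISO 8601 output) can be illustrated with a small Python model. This is a sketch only: mactime itself is a Perl tool, and the function name here is hypothetical.

```python
from datetime import datetime, timezone

def format_mactime(epoch_seconds, iso8601=False):
    # Unset timestamps are rendered as 0 rather than the 1970 epoch date.
    if epoch_seconds == 0:
        return "0"
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    # With '-y'-style output, use ISO 8601; otherwise a readable form.
    if iso8601:
        return dt.strftime("%Y-%m-%dT%H:%M:%SZ")
    return dt.strftime("%a %b %d %Y %H:%M:%S")

print(format_mactime(0))                 # unset time -> "0"
print(format_mactime(1000000000, True))  # -> "2001-09-09T01:46:40Z"
```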
Bug Fixes:
- Relaxed checking when a conflict exists between DOS and GPT partitions.
Had a Mac image that was failing to resolve which partition table
to use.
---------------- VERSION 3.2.3 --------------
New Features:
- New TskAuto method (handleNotification()) that receives verbose messages to help debug the decisions the class makes.
- DOS partitions are loaded even if an extended partition fails to load
- new TskAuto::findFilesInFs(TSK_FS_INFO *) method
- Only the first E01 file needs to be specified; the rest are found automatically
- Changed docs license to non-commercial
- Unicode conversion routines fix invalid UTF-16 text during conversion
- Added '-d' to tsk_recover to specify directory to recover
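The Unicode item above (invalid UTF-16 text is repaired during conversion) can be pictured with a short Python sketch. This is an illustration only; TSK's actual conversion routines are C, and the function name is hypothetical.

```python
def sanitize_utf16le(raw: bytes) -> str:
    # Decode UTF-16LE name bytes, substituting U+FFFD for invalid
    # sequences (e.g. unpaired surrogates) instead of failing, which
    # mirrors the repair strategy described in the changelog entry.
    return raw.decode("utf-16-le", errors="replace")

# Valid text passes through; an unpaired high surrogate is replaced.
print(sanitize_utf16le("file.txt".encode("utf-16-le")))  # -> file.txt
print(sanitize_utf16le(b"\x00\xd8A\x00"))                # contains U+FFFD
```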
Bug Fixes:
- Added check to fatfs_open to compare first sectors of FAT if we used backup boot sector and verify it is FAT32.
- More checks to make sure that FAT short names are valid ASCII
- 3406523: Mactime size sanity check
- 3393960: hfind reading of Windows input file
- 3316603: Error reading last blocks of RAW CD images
- Fixed bugs in how directories and files were detected in TskAuto
---------------- VERSION 3.2.2 --------------
Bug Fixes
- 3213886: ISO9660 directory hole not advancing
- 3173095 contd: Updated checks so that tougher FAT checks are
applied to deleted directories.
- 3303678: Image type in Sqlite DB is now not always 0
- 3303679: Deleted FAT files have more name cleanup in short names
New Features:
- 3213888: RAW CD format
- Auto class accepts TSK_IMG_INFO as argument
- Copies of split image file names are stored in TSK so that the caller can free them before TSK_IMG_INFO is freed.
---------------- VERSION 3.2.1 --------------
Bug Fixes
- 3108272: fls arguments for -d and -u
- 3105539: compile error issues because of SQLite and pthreads
- 3173095: missing FAT files because of invalid dates.
- 3184419: MinGW compile errors.
- 3191391: surround file name in quotes in mactime -d csv output
New Features:
- A single dummy entry is added to the SQLite DB if no volume exists
so that all programs can assume that there will be at least one
volume in the table.
- 3184455: allow srcdir != builddir
---------------- VERSION 3.2.0 --------------
Bug Fixes
- 3043092: Minor logic errors with ifind code.
- FAT performance fix when looking for parent directories
in $OrphanFiles.
- 3052302: Crash on NTFS/UFS detection test because of
corrupt data -- tsk_malloc error.
- 3088447: Error adding attribute because of run collision.
Solved by assigning unique IDs.
New Features:
- 3012324: Name mangling moved out of library into outer tools
so that they can see control characters if they want to. Patch
by Anthony Lawrence.
- 2993806: ENUM values have a specified NONE value if you don't
want to specify any special flags. Patch by Anthony Lawrence.
- 3026989: Add -e and -s flags to img_cat. patch by Simson Garfinkel.
- 2941805: Add case sensitive flag to fsstat in HFS. Patch by Rob Joyce.
- 3017764: Changed how default NTFS $DATA attribute was named. Now it
has no name, while it previously had a fake name of "$Data".
- New TskAuto class.
- New tsk_loaddb, tsk_recover, tsk_comparedir, and tsk_gettimes tools.
---------------- VERSION 3.1.3 --------------
Bug Fixes
- 3006733: FAT directory listings were slow because the inner
code was not stopping when it found the parent directory.
- Adjusted sanity / testing code on FAT directory entries to allow
non-ASCII in extensions and reject entries with lots of 0s.
- 3023606: Ext2 / ffs corrupted file names.
- Applied NTFS SID fixes from Mandiant.
- ntfs_load_secure() memory leak patch from Michael Cohen
---------------- VERSION 3.1.2 --------------
Bug Fixes
- 2982426: FAT directory listings were slow because the entire
image was being scanned for parent directory information.
- 2982965: fs_attr length bug fix.
- 2988619: mmls -B display error.
- 2988330: ntfs SII cluster size increment bug
- 2991487: Zeroed content in NTFS files that were not fully initialized.
- 2993767: Slow FAT listings of OrphanFiles because hunt for parent
directory resulted in many searches for OrphanFiles. Added cache
of OrphanFiles.
- 2999567: ifind was not stopping after first hit.
- 2993804: read past end of file did not always return -1.
---------------- VERSION 3.1.1 --------------
Bug Fixes
- 2954703: ISO9660 missing files because duplicate files
had same starting block.
- 2954707: ISO9660 missing some files with zero length and
duplicate starting block. Also changed behavior of how
multiple volume descriptors are processed.
- 2955898: Orphan files not found if no deleted file names exist.
- 2955899: NTFS internal setting of USED flag.
- 2972721: Sorter fails with hash lookup if '-l' is given.
- 2941813: Reverse HFS case sensitive flags (internal fix only)
- 2954448: Debian package typo fixes, etc.
- 2975245: sorter ignores realloc entries to reduce misleading mismatch entries and duplicate entries.
---------------- VERSION 3.1.0 --------------
New Features and Changes
- 2206285: HFS+ can now be read. Lots of tracker items about this.
Thanks to Rob Joyce and ATC-NY for many of the patches and reports.
- 2677069: DOS Safety Partitions in GPT Volume Systems are better
detected instead of reporting multiple VSs.
- Windows executables can be built in Visual Studio without needing
other image format libraries.
- 2367426: Uninitialized file space is shown if slack space is
requested.
- 2677107 All image formats supported by AFFLIB can be accessed by
specifying the "afflib" type.
- 2206265: sigfind can now process non-raw files.
- 2206331: Indirect block addresses are now available in the library
and command line tools. They are stored in a different attribute.
- Removed 'docs' files and moved them to the wiki.
- Removed disk_stat and disk_sreset because they were out of date
and hdparm now has the same functionality.
- 2874854: Image layer tools now support non-512 byte device sector
sizes. Users can specify sector size using the -b argument to the
command line tools. This has several consequences:
-- 'mmls -b' is now 'mmls -B'. Similarly with istat -b.
-- Changed command line format for '-o' so that sector size is
specified only via -b and not using '-o 62@4096'.
- 2874852: Sanity checking on partition table entries is relaxed
and only first couple of partitions are checked to make sure that
they can fit into the image.
- 2895607: NTFS SID data is available in the library and 'istat'.
- 2206341: AFF encrypted images now give more proper error message
if password is not given.
- 2351426: mactime is now distributed with Windows execs.
Developer-level Changes
- Abstracted name comparison to file system-specific function.
- Added support in mactime to read body files with comment lines.
- 2596153: Changed img_open arguments, similar to getopt().
- 2797169: tsk_fs_make_ls is now supported as an external library
function. Now named tsk_fs_meta_make_ls.
- 2908510: Nanosecond resolution of timestamps is now available.
- 2914255: Version info is now available in .h files in both string
and integer form.
Bug Fixes:
- 2568528: incorrect adjustment of attribute FILLER offset.
- 2596397: Incorrect date sorting in mactime.
- 2708195: Errors when doing long reads in fragmented attributes.
- Fixed typo bugs in sorter (reported via e-mail by Drew Hunt).
- 2734458: added orphan cache map to prevent slow NTFS listing times.
- 2655831: Sorter now knows about the ext2 and ext3 types.
- 2725799: ifind not converting UTF16 names properly on Windows
because it was using endian ordering of file system and not local
system.
- 2662168: warning messages on Macs when reading the raw character
device.
- 2778170: incorrect read size on resident attributes.
- 2777633: missing second resolution on FAT creation times.
- Added the READ_SHARE option to the CreateFile command for split
image files. Patch by Christopher Siwy.
- 2786963: NTFS compression infinite loop fix.
- 2645156: FAT / blkls error getting slack because allocsize was
being set too small (and other values were not being reset).
- 2367426: Zeros are set for VDL slack on NTFS files.
- 2796945: Infinite loop in fs_attr.
- 2821031: Missing fls -m fields.
- 2840345: Extended DOS partitions in extended partitions are now
marked as Meta.
- 2848162: Reading attributes at offsets that are on boundary of
run fragment.
- 2824457: Fixed issue reading last block of file system with blkcat.
- 2891285: Fixed issue that prevented reads from the last block of
a file system when using the POSIX-style API.
- 2825690: Fixed issue that prevented blkls -A from working.
- 2901365: Allow FAT files to have a 0 wdate.
- 2900761: Added FAT directory sanity checks to prevent infinite loops.
- 2895607: Fixed various memory leaks.
- 2907248: Fixed image layer cache crash.
- 2905750: all file system read() functions now return -1 when
offset given is past end of file.
---------------- VERSION 3.0.1 --------------
11/11/08: Bug Fix: Fixed crashing bug in ifind on FAT file system.
Bug: 2265927
11/11/08: Bug Fix: Fixed crashing bug in istat on ExtX $OrphanFiles
dir. Bug: 2266104
11/26/08: Update: Updated fls man page.
11/30/08: Update: Removed TODO file and using tracker for bugs and
feature requests.
12/29/08: Bug Fix: Fixed incorrectly setting block status in file_walk
for compressed files (Bug: 2475246)
12/29/08: Bug Fix: removed fs_info field from FS_META because it
was not being set and should have been removed in 3.0. Reported by
Rob Joyce and Judson Powers.
12/29/08: Bug Fix: orphan files and NTFS files found via parent
directory have an unknown file name type (instead of being equal
to meta type). (Bug: 2389901). Reported by Barry Grundy.
1/12/09: Bug Fix: Fixed ISO9660 bug where large directory contents
were not displayed. (Bug: 2503552). Reported by Tom Black.
1/24/09: Bug Fix: Fixed bug 2534449 where extra NTFS files were
shown if the MFT address was changed to 0 because fs_dir_add was
checking the address and name. Reported by Andy Bontoft.
1/29/09: Update: Revised the fix for bug 2534449. The fix is now in ifind
instead of fs_dir_add().
2/2/09: Update: Added RPM spec file from Morgan Weetmam.
---------------- VERSION 3.0.0 --------------
0/00/00: Update: Many, many, many API changes.
2/14/08: Update: Added mmcat tool.
2/26/08: Update: Added flags to mmls to specify partition types.
3/1/08: Update: Major update of man pages.
4/14/08: Bug Fix: Fixed the calculation of "actual" last block.
Off by 1 error. Reported by steve.
5/23/08: Bug Fix: Incorrect malloc return check in srch_strings.
reported by Petri Latvala.
5/29/08: Bug Fix: Fixed endian ordering bug in ISO9660 code. Reported
by Eduardo Aguiar de Oliveira.
6/17/08: Update: 'sorter' now uses the ifind method for finding
deleted NTFS files (as Autopsy does) instead of relying on fls.
Reported by John Lehr.
6/17/08: Update: 'ifind -p' reports data on ADS.
7/10/08: Update: FAT code looks for a backup boot sector in FAT32 if
the magic value is 0.
7/21/08: Bug Fix: Changed define of strcasecmp to _stricmp instead
of _strnicmp in Windows. (reported by Darren Bilby).
7/21/08: Bug Fix: Fall back to open "\\.\" image files on Windows
with SHARE_WRITE access so that drive devices can be opened.
(reported by Darren Bilby).
8/20/08: Bug Fix: Look for Windows objects when opening files in
Cygwin, not just Win32. Reported by Par Osterberg Medina.
8/21/08: Update: Renamed library and install header files to have a '3'
in them to allow parallel installations of v2 and v3. Suggested by
Simson Garfinkel.
8/22/08: Update: Added -b option to sorter to specify minimum file size
to process. Suggested by Jeff Kell.
8/22/08: Update: Added libewf as a requirement to build win32 so that
E01 files are supported.
8/29/08: Update: Added initial mingw patches for cross compiling and
Windows. Patches by Michael Cohen.
9/X/08: Update: Added the ability to access attributes.
9/6/08: Update: Added image layer cache.
9/12/08: Bug Fix: Fixed crash from incorrectly cleared value in FS_DIR
structure. Reported and patched by Jason Miller.
9/13/08: Update: Changed d* tool names to blk*.
9/17/08: Update: Finished mingw support so that both tools and
library work with Unicode file name support.
9/22/08: Update: Added new HFS+ code from Judson Powers and Rob Joyce (ATC-NY)
9/24/08: Bug Fix: Fixed some cygwin compile errors about types on Cygwin.
Reported by Phil Peacock.
9/25/08: Bug Fix: Added O_BINARY to open() in raw and split because Cygwin
was having problems. Reported by Mark Stam.
10/1/08: Update: Added ifndef to TSK_USE_HFS define to allow people
to define it on the command line. Patch by RB.
---------------- VERSION 2.52 --------------
2/12/08: Bug Fix: Fixed warning messages in mactime about non-Numeric
data. Reported by Pope.
2/19/08: Bug Fix: Added #define to tsk_base_i.h to define
LARGEFILE64_SOURCE based on LARGEFILE_SOURCE for older Linux systems.
2/20/08: Bug Fix: Updated afflib references and code.
3/13/08: Update: Added more fixes to auto* so that AFF will compile
on more systems. I have confirmed that AFFLIB 3.1.3 will run with
OS X 10.4.11.
3/14/08: Bug Fix: Added checks to FAT code that calculates the size of
directories. If starting cluster of deleted dir points into a
cluster chain, then problems can occur. Reported by John Ward.
3/19/08: Update: I have verified that this compiles with libewf-20070512.
3/21/08: Bug Fix: Deleted Ext/FFS directories were not being recursed
into. This case was rare (because typically the metadata are
wiped), but possible. Reported by JWalker.
3/24/08: Update: I have verified that this compiles with libewf-20080322.
Updates from Joachim Metz.
3/26/08: Update: Changed some of the header file design for the tools
so that the define settings in tsk_config.h can be used (for large files).
3/28/08: Update: Added config.h reference to srch_strings to get the
LARGEFILE support.
4/5/08: Update: Improved inode argument number parsing function.
---------------- VERSION 2.51 --------------
1/30/08: Bug Fix: Fixed potential infinite loop in fls_lib.c. Patch
by Nathaniel Pierce.
2/7/08: Bug Fix: Defined some of the new constants that are used
in disktools because older Linux distros did not define them.
Reported by Russell Reynolds.
2/7/08: Bug Fix: Modified autoconf to check for large file build
requirements and look for new 48-bit structures needed by disktools.
Both of these were causing problems on older Linux distros.
2/7/08: Update: hfind will normalize hash values in database so
that they are case insensitive.
---------------- VERSION 2.50 --------------
12/19/07: Update: Finished upgrade to autotools build design. The file,
afflib, and libewf packages are no longer bundled. Resulted in many source
code layout changes, and sorter now searches for md5, sha1, etc.
---------------- VERSION 2.10 --------------
7/12/07: Update: 0s are returned for AFF pages that were not imaged.
7/31/07: Bug Fix: ifind -p could crash if a deleted file name was found
that did not point to a valid metadata structure. (Reported by Andy Bontoft)
8/5/07: Update: Added NSRL support back into sorter.
8/15/07: Update: Errors are given if supplied sector offset is larger than
disk image. Reported by Simson Garfinkel.
8/16/07: Update: Renamed MD5 and SHA1 functions to TSK_MD5_.. and TSK_SHA_....
8/16/07: Update: tsk_error_get() does not reset the error messages.
9/26/07: Bug Fix: Changed FATFS check for valid dentries to consider
second values of 30. Reported by Alessandro Camillo.
10/18/07: Update: inode_walk for NTFS and FAT will not abort if
data corruption is found in one entry -- instead they will just
skip it.
10/18/07: Update: tsk_os.h uses standard gcc system names instead
of TSK specific ones.
10/18/07: Update: Updated raw.c to use ioctl commands on OS X to
get size of raw device because it does not work with SEEK_END.
Patch by Rob Joyce.
10/31/07: Update: Finished upgrade to fatfs_file_walk_off so that
walking can start at a specific offset. Also finished upgrade that
caches FAT run list to make the fatfs_file_walk_off more efficient.
11/14/07: Update: Fixed a few places where off_t was being used
instead of OFF_T. Reported by GiHan Kim.
11/14/07: Update: Fixed a memory leak in aff.c to free AFF_INFO.
Reported by GiHan Kim.
11/24/07: Update: Finished review and update of ISO9660 code.
11/26/07: Bug Fix: Fixed 64-bit calculation in HFS+ code. Submitted
by Rob Joyce.
11/29/07: Update: removed linking of srch_strings.c and libtsk. Reported by
kwizart.
11/30/07: Update: Made a #define TSK_USE_HFS compile flag for incorporating
the HFS support (flag is in src/fstools/fs_tools_i.h)
11/30/07: Update: restricted the FAT dentry sanity checks to verify
space padding in the name and Latin-only extensions.
12/5/07: Bug Fix: fs_read_file_int had a bug that ignored the type passed
for NTFS files. Reported by Dave Collett.
12/12/07: Update: Changed the FAT dentry sanity checks to allow spaces
in volume labels and do more checking on the attribute flag.
---------------- VERSION 2.09 --------------
4/6/07: Bug Fix: Infinite loop in ext2 and ffs istat code caused by use
of an unsigned size_t variable. Reported by Makoto Shiotsuki.
4/16/07: Bug Fix: Changed use of fseek() to fseeko() in hashtools. Patch
by Andy Bontoft.
4/16/07: Bug Fix: Changed Win32 SetFilePointer to use LARGE_INTEGER.
Reported by Kim GiHan.
4/19/07: Bug Fix: Not all FAT orphan files were being found because of
an offset error.
4/26/07: Bug Fix: ils -O was not working (link value not being
checked). Reported by Christian Perst.
4/27/07: Bug Fix: ils -r was showing UNUSED inodes. Reported by
Christian Perst.
5/10/07: Update: Redefined the USED and UNUSED flags for NTFS so that
UNUSED is set when no attributes exist.
5/16/07: Bug Fix: Fixed several bounds checking bugs that may cause
a crash if the disk image is corrupt. Reported by Tim Newsham (iSec
Partners)
5/17/07: Update: Updated AFFLIB to 2.2.11
5/17/07: Update: Updated libewf to libewf-20070512
5/17/07: Update: Updated file to 4.20
5/29/07: Update: Removed NTFS SID/SDS contributed code because it causes
crashes on some systems and its output is not entirely clear. (most recent bug
reported by Andy Scott)
6/11/07: Update: Updated AFFLIB to 2.2.12.
6/12/07: Bug Fix: ifind -p was not reporting back info on the allocated name
when one existed (because strtok was overwriting the name when the search
continued). Reported by Andy Bontoft.
6/13/07: Update: Updated file to 4.21
---------------- VERSION 2.08 --------------
12/19/06: Bug Fix: ifind_path was not setting *result when root inode
was searched for. patch by David Collett.
12/29/06: Update: Replaced 'strncpy' in ntfs.c with manual assignment of
text for '$Data' and 'N/A' for performance reasons.
1/11/07: Update: Added duname to FS_INFO that contains a string of
name for a file system's data unit -- Cluster for example.
1/19/07: Bug Fix: ifind_path was returning an error even after some
files were found. Errors are now ignored if a file was found.
Reported by Michael Cohen.
1/26/07: Bug Fix: Fixed calculation of inode numbers in fatfs.c
(reported by Simson Garfinkel).
2/1/07: Update: Changed aff-install to support symlinked directory.
2/1/07: Update: img_open modified so that it does not report errors for
s3:// and http:// files that do not exist.
2/5/07: Update: updated *_read() return values to look for "<0" instead of
simply "== -1". (suggested by Simson Garfinkel).
2/8/07: Update: removed typedef for uintptr in WIN32 code.
2/13/07: Update: Applied patch from Kim Kulak to update HFS+ code to internal
design changes.
2/16/07: Update: Renamed many of the external data structures and flags
so that they start with TSK_ or tsk_ to prevent name collisions.
2/16/07: Update: Moved MD5 and SHA1 routines and binaries to auxtools
instead of hashtools so that they are more easy to access.
2/16/07: Update: started redesign and port of hashtools.
2/21/07: Update: Changed inode_walk callback API to remove the flags
variable -- this was redundant since flags are also in TSK_FS_INODE.
Same for TSK_FS_DENT.
3/7/07: Bug Fix: fs_read_file failed for NTFS resident files. Reported
by Michael Cohen.
3/8/07: Bug Fix: FATFS assumed a 512-byte sector in a couple of locations.
3/13/07: Update: Finished hashtools update.
3/13/07: Update: dcat reads block by block instead of all at once.
3/23/07: Update: Change ntfs_load_secure to allocate all of its
needed memory at once instead of doing reallocs.
3/23/07: Update: Updated AFFLIB to 2.2.0
3/24/07: Bug Fix: Fixed many locations where return value from strtoull
was not being properly checked and therefore invalid numbers were not
being detected.
3/24/07: Bug Fix: A couple of error messages in ntfs_file_walk should
have been converted to _RECOVER when the _RECOVERY flag was given.
3/24/07: Update: Changed behavior of ntfs_file_walk. If no type is
given, then a default type is chosen for files and dirs. Now, no error
is generated if that type does not exist -- similar to how no error is
generated if a FAT file has 0 file size.
3/26/07: Update: cleaned up and documented fs_data code more.
3/29/07: Update: Updated AFF to 2.2.2.
3/29/07: Update: Updated install scripts for afflib, libewf, and file to
touch files so that the auto* files are in the correct time stamp order.
4/5/07: Bug Fix: Added sanity checks to offsets and addresses in ExtX and
UFS group descriptors. Reported by Simson Garfinkel.
---------------- VERSION 2.07 --------------
9/6/06: Update: Changed TCHAR and _T to TSK_TCHAR and _TSK_T to avoid
conflicts with other libraries.
9/18/06: Update: Added tsk_list_* functions and structures.
9/18/06: Update: Added checks for recursive FAT directories.
9/20/06: Update: Changed FS_META_* flags for LINK and UNLINK and moved
them to ILS_? flags.
9/20/06: Update: added flags to ils to find only orphan inodes.
9/20/06: Update: Added Orphan support for FAT, NTFS, UFS, Ext2, ISO.
9/20/06: Update: File walk actions now have a flag to identify if a block
is SPARSE or not (used to identify if the address being passed is valid
or made up).
9/21/06: Update: Added file size sanity check to fatfs_is_dentry and
fixed assignment of fatfs->clustcnt.
9/21/06: Update: block_, inode, and dent_walk functions now do more flag
checking and make sure that some things are set instead of making the
calling code do it.
9/21/06: Update: Added checks for recursive (infinite loop) NTFS, UFS,
ExtX, and ISO9660 directories.
9/21/06: Update: Added checks to make sure that walking the FAT for files
and directories does not result in an infinite loop (if the FAT is corrupt).
9/21/06: Update: Added -a and -A to dls to specify allocated and
unallocated blocks to display.
9/21/06: Update: Updated AFFLIB to 1.6.31.
9/22/06: Update: added a fs_read_file() function that allows you to read
random parts of a file.
10/10/06: Update: Improved performance of fs_read_file() and added
new FS_FLAG_META_COMP and FS_FLAG_DATA_COMP flags to show if a file
and data are using file system-level compression (NTFS only).
10/18/06: Bug fix: in fs_data_put_run, added a check to see
if the head was null before looking up. An extra error message
was being created for nothing.
10/18/06: Bug Fix: Added a check to the compression buffer
to see if it is null in _done().
10/25/06: Bug Fix: Added some more bounds checks to NTFS uncompression code.
11/3/06: Bug Fix: added check to dcat_lib in case the number of blocks
requested is too large.
11/07/06: Update: Added fs_read_file_noid wrapper around fs_read_file
interface.
11/09/06: Update: Updated AFF to 1.7.1
11/17/06: Update: Updated libewf to 20061008-1
11/17/06: Bug Fix: Fixed attribute lookup bug in fs_data_lookup.
Patch by David Collett.
11/21/06: Bug Fix: Fixed fs_data loops that were stopping when they hit
an unused attribute. Patch by David Collett.
11/21/06: Bug Fix: sorter no longer clears the path when it starts. This
was causing errors on Cygwin because OpenSSL libraries could not be found.
11/22/06: Update: Added a tskGetVersion() function to return the string
of the current version.
11/29/06: Update: Added more tsk_error_resets to more places to prevent
extra error messages from being displayed.
11/30/06: Update: Added Caching to the getFAT function and to fs_read.
12/1/06: Update: Changed TSK_LIST to a reverse sorted list of buckets.
12/5/06: Bug Fix: Fixed FS_DATA_INUSE infinite loop bug.
12/5/06: Bug Fix: Fixed infinite loop bug with NTFS decompression code.
12/5/06: Update: Added NULL check to fs_inode_free (from Michael Cohen).
12/5/06: Update: Updated ifind_path so that an allocated name will be
shown if one exists -- do not exit if we find simply an unallocated
entry with an address of 0. Suggested by David Collett.
12/6/06: Update: Updated file to version 4.18.
12/6/06: Update: Updated libaff to 2.0a10 and changed build process
accordingly.
12/7/06: Update: Added a tsk_error_get() function that returns a string
with the error messages -- can be used instead of tsk_error_print.
12/7/06: Update: fixed some memory leaks in FAT and NTFS code.
12/11/06: Bug Fix: fatfs_open error message code referenced a value that
was in freed memory -- reordered statements.
12/15/06: Update: Include VCProj files in build.
---------------- VERSION 2.06 --------------
8/11/06: Bug Fix: Added back in ASCII/UTF-8 checks to remove control
characters in file names.
8/11/06: Bug Fix: Added support for fast sym links in UFS1
8/11/06: Update: Redesigned the endian support so that getuX takes only
the endian flag so that the Unicode design could be changed as well.
8/11/06: Update: Redesigned the Unicode support so that there is a
tsk_UTF... routine instead of fs_UTF...
8/11/06: Update: Updated GPT to fully convert UTF16 to UTF8.
8/11/06: Update: There is now only one aux_tools header file to include
instead of libauxtools and/or aux_lib, which were nearly identical.
8/16/06: Bug Fix: ntfs_dent_walk could segfault if two consecutive
unallocated entries were found that had an MFT entry address of 0.
Reported by Robert-Jan Mora.
8/16/06: Update: Changed a lot of the header files and reduced them so
that it is easier to use the library and only one header file needs to
be included.
8/21/06: Update: mmtools had char * instead of void * for walk callback
8/22/06: Update: Added fs_load_file function that returns a buffer full
with the contents of a file.
8/23/06: Update: Upgraded AFFLIB to 1.6.31 and libewf to 20060820-1.
8/25/06: Update: Created printf wrappers so that output is UTF-16 on
Windows and UTF-8 on Unix.
8/25/06: Update: Continued port to Windows by starting to use more
TCHARS and defining needed macros for the Unix side.
8/25/06: Bug Fix: Fixed crash that could occur because of SDS code
in NTFS. (reported by Simson Garfinkel) (BUG: 1546925).
8/25/06: Bug Fix: Fixed crash that could occur because path stack became
corrupt with deep directories or corrupt images. (reported by Simson
Garfinkel) (BUG: 1546926).
8/25/06: Bug Fix: Fixed infinite loop that could occur when trying to
determine size of FAT directory when the FAT has a loop in it. (BUG:
1546929)
8/25/06: Update: Improved FAT checking code to look for '.' and '..'
entries when inode value is replaced during dent_walk.
8/29/06: Update: Finished Win32 port and changes to handle UTF-16 vs
UTF-8 inputs.
8/29/06: Update: Created a parse_inum function to handle parsing inode
addresses from command line.
8/30/06: Update: Made progname a local variable instead of global.
8/31/06: Bug Fix: Fixed a sizeof() error with the memset in fatfs_inode_walk
for the sect_alloc buffer.
8/31/06: Update: if mktime in dos2unixtime returns any negative value,
then the return value is set to 0. Windows and glibc seem to have
different return values.
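The 8/31/06 dos2unixtime entry above describes clamping negative conversions to 0. A minimal Python model of FAT (DOS) date/time decoding with that clamp follows; it is a sketch of the described behavior, not TSK's C implementation.

```python
import calendar

def dos2unixtime(dos_date, dos_time):
    # Standard FAT packing: date = yyyyyyymmmmddddd (years since 1980),
    # time = hhhhhmmmmmmsssss (seconds stored in 2-second units).
    year = ((dos_date >> 9) & 0x7F) + 1980
    month = (dos_date >> 5) & 0x0F
    day = dos_date & 0x1F
    hour = (dos_time >> 11) & 0x1F
    minute = (dos_time >> 5) & 0x3F
    second = (dos_time & 0x1F) * 2
    if not (1 <= month <= 12 and 1 <= day <= 31):
        return 0  # invalid fields -> treat the time as unset
    t = calendar.timegm((year, month, day, hour, minute, second, 0, 0, 0))
    # Mirror the changelog behavior: any negative result is reported as 0.
    return t if t >= 0 else 0

print(dos2unixtime(33, 0))  # 1980-01-01 00:00:00 UTC -> 315532800
print(dos2unixtime(0, 0))   # month 0 is invalid      -> 0
```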
---------------- VERSION 2.05 --------------
5/15/06: Bug Fix: Fixed a bug in img_cat that could cause it to
go into an infinite loop. (BUG: 1489284)
5/16/06: Update: Fixed printf statements in tsk_error.c that caused
warning messages for some compilers. Reported by Jason DePriest.
5/17/06: Update: created a union of file system-specific file times in
FS_INFO (Patch by Wyatt Banks)
5/22/06: Bug Fix: Updated libewf to 20060520 to fix bug with reported
image size. (BUG: 1489287)
5/22/06: Bug Fix: Updated AFFLIB to 1.6.24 so that TSK could compile in
CYGWIN. (BUG: 1493013)
5/22/06: Update: Fixed some more printf statements that were causing
compile warnings.
5/23/06: Update: Added a file existence check to img_open to make error
message more accurate.
5/23/06: Update: Usage messages had an extra "Supported image types" message.
5/25/06: Update: Added block / page range to fsstat for raw and swapfs.
6/5/06: Update: fixed some typos in the output messages of sigfind (reported
by Jelle Smet)
6/9/06: Update: Added HFS+ template to sigfind (Patch by Wyatt Banks)
6/9/06: Update: Added ntfs and HFS template to sigfind.
6/19/06: Update: Began the Windows Visual Studio port
6/22/06: Update: Updated a myflags check in ntfs.c (reported by Wyatt Banks)
6/28/06: Update: Incorporated NTFS compression patch from I.D.E.A.L.
6/28/06: Update: Incorporated NTFS SID patch from I.D.E.A.L.
6/28/06: Bug Fix: A segfault could occur with NTFS if no inode was loaded
in the dent_walk code. (Reported by Pope).
7/5/06: Update: Added tsk_error_reset function and updated code to use it.
7/5/06: Update: Added more sanity checks to the DOS partitions code.
7/10/06: Update: Upgraded libewf to version 20060708.
7/10/06: Update: Upgraded AFFLIB to version 1.6.28
7/10/06: Update: added 'list' option to usage message so that file
system, image, volume system types are listed only if '-x list' is given.
Suggested by kenshin.
7/10/06: Update: Compressed NTFS files use the compression unit size
specified in the header.
7/10/06: Update: Added -R flag to icat to suppress recovery warnings and
use this flag in sorter to prevent FAT recovery messages from filling
up screen.
7/10/06: Update: file_walk functions now return FS_ERR_RECOVERY error
codes for most cases if the RECOVERY flag is set -- this allows the
errors to be more easily suppressed.
7/12/06: Update: Removed individual libraries and now make a single
static libtsk.a library.
7/12/06: Update: Cleaned up top-level Makefile. Use '-C' flag (suggested
by kenshin).
7/14/06: Update: Fixed and redesigned some of the new NTFS compression
code. Changed variable names.
7/20/06: Update: Fixed an NTFS compression bug if a sub-block was not
compressed.
7/21/06: Update: Made NTFS compression code thread friendly.
---------------- VERSION 2.04 --------------
12/1/05: Bug Fix: Fixed a bug in the verbose output of img_open
that would crash if no type or offset was given. Reported and
patched by Wyatt Banks.
12/20/05: Bug Fix: An NTFS directory index sanity check used 356
instead of 365 when calculating an upper bound on the times. Reported
by Wyatt Banks.
12/23/05: Bug Fix: Two printf statements in istat for NTFS printed
to stdout instead of a specific file handle. Reported by Wyatt
Banks.
1/22/06: Bug Fix: fsstat, imgstat and dcalc were using a char instead
of int for the return value of getopt, which caused some systems to not
execute the programs. (internal fix and later reported by Bernhard Reiter)
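The underlying C pitfall: getopt() returns an int, and on platforms where plain char is unsigned the -1 end-of-options sentinel can never match a char copy. A minimal sketch (option string and function name are illustrative):

```c
#include <stdio.h>
#include <unistd.h>

/* Sketch of the fix: the getopt() return value must be stored in an
 * int.  If plain char is unsigned (e.g. Linux on PowerPC/ARM), a
 * char copy of -1 becomes 255, the != -1 test is always true, and
 * the option loop never terminates. */
static int parse_args(int argc, char **argv)
{
    int ch;                     /* int, not char */
    while ((ch = getopt(argc, argv, "v")) != -1) {
        if (ch == 'v')
            printf("verbose mode\n");
    }
    return 0;
}
```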
2/23/06: Update: added support for FreeBSD 6.
2/27/06: Bug Fix: Indirect blocks would not be found by ifind with
UFS and Ext2. Reported by Nelson G. Mejias-Diaz. (BUG: 1440075)
3/9/06: Update: Added AFF image file support.
3/14/06: Bug Fix: If the first directory entry of a UFS or ExtX block
was unallocated, then later entries may not be shown. Reported by John
Langezaal. (BUG: 1449655)
4/3/06: Update: Finished the improved error handling. Many internal
changes, not many external changes. error() function no longer used
and instead tsk_err variables and function are used. This makes the
library more powerful.
4/5/06: Update: The byte offset for a volume is now passed to the mm_
and fs_ functions instead of img_open. This allows img_info to be used
for multiple volumes at the same time. This required some mm_ changes.
4/5/06: Update: All TSK libraries are written to the lib directory.
4/6/06: Update: Added FS_FLAG_DATA_RES flag to identify data that are
resident in ntfs_data_walk (suggested by Michael Cohen).
4/6/06: Update: The partition code (media Management) now checks that a
partition starts before the end of the image file. There are currently
no checks about the end of the partition though.
4/6/06: Update: The media management code now shows unpartitioned space
as such from the end of the last partition to the end of the image file
(using the image file size). (Suggested by Wyatt Banks).
4/7/06: Update: New version of ISO9660 code from Wyatt Banks and Crucial
Security added and other code updated to allow CDs to be analyzed.
4/7/06: There was a naming conflict between guessuXX in mmtools and fstools.
Renamed them to mm_guessXX and fs_guessXX.
4/10/06: Upgraded AFFLIB to 1.5.6
4/12/06: Added version of libewf and support for it in imgtools
4/13/06: Added new img_cat tool to extract raw data from an image format.
4/24/06: Upgraded AFFLIB to 1.5.12
4/24/06: split and raw check if the image is a directory
4/24/06: Updated libewf to 20060423-1
4/26/06: Updated makedefs to work with SunOS 5.10
5/3/06: Added iso9660 patch from Wyatt Banks so that version number
is not printed with file name.
5/4/06: Updated error checking in icat, istat, fatfs_dent, and ntfs_dent
5/8/06: Updated libewf to 20060505-1 to fix some gcc 2 compile errors.
5/9/06: Updated AFFLIB to 1.6.18
5/11/06: Cleaned up error handling (removed %m and unused legacy code)
5/11/06: Updated AFFLIB to 1.6.23
---------------- VERSION 2.03 --------------
7/26/05: Update: Removed incorrect print_version() statement from
fs_tools.h (reported by Jaime Chang)
7/26/05: Update: Renamed libraries to start with "lib"
7/26/05: Update: Removed the logfp variable for verbose statements
and instead use only stderr.
8/12/05: Update: If time is 0, then it is put as 00:00:00 instead of
the default 1970 or 1980 time.
8/13/05: Update: Added Unicode support for FAT and NTFS (Supported by
I.D.E.A.L. Technology Corp).
9/2/05: Update: Added Unicode support for UFS and ExtX. Non-printable
ASCII characters are no longer replaced with '^.'.
9/2/05: Update: Improved the directory entry sanity checks for UFS
and ExtX.
9/2/05: Update: Upgraded file to version 4.15.
9/2/05: Update: The dent_walk code of all file systems does not
abort if a sub-directory is encountered with an error. If it is the
top directory explicitly called, then it still gives an error.
9/2/05: Bug Fix: MD5 and SHA-1 values were incorrect under AMD64
systems because the incorrect variable sizes were being used.
(reported by: Regis Friend Cassidy. BUG: 1280966)
9/2/05: Update: Changed all licenses in TSK to Common Public License
(except those that were already IBM Public License).
9/15/05: Bug Fix: The Unicode names would not be displayed if the FAT
short name entry was using code pages. The ASCII name check was removed,
which may lead to more false positives during inode_walk.
10/05/05: Update: improved the sector size check when the FAT boot
sector is read (check for specific values besides just mod 512).
10/12/05: Update: The ASCII name check was added back into FAT, but
the check no longer looks for values over 0x80.
10/12/05: Update: The inode_walk function in FAT skips clusters
that are allocated to files. This makes it much faster, but it
will now not find unallocated directory entries in the slack space
of allocated files.
10/13/05: Update: sorter updated to handle unicode in HTML output.
---------------- VERSION 2.02 --------------
4/27/05: Bug Fix: the sizes of 'id' were not consistent in the
front-end and library functions for icat and ffind. Reported by
John Ward.
5/16/05: Bug Fix: fls could segfault in FAT if short name did not
exist. There was also a bug where the long file name variable
(fatfs->lfn_len) was not reset after processing a directory and the
next entry could incorrectly get the long name. Reported by Jaime
Chang. BUG: 1203673.
5/18/05: Update: Updated makedefs to support Darwin 8 (OS X Tiger)
5/23/05: Bug Fix: ntfs_dent_walk would not always stop when WALK_STOP
was returned. This caused some issues with previous versions of ifind.
This was fixed.
5/24/05: Bug Fix: Would not compile under Suse because it had header
file conflicts for the size of int64_t. Reported by: Andrea Ghirardini.
BUG: 1203676
5/25/05: Update: Fixed some memory leaks in fstools (reported by Jaime
Chang).
6/13/05: Update: Compiled with g++ to get better warning messages.
Fixed many signed versus unsigned comparisons, -1 assignments to
unsigned vars, and some other minor internal issues.
6/13/05: Bug Fix: if UFS or FFS found a valid dentry in unallocated
space, it could have a documented length that is larger than the
remaining unallocated space. This would cause an allocated name
to be skipped. BUG: 1210204 Reported by Christopher Betz.
6/13/05: Update: Improved design of all dent code so that there are no
more global variables.
6/13/05: Update: Improved design of FAT dent code so that FATFS_INFO
does not keep track of long file name information.
6/13/05: Bug Fix: If a cluster in a directory started with a strange
dentry, then FAT inode_walk would skip it. The fix is to make sure
that all directory sectors are processed. (BUG: 1203669). Reported
by Jaime Chang.
6/14/05: Update: Changed design of FS_INODE so that it contains the
inode address and the inode_walk action was changed to remove inum
as an argument.
6/15/05: Update: Added 'ils -o' back in as 'ils -O' to list open
and deleted files.
6/15/05: Update: Added '-m' flag to mactime so that it prints the month
as a number instead of its name.
7/2/05: Bug Fix: If an NTFS file did not have a $DATA or $IDX_*
attribute, then fls would not print it. The file had no content, but
the name should be shown. (BUG: 1231515) (Reported by Fuerst)
---------------- VERSION 2.01 --------------
3/24/05: Bug Fix: ffind would fail if the directory had two
non-printable chars. The handling of non-printable chars was changed
to replace with '^.'. (BUG: 1170310) (reported by Brian Baskin)
3/24/05: Bug Fix: icat would not print the output to stdout when split
images were used. There was a bug in the image closing process of
icat. (BUG: 1170309) (reported by Brian Baskin)
3/24/05: Update: Changed the header files in fstools to make fs_lib.h
more self contained.
4/1/05: Bug Fix: Imgtools byte offset with many leading 0s could
cause issues. (BUG: 1174977)
4/1/05: Update: Removed the check in mmtools/dos.c for a valid cluster
size because too many partition tables have that as a valid field.
Now it checks only the OEM name.
4/8/05: Update: Updated usage of 'strtoul' to 'strtoull' for blocks
and inodes.
---------------- VERSION 2.00 --------------
1/6/05: Update: Added '-b' flag to 'mmls' so that sizes can be
printed in bytes. Suggested and a patch proposed by Matt Kucenski
1/6/05: Update: Define DADDR_T, INUM_T, OFF_T, PNUM_T as a static
size and use those to store values in data structures. Updated
print statements as well.
1/6/05: Update: FAT now supports larger images because the inode
address space is 64-bits.
1/6/05: Moved guess and get functions to misc from mmtools and
fstools.
1/7/05: Update: Added imgtools with support for "raw" and "split"
layers. All fstools have been updated.
1/7/05: Update: removed dtime from ils output
1/9/05: Update: FAT code reads in clusters instead of sectors to
be faster (suggested by David Collett)
1/9/05: Update: mmtools uses imgtools for split images etc.
1/10/05: Update: Removed usage of global variables when using
file_walk internally.
1/10/05: Update: The BSD disk label code in mmls will automatically use
the next sector if the wrong one is given, instead of reporting an error.
1/10/05: Update: Updated file to version 4.12
1/11/05: Update: Added autodetect to file system tools.
1/11/05: Update: Changed names to specify file system type (not
OS-based)
1/11/05: Update: Added '-t' option to fsstat to give just the type.
1/11/05: Update: Added autodetect to mmls
1/17/05: Update: Added the 'mmstat' tool that gives the type of
volume system.
1/17/05: Update: Now using CVS for local version control - added
date stamps to all files.
2/20/05: Bug Fix: ils / istat would go into an infinite loop if the
attribute list had an entry with a length of 0. Reported by Angus
Marshall (BUG: 1144846)
3/2/05: Update: non-printable letters in ExtX/UFS file names are
now replaced by a '.'
3/2/05: Update: Made file system tools more library friendly by
making stubs for each application.
3/4/05: Update: Redesigned the diskstat tool and created the
disksreset tool to remove the HPA temporarily.
3/4/05: Update: Added imgstat tool that displays image format
details
3/7/05: Bug Fix: In fsstat on ExtX, the final group would have an
incorrect _percentage_ of free blocks value (although the actual
number was correct). Reported by Knut Eckstein. (BUG: 1158620)
3/11/05: Update: Renamed diskstat, disksreset, sstrings, and imgstat to
disk_stat, disk_sreset, srch_strings, and img_stat to make the names more
clear.
3/13/05: Bug Fix: The verbose output for fatfs_file_walk had an
incorrect sector address. Reported by Rudolph Pereira.
3/13/05: Bug Fix: The beta version had compiling problems on FreeBSD
because of a naming clash with the new 'fls' functions. (reported
by secman)
---------------- VERSION 1.74 --------------
11/18/04: Bug Fix: FreeBSD 5 would produce incorrect 'icat' output for
Ext2/3 & UFS1 images because it used a 64-bit on-disk address.
Reported by neutrino neutrino. (BUG: 1068771)
11/30/04: Bug Fix: The makefile in disktools would generate an error
on some systems (Cygwin) because of an extra entry. Reported by
Vajira Ganepola (BUG: 1076029)
---------------- VERSION 1.73 --------------
09/09/04: Update: Added journal support for EXT3FS and added jls
and jcat tools.
09/13/04: Updated: Added the major and minor device numbers to
EXTxFS istat.
09/13/04: Update: Added EXTxFS orphan code to 'fsstat'
09/24/04: Update: Fixed incorrect usage of 'ptr' and "" in action
of ntfs_dent.c. Did not affect any code, but could have in the
future. Reported by Pete Winkler.
09/25/04: Update: Added UFS flags to fsstat
09/26/04: Update: All fragments are printed for indirect block pointer
addresses in UFS istat.
09/29/04: Update: Print extended UFS2 attributes in 'istat'
10/07/04: Bug Fix: Changed usage of (int) to (uintptr_t) for pointer
arithmetic. Caused issues with Debian Sarge. (BUG: 1049352) - turned out
to be from changes made to package version so that it would compile in
64-bit system (BUG: 928278).
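The 64-bit issue behind that fix, as a minimal sketch (the function name is illustrative): casting a pointer to int truncates it on LP64 systems, while uintptr_t is guaranteed to round-trip a pointer value.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: compute a buffer offset with uintptr_t arithmetic.  With
 * (int) casts instead, the upper 32 bits of each 64-bit pointer
 * would be discarded before the subtraction. */
static ptrdiff_t offset_in_buf(const char *buf, const char *pos)
{
    return (ptrdiff_t)((uintptr_t)pos - (uintptr_t)buf);
}
```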
10/11/04: Update: Added diskstat to check for HPA on linux systems.
10/13/04: Update: Added root directory location to FAT32 fsstat output
10/17/04: Bug Fix: EXTxFS superblock location would not be printed
for images in fsstat that did not have sparse superblock (which is
rare) (BUG: 1049355)
10/17/04: Update: Added sigfind tool to find binary signatures.
10/27/04: Bug Fix: NTFS is_clust_alloc returned an error when loading
$MFT that had attribute list entry. Now I assume that clusters
referred to by the $MFT are allocated until the $MFT is loaded.
(BUG: 1055862).
10/28/04: Bug Fix: Check to see if an attribute with the same name
exists instead of relying on id only. (ntfs_proc_attrseq) Affects
the processing of attribute lists. Reported by Szakacsits Szabolcs,
Matt Kucenski, & Gene Meltser (BUG: 1055862)
10/28/04: Update: Removed usage of mylseek in fstools for all systems
(Bug: 928278)
---------------- VERSION 1.72 --------------
07/31/04: Update: Added flag to mft_lookup so that ifind can run in noabort
mode and it will not stop when it finds an invalid magic value.
08/01/04: Update: Removed previous change and removed MAGIC check
entirely. XP doesn't even care if the Magic is corrupt, so neither
does TSK. The update sequence check should find an invalid MFT
entry.
08/01/04: Update: Added error message to 'ifind' if none of the search
options are given.
08/05/04: Bug Fix: Fixed g_curdirptr recursive error by clearing the value
when dent_walk had to abort because a deleted directory could not be recovered.
(BUG: 1004329) Reported by epsilon@yahoo.com
08/16/04: Update: Added a sanity check to fatfs.c fat2unixtime to check
if the year is > 137 (which is the overflow date for the 32-bit UNIX time).
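The reasoning behind the cutoff: a 32-bit signed Unix time overflows 2^31 - 1 seconds after 1970, which lands in January 2038; with struct-tm years counted from 1900 that is year 138, so anything above 137 cannot be represented. A sketch (the helper name is hypothetical):

```c
/* Sketch of the sanity check: tm_year counts years from 1900, and a
 * 32-bit signed time_t overflows in January 2038 (tm_year 138), so
 * a FAT timestamp decoding to tm_year > 137 is treated as invalid
 * and converted to 0. */
static long checked_fat_time(int tm_year, long converted)
{
    if (tm_year > 137)
        return 0;               /* past the 32-bit overflow date */
    return converted;
}
```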
08/16/04: Update: Added first version of sstrings from binutils-2.15
08/20/04: Bug Fix: Fixed a bug where the group number for block 0 of an
EXT2FS file system would be reported as -1. 'dstat' no longer displays a
group value when the block is not part of a block group. (BUG: 1013227)
8/24/04: Update: If an attribute list entry is found with an invalid MFT
entry address, then it is ignored instead of an error being generated and
exiting.
8/26/04: Update: Changed internal design of NTFS to make is_clust_alloc
8/26/04: Update: If an attribute list entry is found with an invalid MFT
entry address AND the entry is unallocated, then no error message is
printed, it is just ignored or logged in verbose mode.
8/29/04: Update: Added support for 32-bit GID and UID in EXTxFS
8/30/04: Bug Fix: ntfs_dent_walk was adding 24 extra bytes to the
size of the index record for the final record processing (calc of
list_len) (BUG: 1019321) (reported and debugging help from Matt
Kucenski).
8/30/04: Bug Fix: fs_data_lookup was using an id of 0 as a wild
card, but 0 is a legit id value and this could cause confusion. To
solve this, a new FS_FLAG_FILE_NOID flag was added and a new
fs_data_lookup_noid function that will not use the id to lookup
values. (BUG: 1019690) (reported and debugging help from Matt
Kucenski)
8/30/04: Update: modified fs_data_lookup_noid to return unnamed data
attribute if that type is requested (instead of just relying on id
value in attributes)
8/31/04: Update: Updated file to v4.10, which seems to fix the
CYGWIN compile problem.
9/1/04: Update: Added more DOS partition types to mmls (submitted by
Matt Kucenski)
9/2/04: Update: Added EXT3FS extended attributes and Posix ACL to istat
output.
9/2/04: Update: Added free inode and block counts per group to fsstat for
EXT2FS.
9/7/04: Bug Fix: FreeBSD compile error for PRIx printf stuff in mmtools/gpt.c
---------------- VERSION 1.71 --------------
06/05/04: Update: Added sanity checks in fat to unix time conversion so that
invalid times are set to 0.
06/08/04: Bug Fix: Added a type cast when size is assigned in FAT
and removed the assignment to a 32-bit signed variable (which was no
longer needed). (Bug: 966839)
06/09/04: Bug Fix: Added a type cast to the 'getuX' macros because some
compilers were assuming it was signed (Bug: 966839).
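A sketch of why the cast matters in a getuX-style byte-assembly macro (the macro name here is illustrative, not TSK's actual definition): without forcing each byte through an unsigned type, a byte value of 0x80 or above can be sign-extended by some compilers before the shift, corrupting the assembled integer.

```c
#include <stdint.h>

/* Sketch of a little-endian 16-bit read with explicit unsigned
 * casts; treating (p)[1] as a signed char would sign-extend values
 * >= 0x80 and set all the upper bits of the result. */
#define GETU16_SKETCH(p) \
    ((uint16_t)(((uint16_t)(uint8_t)(p)[0]) | \
                ((uint16_t)((uint8_t)(p)[1]) << 8)))
```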
06/11/04: Update: Changed the NTFS magic check to use the aa55 at the
end and renamed the original "magic" value to oemname.
The oemname is now printed in fsstat.
06/12/04: Bug Fix: The NTFS serial number was being printed with
bytes in the wrong order in the fsstat output. (BUG: 972207)
06/12/04: Update: The begin offset value in index header for NTFS
was 16-bits instead of 32-bits.
06/22/04: Update: Created a library for the MD5 and SHA1 functions so
that it can be incorporated into other tools. Also renamed some of the
indexing tools that hfind uses.
06/23/04: Update: Changed output of 'istat' for NTFS images. Added more
data from $STANDARD_INFORMATION.
07/13/04: Update: Changed output of 'istat' for NTFS images again. Moved
more data to the $FILE_NAME section and added new data.
07/13/04: Update: Changed code for processing NTFS runs and no
longer check for the offset to be 0 in ntfs_make_data_run(). This
could have prevented some sparse files from being processed.
07/13/04: Update: Added flags for compressed and encrypted NTFS
files. They are not decrypted or uncompressed yet, just identified.
They cannot be displayed from 'icat', but the known layout is given
in 'istat'.
07/18/04: Bug Fix: Sometimes, 'icat' would report an error about an
existing FILLER entry in an NTFS attribute. This was traced to
instances when it was run on a non-base file record. There is now
a check for that to not show the error. (BUG: 993459)
07/19/04: Bug Fix: A run of -1 may exist for sparse files in non-NT
versions of NTFS. Changed check for this. reported by Matthew
Kucenski. (BUG: 994024).
07/24/04: Bug Fix: NTFS attribute names were missing (rarely) on
some files because the code assumed they would always be at offset
64 for non-res attributes (Bug: 996981).
07/24/04: Update: Made listing of unallocated NTFS file names less
strict. There was a check for file name length versus stream length.
07/24/04: Update: Added $OBJECT_ID output to 'istat'
07/24/04: Update: Fixed ntfs.c compile warning about constant too
large in time conversion code.
07/25/04: Update: Added attribute list contents to NTFS 'istat' output
07/25/04: Bug Fix: Not all slack space was being shown with 'dls -s'.
It was documented that this occurs, but it is not what would be
expected. (BUG: 997800).
07/25/04: Update: Changed output format of 'dls -s' so that it sends
zeros where the file content was. Therefore the output is now a
multiple of the data unit size. Also removed limitation to FAT &
NTFS.
07/25/04: Update: 'dcalc' now has the '-s' option to calculate the
original location of data from a slack space image (dls -s).
(from Chris Betz).
07/26/04: Update: Created the fs_os.h file and adjusted some of the
header files for the PRI macros (C99). Created defines for OSes that do
not have the macros already defined.
07/26/04: Non-release bug fix: Fixed file record size bug introduced with
recent changes.
07/27/04: Update: Added GPT support to mmls.
07/29/04: Update: Added '-p' flag to 'ifind' to find deleted NTFS files
that point to the given parent directory. Added '-l and -z' as well.
---------------- VERSION 1.70 --------------
04/21/04: Update: Changed attribute and mode for FAT 'istat' so
that actual FAT attributes are used instead of UNIX translation.
04/21/04: Update: The FAT 'istat' output better handles Long File
Name entries
04/21/04: Update: The FAT 'istat' output better handles Volume Label
entry
04/21/04: Update: Allowed the FAT volume label entry to be displayed
with 'ils'
04/21/04: Update: Allowed the FAT volume label entry to be displayed
with 'fls'
04/24/04: Update: 'dstat' on a FAT cluster now shows the cluster
address in addition to the sector address.
04/24/04: Update: Added the cluster range to the FAT 'fsstat' output
05/01/04: Update: Improved the FAT version autodetect code.
05/02/04: Update: Removed 'H' flag from 'icat'.
05/02/04: Update: Changed all of the FS_FLAG_XXX variables in the
file system tools to constants that are specific to the usage
(NAME, DATA, META, FILE).
05/03/04: Update: fatfs_inode_walk now goes by sectors instead of clusters
to get more dentries from slack space.
05/03/04: Bug Fix: The allocation status of FAT dentries was set only by
the flag and not the allocation status of the cluster it is located in.
(BUG: 947112)
05/03/04: Update: Improved comments and variable names in FAT code
05/03/04: Update: Added '-r' flag to 'icat' for deleted file recovery
05/03/04: Update: Added RECOVERY flag to file_walk for deleted file
recovery
05/03/04: Update: Added FAT file recovery.
05/03/04: Update: Removed '-H' flag from 'icat'. Default is to
display holes.
05/03/04: Update: 'fls -r' will recurse down deleted directories in FAT
05/03/04: Update: 'fsstat' reports FAT clusters that are marked as BAD
05/03/04: Update: 'istat' for FAT now shows recovery clusters for
deleted files.
05/04/04: Update: Added output to 'fsstat' for FAT file systems by adding
a list of BAD sectors and improving the amount of layout information. I
also changed some of the internal variables.
05/08/04: Update: Removed addr_bsize from FS_INFO, moved block_frags
to FFS_INFO, modified dcat output only data unit size.
05/20/04: Update: Added RECOVERY flag to 'ifind' so that it can find the
data units that are allocated to deleted files
05/20/04: Update: Added icat recovery options to 'sorter'.
05/20/04: Update: Improved the naming convention in sorter for the 'ils'
dead files.
05/21/04: Update: Added outlook to sorter rules (from David Berger)
05/27/04: Bug Fix: Added <linux/unistd.h> to mylseek.c so that it compiles
with Fedora Core 2 (Patch by Angus Marshall) (BUG: 961908).
05/27/04: Update: Changed the letter with 'fls -l' for FIFO to 'p'
instead of 'f' (reported by Dave Henkewick).
05/28/04: Update: Added '-u' flag to 'dcat' so that the data unit size
can be specified for raw, swap, and dls image types.
05/28/04: Update: Changed the size argument of 'dcat' to be number of
data units instead of size in bytes (suggestion by Harald Katzer).
---------------- VERSION 1.69 --------------
03/06/04: Update: Fixed some memory leaks in ext2fs_close. reported
by Paul Bakker.
03/10/04: Bug Fix: If the '-s' flag was used with 'icat' on an EXT2FS
or FFS file system, then a large amount of extra data came out.
Reported by epsion. (BUG: 913874)
03/10/04: Bug Fix: One of the verbose outputs in ext2fs.c was being sent
to STDOUT instead of logfp. (BUG: 913875)
04/14/04: Update: Added more data to fsstat output of FAT file system.
04/15/04: Bug Fix: The last sector of a FAT file system may not
be analyzed. (BUG: 935976)
04/16/04: Update: Added full support for swap and raw by making the
standard files and functions for them instead of the hack in dcat.
Suggested by (and initial patch by) Paul Baker.
04/18/04: Update: Changed error messages in EXT2/3FS code to be extXfs.
04/18/04: Update: Upgraded to version 4.09 of 'file'. This will
help fix some of the problems people have had compiling it under
OS X 10.3.
04/18/04: Update: Added compiling support for SFU 3.5 (Microsoft). Patches
from an anonymous person.
---------------- VERSION 1.68 --------------
01/20/04: Bug Fix: FAT times were an hour too fast during daylight savings.
Now use mktime() instead of manual calculation. Reported by Randall
Shane. (BUG: 880606)
02/01/04: Update: 'hfind -i' now reports the header entry as an invalid
entry. The first header row was ignored.
02/20/04: Bug Fix: indirect block pointer blocks would not be identified by
the ifind tool. Reported by Knut Eckstein (BUG: 902709)
03/01/04: Update: Added fs->seek_pos check to fs_read_random.
---------------- VERSION 1.67 --------------
11/15/03: Bug Fix: Added support for OS X 10.3 to src/makedefs. (BUG: 843029)
11/16/03: Bug Fix: Mac partition tables could generate an error if there were
VOID-type partitions. (BUG: 843366)
11/21/03: Update: Changed NOABORT messages to verbose messages, so invalid
data is not printed during 'ifind' searches.
11/30/03: Bug Fix: icat would not hide the 'holes' if '-h' was given because
the _UNALLOC flag was always being passed to file_walk. (reported by
Knut Eckstein). (BUG: 851873)
11/30/03: Bug Fix: NTFS data_walk was not using _ALLOC and _UNALLOC flags
and other code that called it was not either. (BUG: 851895)
11/30/03: Bug Fix: Not all needed commands were using _UNALLOC when they
called file_walk (although for most cases it did not matter because
sparse files would not be found in a directory for example). (Bug: 851897)
12/09/03: Bug Fix: FFS and EXT2FS code was using OFF_T type instead of
size_t for the size of the file. This could result in a file > 2GB
being reported as a negative size on some systems (BUG: 856957).
12/26/03: Bug Fix: ffind would crash for root directory of FAT image.
Added a NULL check and a NULL name to the fake root directory entry.
(BUG: 871219)
01/05/04: Bug Fix: The clustcnt value for FAT was incorrectly calculated
and was too large for FAT12 and FAT16 by 32 sectors. This could produce
extra entries in the 'fsstat' output when the FAT is dumped.
(BUG: 871220)
01/05/04: Bug Fix: ils, fls, and istat were not printing the full size
of files that are > 2GB. (reported by Knut Eckstein) (BUG: 871457)
01/05/04: Bug Fix: The EXT2FS and EXT3FS code was not using the
i_dir_acl value as the upper 32-bits of regular files that are
> 2GB (BUG: 871458)
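What that fix amounts to: for regular files, ext2/3 reuses the i_dir_acl field as the upper 32 bits of the file size. A sketch (the function name and field handling are illustrative):

```c
#include <stdint.h>

/* Sketch: assemble the 64-bit size of an ext2/3 regular file from
 * the low 32 bits in i_size and the high 32 bits stored in
 * i_dir_acl (a field that is only an ACL pointer for directories). */
static uint64_t ext2_full_size(uint32_t i_size, uint32_t i_dir_acl,
                               int is_regular_file)
{
    uint64_t size = i_size;
    if (is_regular_file)
        size |= (uint64_t)i_dir_acl << 32;
    return size;
}
```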
01/06/04: Mitigation: An error was reported where sorter complained
that icat was being passed a '-1' argument. I can't find how that would
happen, so I added quotes to all arguments so that the next time it
occurs, the error is more useful (BUG: 845840).
01/06/04: Update: Incorporated patch from Charles Seeger so that 'cc'
can be used and compile time warnings are fixed with Sun 'cc'.
01/06/04: Update: Upgraded file from v3.41 to v4.07
---------------- VERSION 1.66 --------------
09/02/03: Bug Fix: Would not compile under OpenBSD 3 because fs_tools.h
& mm_tools was missing a defined statement (reported by Randy - m0th_man)
NOTE: Bugs will now have an entry in the SourceForge bug tracking
system.
10/13/03: Bug Fix: buffer was not being cleared between uses and length
incorrectly set in NTFS resulted in false deleted file names being shown
when the '-r' flag was given. The extra entries were from the previous
directory. (BUG: 823057)
10/13/03: Bug Fix: The results of 'sorter' varied depending on the version
of Perl and the system. If the 'file' output matched more than one rule,
sorter could not guarantee which one would be applied. Therefore, results were
different for some files and some machines. 'sorter' now enforces the
ordering based on the order they are in the configuration file. The
entries at the end of the file have priority over the first entries
(generic rules to specific rules). (BUG: 823057)
10/14/03: Update: 'mmls' prints 'MS LVM' with partition type 0x42 now.
10/25/03: Bug Fix: NTFS could have a null pointer crash if the image
was very corrupt and $Data was not found for the MFT.
11/10/03: Bug Fix: NTFS 'ffind' would only report the file name and not
the attribute name because the type and id were ignored. ffind and
ntfs_dent were updated - found during NTFS keyword search test.
(BUG: 831579)
11/12/03: Update: added support for Solaris x86 partition tables to 'mmls'
11/12/03: Update: Modified the sparc data structure to add the correct
location of the 'sanity' magic value.
11/15/03: Update: Added '-s' flag to 'icat' so that slack space is also
displayed.
---------------- VERSION 1.65 --------------
08/03/03: Bug Fix: 'sorter' now checks for inode values that are too
small to avoid 'icat' errors about invalid inode values.
08/19/03: Update: 'raw' is now a valid type for 'dcat'.
08/21/03: Update: mactime and sorter look for perl5.6.0 first.
08/21/03: Update: Removed NSRL support from 'sorter' until a better
way to identify the known good and known bad files is found
08/21/03: Bug Fix: File paths now have < and > replaced with HTML
encoding in HTML output (ils names were not being shown)
08/25/03: Update: Added 'nsrl.txt' describing why the NSRL functionality
was removed.
08/27/03: Update: Improved code in 'mactime' to reduce warnings when
'-w' is used with Perl ('exists' checks on arrays).
08/27/03: Update: Improved code in 'sorter' to reduce warnings when
'-w' is used with Perl (inode_int for NTFS).
---------------- VERSION 1.64 --------------
08/01/03: Docs Fix: The Sun VTOC was documented as Virtual TOC and it
should be Volume TOC (Jake @ UMASS).
08/02/03: Bug Fix: Some compilers complained about verbose logging
assignment in 'mmls' (Ralf Spenneberg).
---------------- VERSION 1.63 --------------
06/13/03: Update: Added 'mmtools' directory with 'dos' partitions
and 'mmls'.
06/18/03: Update: Updated the documents in the 'doc' directory
06/19/03: Update: Updated error message for EXT3FS magic check
06/27/03: Update: Added slot & table number to mmls
07/08/03: Update: Added mac support to mmtools
07/11/03: Bug Fix: 'sorter' was not processing all unallocated meta
data structures because of a regexp error. (reported by Jeff Reava)
07/16/03: Update: Added support for FreeBSD5
07/16/03: Update: Added BSD disk labels to mmtools
07/28/03: Update: Relaxed requirements for DOS directory entries; the wtime
can be zero (reported by Adam Uccello).
07/30/03: Update: Added SUN VTOC to mmtools
07/31/03: Update: Added NetBSD support (adam@monkeybyte.org)
08/01/03: Update: Added more sanity checks to FAT so that it does not
try to process NTFS images that have the same MAGIC value
---------------- VERSION 1.62 --------------
04/11/03: Bug Fix: 'fsstat' for an FFS file system could report data
fragments in the last group that were larger than the maximum
fragment
04/11/03: Bug Fix: 'ffs' allows the image to not be a multiple of the
block size. A read error occurred when it tried to read the last
fragments since a whole block could not be read.
04/15/03: Update: Added debug statements to FAT code.
04/26/03: Update: Added verbose statements to FAT code
04/26/03: Update: Added NOABORT flag to dls -s
04/26/03: Update: Added stderr messages for errors that are not aborted
because of NOABORT
05/27/03: Update: Added 'mask' field to FATFS_INFO structure and changed
code in fatfs.c to use it.
05/27/03: Update: isdentry now checks that the starting cluster is
within a valid range.
05/27/03: Bug Fix: Added a sanitizer to 'sorter' to remove invalid chars
from the 'file' output and reduce the warnings from Perl.
05/28/03: Bug Fix: Improved sanitize expression in 'sorter'
05/28/03: Update: Added '-d' option to 'mactime' to allow output to be
given in comma-delimited format for importing into a spreadsheet or
other graphing tool
06/09/03: Update: Added hourly summary / indexing to mactime
06/09/03: Bug Fix: sorter would not allow linux-ext3 fstype
---------------- VERSION 1.61 --------------
02/05/03: Update: Started addition of image thumbnails to sorter
03/05/03: Update: Updated 'file' to version 3.41
03/16/03: Update: Added comments and NULL check to 'ifind'
03/16/03: Bug Fix: Added a valid magic of 0 for MFT entries. This was
found in an XP image.
03/26/03: Bug Fix: fls would crash when an inode of 0 and a clock skew
were given. Fixed the bug in fls.c (debug help from Josep Homs)
03/26/03: Update: Added more verbose comments to ntfs_dent.c.
03/26/03: Bug Fix: 'ifind' for a path could return a result that was
shorter than the requested name (strncmp was used)
03/26/03: Update: Short FAT names can be used in 'ifind -n' and
error messages were improved
03/26/03: Bug Fix: A final NTFS Index Buffer was not always processed in
ntfs_dent.c, which resulted in files not being shown. This was fixed
with debugging help from Matthew Shannon.
03/27/03: Update: Added an 'index.html' for image thumbnails in sorter
and added a 'details' link from the thumbnail to the images.html file
03/27/03: Update: 'sorter' can now take a directory inode to start
processing
03/27/03: Update: added '-z' flag when running 'file' in 'sorter' so that
compressed file contents are reported
03/27/03: Update: added '-i' flag to 'mactime' that creates a daily
summary of events
03/27/03: Update: Added support for Version 2 of the NSRL in 'hfind'
04/01/03: Update: Added support for Hash Keeper to 'hfind'
04/01/03: Update: Added '-e' flag to 'hfind' for extended info
(currently hashkeeper only)
---------------- VERSION 1.60 --------------
10/31/02: Bug Fix: the unmounting status of EXT2FS in the 'fsstat' command
was not correct (reported by Stephane Denis).
11/24/02: Bug Fix: The -v argument was not allowed on istat or fls (Michael
Stone)
11/24/02: Bug Fix: When doing an 'ifind' on a UNIX fs, it could abort if it
looked at an unallocated inode with invalid indirect block pointers.
This was fixed by adding a "NOABORT" flag to the walk code and adding
error checks in the file system code instead of relying on the fs_io
code. (suggested by Michael Stone)
11/26/02: Update: ifind has a '-n' argument that allows one to specify a
file name, and it searches for the meta data structure allocated to it
(suggested by William Salusky).
11/26/02: Update: Now that there is a '-n' flag with 'ifind', the '-d'
flag was added to specify the data unit address. The old syntax of
giving the data_unit at the end is no longer supported.
11/27/02: Update: Added sanity checks on meta data and data unit addresses
earlier in the code.
12/12/02: Update: Added additional debug statements to NTFS code
12/19/02: Update: Moved 'hash' directory to 'hashtools'
12/19/02: Update: Started development of 'hfind'
12/31/02: Update: Improved verbose debug statements to show full 64-bit
offsets
01/02/03: Update: Finished development of 'hfind' with ability to update
for next version of NSRL (which may have a different format)
01/05/03: Bug Fix: FFS and EXT2FS symbolic link destinations were not
properly NULL terminated and some extra chars were appended in 'fls'
(later reported by Thorsten Zachmann)
01/06/03: Bug Fix: getu64() was not properly masking byte sizes and some
data was being lost. This caused incorrect times to be displayed in some
NTFS files.
01/06/03: Bug Fix: ifind reported incorrect ownership for some UNIX
file systems if the end fragments were allocated to a different file than
the first ones were.
01/07/03: Update: Renamed the src/mactime directory to src/timeline.
01/07/03: Update: Updated README and man pages for hfind and sorter
01/12/03: Bug Fix: ntfs_mft_lookup was casting a 64-bit value to a 32-bit
variable. This caused MFT Magic errors. Reported and debugged by
Keven Murphy
01/12/03: Update: Added verbose argument to 'fls'
01/12/03: Bug Fix: '-V' argument to 'istat' was doing verbose instead of
version
01/13/03: Update: Changed static sizes of OFF_T and DADDR_T in Linux
version to the actual 'off_t' and 'daddr_t' types
01/23/03: Update: Changed use of strtok_r to strtok in ifind.c so that
Mac 10.1 could compile (Dave Goldsmith).
01/28/03: Update: Improved code in 'hfind' and 'sorter' to handle
files with spaces in the path (Dave Goldsmith).
---------------- VERSION 1.52 --------------
09/24/02: Bug Fix: Memory leak in ntfs_dent_idxentry(), ntfs_find_file(),
and ntfs_dent_walk()
09/24/02: Update: Removal of index sequences for index buffers is now
done using upd_off, which will allow for NTFS to move the structure in
the future.
09/26/02: Update: Added create time for NTFS / STANDARD_INFO to
istat output.
09/26/02: Update: Changed the method that the NTFS time is converted
to UNIX time. Should be more efficient.
10/09/02: Update: dcat error changed.
10/02/02: Update: Includes a Beta version of 'sorter'
---------------- VERSION 1.51 --------------
09/10/02: Bug Fix: Fixed a design bug that would not allow attribute
lists in $MFT. This bug would generate an error that complained about
an invalid MFT entry in attribute list.
09/10/02: Update: The size of files and directories is now calculated
after each time proc_attrseq() is called so that it is more up to date
when dealing with attribute lists. The size has the sizes of all
$Data, $IDX_ROOT, and $IDX_ALLOC streams.
09/10/02: Update: The maximum number of MFT entries is now calculated
each time an MFT entry is processed while loading the MFT. This
allows us to reflect what the maximum possible MFT entry is at that
given point based on how many attribute lists have been processed.
09/10/02: Update: Added file version 3.39 to distro (bigger magic files)
(Salusky)
09/10/02: Bug Fix: fs_data was wasting memory when it was allocated
09/10/02: Update: added a fs_data_alloc() function
09/12/02: Bug Fix: Do not give an error if an attribute list of an
unallocated file points to an MFT that no longer claims it is a
member of the list.
09/12/02: Update: No longer need version to remove update sequence
values from on-disk buffers
09/19/02: Bug Fix: fixed memory leak in ntfs_load_ver()
09/19/02: Bug Fix: Update sequence errors were displayed because of a
bug that occurred when an MFT entry crossed a run in $MFT. Only occurred
with 512-byte clusters and an odd number of clusters in a run.
09/19/02: Update: New argument to ils, istat, and fls that allows user to
specify a time skew in seconds of the compromised system. Originated
from discussion at DFRWS II.
09/19/02: Update: Added '-h' argument to mactime to display header info
---------------- VERSION 1.50 --------------
04/21/02: icat now displays idxroot attribute for NTFS directories
04/21/02: fs_dent_print functions now are passed the FS_DATA structure
instead of the extra inode and name strings. (NTFS)
04/21/02: fs_dent_print functions display alternate data stream size instead
of the default data size (NTFS)
04/24/02: Fixed bug in istat that displayed too many fragments with ffs images
04/24/02: Fixed bug in istat that did not display sparse files correctly
04/24/02: fsstat of FFS images now identifies the fragments at the
beginning of cyl groups as data fragments.
04/26/02: Fixed bug in ext2fs_dent_parse_block that did not advance the
directory entry pointer far enough each time
04/26/02: Fixed bug in ext2fs_dent_parse_block that gave an error if
a file name was exactly 255 chars
04/29/02: Removed the getX functions from get.c as they are now macros
05/11/02: Added support for lowercase flag in FAT
05/11/02: Added support for sequence values (NTFS)
05/13/02: Added FS_FLAG_META for FAT
05/13/02: Changed ifind so that it looks up the block to identify if it is
a meta data block when an inode cannot be found
05/13/02: Added a conditional to ifind so that it handles sparse files better
05/19/02: Changed icat so that the default attribute type is set in the
file_walk function
05/20/02: ils and dls now use boundary inode & block values if ones that
are too large or too small are given
05/21/02: istat now displays all NTFS times
05/21/02: Created functions to just display date and time
05/24/02: moved istat functionality to the specific file system file
05/25/02: added linux-ext3 flag, but no new features
05/25/02: Added sha1 (so Autopsy can use the NIST SW Database)
05/26/02: Fixed bug with FAT that did not return all slack space on file_walk
05/26/02: Added '-s' flag to dls to extract slack space of FAT and NTFS
06/07/02: fixed _timezone variable so correct times are shown in CYGWIN
06/11/02: *_copy_inode now sets the flags for the inode
06/11/02: fixed bug in mactime that displayed a duplicate entry with time
because of header entries in body file
06/12/02: Added ntfs.README doc
06/16/02: Added a comment to file Makefile to make it easier to compile for
an IR CD.
06/18/02: Fixed NTFS bug that showed ADS when only deleted files were supposed
to be shown (when ADS in directory)
06/19/02: added the day of the week to the mactime output (Tan)
07/09/02: Fixed bug that added extra chars to end of symlink destination
07/17/02: 1.50 Released
---------------- VERSION 1.00 --------------
- Integrated TCT-1.09 and TCTUTILs-1.01
- Fixed bug in bcat if size is not given with type of swap.
- Added platform indep by including the structures of each file system type
- Added flags for large file support under linux
- blockcalc was off by 1 if calculated using the raw block number and
not the one that lazarus spits out (which starts at 1)
- Changed the inode_walk and block_walk functions slightly to return a
value so that a walk can be ended in the middle of it.
- FAT support added
- Improved ifind to better handle fragments
- '-z' flag to fls and istat now uses the time zone string instead of an
integer value.
- no longer prepend / in _dent
- verify that '-m' directory in fls ends with a '/'
- identify the destination of sym links
- fsstat tool added
- fixed caching bug with FAT12 when the value overlapped cache entries
- added mactime
- removed the <inode> value in fls when printing mac format (inode is now printed in mactime)
- renamed src/misc directory to src/hash (it only has md5 and will have sha)
- renamed aux directory to misc (Windows doesn't allow aux as a name ??)
- Added support for Cygwin
- Use the flags in super block of EXT2FS to identify v1 or v2
- removed file system types of linux1 and linux2 and linux
- added file system type of linux-ext2 (as ext3 is becoming more popular)
- bug in file command that reported seek error for object files and STDIN
[![Build Status](https://travis-ci.org/sleuthkit/sleuthkit.svg?branch=develop)](https://travis-ci.org/sleuthkit/sleuthkit)
[![Build status](https://ci.appveyor.com/api/projects/status/8f7ljj8s2lh5sqfv?svg=true)](https://ci.appveyor.com/project/bcarrier/sleuthkit)
# [The Sleuth Kit](http://www.sleuthkit.org/sleuthkit)
## INTRODUCTION
The Sleuth Kit is an open source forensic toolkit for analyzing
Microsoft and UNIX file systems and disks. The Sleuth Kit enables
investigators to identify and recover evidence from images acquired
during incident response or from live systems. The Sleuth Kit is
open source, which allows investigators to verify the actions of
the tool or customize it to specific needs.
The Sleuth Kit uses code from the file system analysis tools of
The Coroner's Toolkit (TCT) by Wietse Venema and Dan Farmer. The
TCT code was modified for platform independence. In addition,
support was added for the NTFS (see [wiki/ntfs](http://wiki.sleuthkit.org/index.php?title=NTFS_Implementation_Notes))
and FAT (see [wiki/fat](http://wiki.sleuthkit.org/index.php?title=FAT_Implementation_Notes)) file systems. Previously, The Sleuth Kit was
called The @stake Sleuth Kit (TASK). The Sleuth Kit is now independent
of any commercial or academic organizations.
It is recommended that these command line tools be used with
the Autopsy Forensic Browser. Autopsy (http://www.sleuthkit.org/autopsy)
is a graphical interface to the tools of The Sleuth Kit that automates
many of the procedures and provides features such as image searching
and MD5 image integrity checks.
As with any investigation tool, any results found with The Sleuth
Kit should be recreated with a second tool to verify the data.
## OVERVIEW
The Sleuth Kit allows one to analyze a disk or file system image
created by 'dd', or a similar application that creates a raw image.
These tools are low-level and each performs a single task. When
used together, they can perform a full analysis. For a more detailed
description of these tools, refer to [wiki/filesystem](http://wiki.sleuthkit.org/index.php?title=TSK_Tool_Overview).
The tools are briefly described in a file system layered approach. Each
tool name begins with a letter that is assigned to the layer.
### File System Layer:
A disk contains one or more partitions (or slices). Each of these
partitions contains a file system. Examples of file systems include
the Berkeley Fast File System (FFS), Extended 2 File System (EXT2FS),
File Allocation Table (FAT), and New Technologies File System (NTFS).
The fsstat tool displays file system details in an ASCII format.
Examples of data in this display include volume name, last mounting
time, and the details about each "group" in UNIX file systems.
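As a brief sketch of this layer (the device, image name, and block size below are hypothetical example values, not from a real case):

```shell
# Acquire a raw image with dd, then display the file system details.
# '/dev/sdb' and 'disk.img' are example values.
dd if=/dev/sdb of=disk.img bs=4096 conv=noerror,sync
fsstat disk.img
```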
### Content Layer (block):
The content layer of a file system contains the actual file content,
or data. Data is stored in large chunks, with names such as blocks,
fragments, and clusters. All tools in this layer begin with the letters
'blk'.
The blkcat tool can be used to display the contents of a specific unit of
the file system (similar to what 'dd' can do with a few arguments).
The unit size is file system dependent. The 'blkls' tool displays the
contents of all unallocated units of a file system, resulting in a
stream of bytes of deleted content. The output can be searched for
deleted file content. The 'blkcalc' program allows one to identify the
unit location in the original image of a unit in the 'blkls' generated
image.
A new feature of The Sleuth Kit from TCT is the '-l' argument to
'blkls' (or 'unrm' in TCT). This argument lists the details for data
units, similar to the 'ils' command. The 'blkstat' tool displays
the statistics of a specific data unit (including allocation status
and group number).
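A minimal sketch of the content-layer tools follows; the image name and all unit addresses are hypothetical examples:

```shell
blkcat image.dd 100            # display the contents of data unit 100
blkstat image.dd 100           # allocation status (and group) of unit 100
blkls image.dd > image.blkls   # extract all unallocated units to a file
blkcalc -u 50 image.dd         # map unit 50 of the blkls output back to
                               # its address in the original image
```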
### Metadata Layer (inode):
The metadata layer describes a file or directory. This layer contains
descriptive data such as dates and size as well as the addresses of the
data units. This layer describes the file in terms that the computer
can process efficiently. The structures that the data is stored in
have names such as inode and directory entry. All tools in this layer
begin with an 'i'.
The 'ils' program lists some values of the metadata structures.
By default, it will only list the unallocated ones. The 'istat'
displays metadata information in an ASCII format about a specific
structure. New to The Sleuth Kit is that 'istat' will display the
destination of symbolic links. The 'icat' function displays the
contents of the data units allocated to the metadata structure
(similar to the UNIX cat(1) command). The 'ifind' tool will identify
which metadata structure has allocated a given content unit or
file name.
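The metadata-layer tools can be sketched as follows (the image name, structure number, and addresses are example values):

```shell
ils image.dd                    # list unallocated metadata structures
istat image.dd 16               # details about metadata structure 16
icat image.dd 16 > recovered    # content allocated to structure 16
ifind -d 100 image.dd           # which structure allocated data unit 100
ifind -n "etc/passwd" image.dd  # which structure the name maps to
```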
Refer to the [ntfs wiki](http://wiki.sleuthkit.org/index.php?title=NTFS_Implementation_Notes)
for information on addressing metadata attributes in NTFS.
### Human Interface Layer (file):
The human interface layer allows one to interact with files in a
manner that is more convenient than directly with the metadata
layer. In some operating systems there are separate structures for
the metadata and human interface layers while others combine them.
All tools in this layer begin with the letter 'f'.
The 'fls' program lists file and directory names. This tool will
display the names of deleted files as well. The 'ffind' program will
identify the name of the file that has allocated a given metadata
structure. With some file systems, deleted files will be identified.
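For example (hypothetical image and structure number):

```shell
fls -rd image.dd    # recursively list deleted file and directory names
ffind image.dd 16   # find the name that points to metadata structure 16
```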
#### Time Line Generation
Time lines are useful to quickly get a picture of file activity.
Using The Sleuth Kit, a time line of file MAC times can easily be
made. The mactime (TCT) program takes as input the 'body' file
that was generated by fls and ils. To get data on allocated and
unallocated file names, use 'fls -rm dir'; for unallocated inodes,
use 'ils -m'. Note that the behavior of these tools is different
than in TCT. For more information, refer to [wiki/mactime](http://wiki.sleuthkit.org/index.php?title=Mactime).
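The steps above can be sketched as a pipeline (the image name, mount point, and time zone are example values):

```shell
fls -rm / image.dd  > body.txt          # allocated and unallocated names
ils -m image.dd    >> body.txt          # unallocated inodes
mactime -b body.txt -z EST5EDT > timeline.txt
mactime -b body.txt -d > timeline.csv   # comma-delimited for a spreadsheet
```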
#### Hash Databases
Hash databases are used to quickly identify if a file is known. The
MD5 or SHA-1 hash of a file is taken and a database is used to identify
if it has been seen before. This allows identification to occur even
if a file has been renamed.
The Sleuth Kit includes the 'md5' and 'sha1' tools to generate
hashes of files and other data.
Also included is the 'hfind' tool. The 'hfind' tool allows one to create
an index of a hash database and perform quick lookups using a binary
search algorithm. The 'hfind' tool can perform lookups on the NIST
National Software Reference Library (NSRL) (www.nsrl.nist.gov) and
files created from the 'md5' or 'md5sum' command. Refer to the
[wiki/hfind](http://wiki.sleuthkit.org/index.php?title=Hfind) file for more details.
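A sketch of a typical 'hfind' session (the database file name and the hash value are placeholders):

```shell
hfind -i nsrl-md5 NSRLFile.txt    # build an index of the NSRL database
hfind NSRLFile.txt 0123456789abcdef0123456789abcdef   # look up an MD5 hash
```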
#### File Type Categories
Different types of files typically have different internal structure.
The 'file' command comes with most versions of UNIX and a copy is
also distributed with The Sleuth Kit. This is used to identify
the type of file or other data regardless of its name and extension.
It can even be used on a given data unit to help identify what file
used that unit for storage. Note that the 'file' command typically
uses data in the first bytes of a file so it may not be able to
identify a file type based on the middle blocks or clusters.
The 'sorter' program in The Sleuth Kit will use other Sleuth Kit
tools to sort the files in a file system image into categories.
The categories are based on rule sets in configuration files. The
'sorter' tool will also use hash databases to flag known bad files
and ignore known good files. Refer to the [wiki/sorter](http://wiki.sleuthkit.org/index.php?title=Sorter)
file for more details.
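For instance, sorting the files in an image into categories and saving the results (all paths here are example values):

```shell
sorter -d data/ image.dd    # sort files into categories under 'data/'
```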
## LICENSE
There are a variety of licenses used in TSK based on where they
were first developed. The licenses are located in the [licenses
directory](https://github.com/sleuthkit/sleuthkit/tree/develop/licenses).
- The file system tools (in the
[tools/fstools](https://github.com/sleuthkit/sleuthkit/tree/develop/tools/fstools)
directory) are released under the IBM open source license and Common
Public License.
- srch_strings and fiwalk are released under the GNU General Public License
- Other tools in the tools directory are released under the Common Public License
- The modifications to 'mactime' from the original 'mactime' in TCT
and 'mac-daddy' are released under the Common Public License.
The library uses utilities that were released under MIT and BSD 3-clause.
## INSTALL
For installation instructions, refer to the INSTALL.txt document.
## OTHER DOCS
The [wiki](http://wiki.sleuthkit.org/index.php?title=Main_Page) contains documents that
describe the provided tools in more detail. The Sleuth Kit Informer is a newsletter that contains
new documentation and articles.
> www.sleuthkit.org/informer/
## MAILING LIST
Mailing lists exist on SourceForge: one for users and a low-volume
one for announcements.
> http://sourceforge.net/mail/?group_id=55685
Brian Carrier
carrier at sleuthkit dot org
The Sleuth Kit
Win32 README File
http://www.sleuthkit.org/sleuthkit
Last Modified: Jan 2014
====================================================================
The Sleuth Kit (TSK) runs on Windows. If you simply want the
executables, you can download them from the www.sleuthkit.org
website.
If you want to build your own executables, you have two options.
1) Microsoft Visual Studio. The VS solution file is in the win32
directory. Refer to the win32\BUILDING.txt file for details for
building the 32-bit and 64-bit versions.
2) mingw32. See below for more details.
---------------------------------------------------------------
MINGW32
If you're using mingw32 on Linux, simply give the
"--host=i586-mingw32msvc" argument when running the './configure'
script and use 'make' to compile. If you're using mingw32 on Windows,
'./configure' and 'make' will work directly.
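In other words, a cross-compile on Linux looks like:

```shell
./configure --host=i586-mingw32msvc
make
```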
Note that to compile the Java bindings you will need to have a JDK
installed. By default, the Oracle JDK on Windows is installed
in a path such as C:\Program Files\Java\jdk1.6.0_16\. GNU autotools
(which are used if you do a mingw32 compile, but not a Visual Studio
compile) do not handle paths containing spaces, so you will need
to copy the JDK to a directory without spaces in the name, such as
C:\jdk1.6.0_16\, then add C:\jdk1.6.0_16\bin to $PATH before running
'./configure'.
Note also that libtool may fail on mingw32 on Windows if
C:\Windows\system32 is on $PATH before /usr/bin. The fix is to have
the C:\Windows directories at the _end_ of your mingw $PATH.
-------------------------------------------------------------------
carrier <at> sleuthkit <dot> org
Brian Carrier
version: 4.6.0.{build}
environment:
matrix:
- job_name: Windows Build
appveyor_build_worker_image: Visual Studio 2019
- job_name: Linux Build
appveyor_build_worker_image: Ubuntu
- job_name: macOS Build
appveyor_build_worker_image: macos-catalina
matrix:
fast_finish: true
# job-specific configurations
for:
-
matrix:
only:
- job_name: Windows Build
cache:
- C:\Users\appveyor\.ant
- C:\ProgramData\chocolatey\bin
- C:\ProgramData\chocolatey\lib
install:
- ps: choco install nuget.commandline
- ps: choco install ant --ignore-dependencies
- ps: $env:Path="C:\Program Files\Java\jdk1.8.0\bin;$($env:Path);C:\ProgramData\chocolatey\lib\ant"
- set PATH=C:\Python36-x64;%PATH%
environment:
global:
TSK_HOME: "%APPVEYOR_BUILD_FOLDER%"
PYTHON: "C:\\Python36-x64"
JDK_HOME: C:\Program Files\Java\jdk1.8.0
services:
before_build:
- nuget restore win32\libtsk -PackagesDirectory win32\packages
build_script:
- python win32\updateAndBuildAll.py -m
- ps: ant -version
- ps: pushd bindings/java
- cmd: ant -q dist
- ps: popd
- ps: pushd case-uco/java
- cmd: ant -q
- ps: popd
test_script:
- cmd: ant -q -f bindings/java test
-
matrix:
only:
- job_name: Linux Build
build_script:
- ./bootstrap
- ./configure -q
- make -s
-
matrix:
only:
- job_name: macOS Build
build_script:
- ./bootstrap
- ./configure -q
- make -s
# Compile the sub directories
SUBDIRS = jni
tsk_jar = $(top_builddir)/bindings/java/dist/sleuthkit-$(PACKAGE_VERSION).jar
jardir = $(prefix)/share/java
jar_DATA = $(tsk_jar)
if OFFLINE
ant_args=-Doffline=true
else
endif
$(tsk_jar):
all-local:
ant dist $(ant_args)
CLEANFILES = $(tsk_jar)
clean-local:
ant clean
Sleuth Kit Java Bindings
Overview
The core functionality of the Sleuth Kit is in the C/C++ library.
The functionality is made available to Java applications by using
JNI. The design is that a SQLite database is created by the C++
library and then queried by Java code. JNI methods
exist to make the database and to read file content (and other raw
data that is too large to fit into the database).
To use the Java bindings, you must compile the Sleuth Kit datamodel
JAR file and the associated dynamic library from the C/C++ code.
Requirements:
* Java JDK
* Ant
* Jar files as listed in ivy.xml (which will get downloaded automatically)
The following jar files must be on the classpath for building and
running. Version details can be found in ivy.xml. They will be
automatically downloaded if you do not compile in offline mode.
* sqlite-jdbc
* postgresql-jdbc
* c3p0
Building the Dynamic Library (for JNI)
The win32 Visual Studio solution has a tsk_jni project that will
build the JNI dll. To use this project, you will need to have the
JDK_HOME environment variable set to the root directory of the JDK.
On non-windows environments, it should just build as part of running
./configure and make. If the needed Java components are not found,
it will not be built.
This library will depend on libewf, zlib, and other libraries that
TSK was built to depend on. In Windows, the core of TSK (libtsk)
is a static library that is fully embedded in the libtsk_jni.dll
file. On non-Windows environments, libtsk_jni will depend on the
libtsk dynamic library.
Building The Jar File
Build with the default ant target (by running 'ant'). This will
download the required libraries (using ivy) and place the jar file
in the dist folder along with the needed dll and library files.
Using the Jar file and Library
There are two categories of things that need to be in the right place:
- The Jar file needs to be on the CLASSPATH.
- The libewf and zlib dynamic libraries need to be loadable. The TSK
JNI native library is inside of the Jar file and it will depend on the
libewf and zlib libraries. On a Unix-like platform, that means that
if you did a 'make install' with libewf and zlib, you should be OK.
On Windows, you should copy these dlls to a place that is found based
on the rules of Windows library loading. Note that these locations are
based on the rules of Windows library loading and not necessarily on
Java's loading paths.
Refer to the javadocs for details on using the API:
http://sleuthkit.org/sleuthkit/docs/jni-docs/
------------
Brian Carrier
Jan 2014
<?xml version="1.0" encoding="windows-1252"?>
<project name="TSKTestTargets">
<property name="dlls" value="../../win32/x64/Release"/>
<property environment="env"/>
<target name="test" description="Performs regression tests." depends="compile-test, copyTSKLibs">
<junit fork="on" haltonfailure="yes" dir=".">
<env key="path" value="${env.Path}:${dlls}"/>
<sysproperty key="rslt" value="${test-results}"/>
<sysproperty key="gold" value="${test-standards}"/>
<sysproperty key="inpt" value="${test-input}"/>
<classpath refid="libraries"/>
<formatter type="plain" usefile="false"/>
<test name="org.sleuthkit.datamodel.timeline.TimelineTestSuite" />
<test name="org.sleuthkit.datamodel.DataModelTestSuite"/>
</junit>
</target>
<target name="test-rebuild" description="Rebuilds regression tests." depends="compile-test, copyTSKLibs">
<java classname="org.sleuthkit.datamodel.DataModelTestSuite" classpathref="libraries" fork="true" failonerror="true">
<sysproperty key="gold" value="${test-standards}"/>
<sysproperty key="inpt" value="${test-input}"/>
<sysproperty key="types" value="${test-types}"/>
</java>
</target>
<target name="check-native-build" depends="check-native-build-mac,check-native-build-unix"/>
<target name="check-native-build-mac" depends="testTSKLibs" if="tsk_dylib.present">
<uptodate property="native-up-to-date" srcfile="./jni/.libs/libtsk_jni.dylib" targetfile="${amd64}/mac/libtsk_jni.jnilib"/>
</target>
<target name="check-native-build-unix" depends="testTSKLibs" if="tsk_so.present">
<uptodate property="native-up-to-date" srcfile="./jni/.libs/libtsk_jni.so" targetfile="${amd64}/linux/libtsk_jni.so"/>
</target>
<target name="testTSKLibs">
<property environment="env"/>
<available file="./jni/.libs/libtsk_jni.dylib" property="tsk_dylib.present"/>
<available file="./jni/.libs/libtsk_jni.so" property="tsk_so.present"/>
<fail message="JNI native library not built.">
<condition>
<not>
<or>
<isset property="tsk_dylib.present"/>
<isset property="tsk_so.present"/>
</or>
</not>
</condition>
</fail>
<!-- Default location to find zlib and libewf. Overwritten by properties in makefile -->
<property name="lib.z.path" value="/usr/lib"/>
<property name="lib.ewf.path" value="/usr/local/lib"/>
</target>
<!-- OS X -->
<target name="copyTskLibs_dylib" depends="testTSKLibs" if="tsk_dylib.present">
<property environment="env"/>
<copy file="./jni/.libs/libtsk_jni.dylib" tofile="./libtsk_jni.jnilib" overwrite="true"/>
</target>
<target name="copyMacLibs" depends="testTSKLibs" if="tsk_dylib.present">
<property environment="env"/>
<property name="jni.dylib" location="${basedir}/jni/.libs/libtsk_jni.dylib"/>
<property name="jni.jnilib" value="libtsk_jni.jnilib"/>
<!-- x86_64 -->
<copy file="${jni.dylib}" tofile="${x86_64}/mac/${jni.jnilib}" overwrite="true"/>
<!-- amd64 -->
<copy file="${jni.dylib}" tofile="${amd64}/mac/${jni.jnilib}" overwrite="true"/>
</target>
<!-- Non-OS X -->
<target name="copyTskLibs_so" depends="testTSKLibs" if="tsk_so.present">
<property environment="env"/>
<copy file="./jni/.libs/libtsk_jni.so" tofile="./libtsk_jni.so" overwrite="true"/>
</target>
<target name="copyLinuxLibs" depends="testTSKLibs" if="tsk_so.present">
<property environment="env"/>
<property name="jni.so" location="${basedir}/jni/.libs/libtsk_jni.so"/>
<property name="zlib.so" location="${lib.z.path}/libz.so"/>
<property name="libewf.so" location="${lib.ewf.path}/libewf.so"/>
<!-- x86_64 -->
<copy file="${jni.so}" tofile="${x86_64}/linux/libtsk_jni.so" overwrite="true"/>
<!-- amd64 -->
<copy file="${jni.so}" tofile="${amd64}/linux/libtsk_jni.so" overwrite="true"/>
<!-- x86 -->
<copy file="${jni.so}" tofile="${x86}/linux/libtsk_jni.so" overwrite="true"/>
<!-- i386 -->
<copy file="${jni.so}" tofile="${i386}/linux/libtsk_jni.so" overwrite="true"/>
<!-- i586 -->
<copy file="${jni.so}" tofile="${i586}/linux/libtsk_jni.so" overwrite="true"/>
<!-- i686 -->
<copy file="${jni.so}" tofile="${i686}/linux/libtsk_jni.so" overwrite="true"/>
</target>
<target name="copyLibs" depends="copyLinuxLibs,copyMacLibs"/>
<target name="copyLibs-Debug" depends="copyLinuxLibs,copyMacLibs"/>
<target name="copyTSKLibs" depends="copyTskLibs_so,copyTskLibs_dylib">
<!-- depends targets take care of the actual copying since the file differs on OS X and Linux -->
<!-- This assumes that TSK, libewf, and zlib have been installed on the system and those libraries will be with normal loading approaches -->
</target>
</project>
<?xml version="1.0" encoding="windows-1252"?>
<project name="TSKTestTargets">
<property name="dlls" value="../../win32/x64/Release"/>
<property environment="env"/>
<target name="test"
description="Runs the regression tests."
depends="compile-test" >
<junit fork="on" haltonfailure="yes" dir=".">
<env key="path" value="${env.Path};${dlls}"/>
<sysproperty key="rslt" value="${test-results}"/>
<sysproperty key="gold" value="${test-standards}"/>
<sysproperty key="inpt" value="${test-input}"/>
<classpath refid="libraries" />
<formatter type="plain" usefile="false" />
<test name="org.sleuthkit.datamodel.timeline.TimelineTestSuite" />
<test name="org.sleuthkit.datamodel.DataModelTestSuite" />
</junit>
</target>
<target name="test-rebuild"
description="Rebuilds gold standards for tests."
depends="compile-test" >
<java classname="org.sleuthkit.datamodel.DataModelTestSuite" classpathref="libraries" fork="true" failonerror="true">
<sysproperty key="java.library.path" value="${dlls}"/>
<sysproperty key="gold" value="${test-standards}"/>
<sysproperty key="inpt" value="${test-input}"/>
<sysproperty key="types" value="${test-types}"/>
</java>
</target>
<target name="check-native-build" depends="check-build-32,check-build-64"/>
<target name="check-build-32" if="win32.TskLib.exists">
<uptodate property="native-up-to-date" srcfile="${basedir}/../../win32/Release/libtsk_jni.dll"
targetfile="${x86}/win/libtsk_jni.dll"/>
</target>
<target name="check-build-64" if="win64.TskLib.exists">
<uptodate property="native-up-to-date" srcfile="${basedir}/../../win32/x64/Release/libtsk_jni.dll"
targetfile="${amd64}/win/libtsk_jni.dll"/>
</target>
<target name="copyLibs" description="Copy native libs to the correct folder">
<property name="tsk.config" value="Release"/>
<antcall target="copyWinTskLibsToBuildSQLite" />
</target>
<target name="copyLibs-Debug" description="Copy native libs to the correct folder">
<property name="tsk.config" value="Debug"/>
<antcall target="copyWinTskLibsToBuildSQLite" />
</target>
<target name="copyWinTskLibsToBuildSQLite" depends="copyWinTskLibs64ToBuildSQLite, copyWinTskLibs32ToBuildSQLite" description="Copy Windows DLLs to the correct location, SQLite build." />
<target name="checkTskLibDirsSQLite">
<available property="win64.TskLib.exists" type="file" file="${basedir}/../../win32/x64/${tsk.config}/libtsk_jni.dll" />
<available property="win32.TskLib.exists" type="file" file="${basedir}/../../win32/${tsk.config}/libtsk_jni.dll" />
</target>
<target name="copyWinTskLibs64ToBuildSQLite" depends="checkTskLibDirsSQLite" if="win64.TskLib.exists">
<property name="tsk.jni.64" location="${basedir}/../../win32/x64/${tsk.config}/libtsk_jni.dll" />
<copy file="${tsk.jni.64}" todir="${amd64}/win" overwrite="true"/>
<copy file="${tsk.jni.64}" todir="${x86_64}/win" overwrite="true"/>
</target>
<target name="copyWinTskLibs32ToBuildSQLite" depends="checkTskLibDirsSQLite" if="win32.TskLib.exists">
<property name="tsk.jni.32" location="${basedir}/../../win32/${tsk.config}/libtsk_jni.dll" />
<copy file="${tsk.jni.32}" todir="${i386}/win" overwrite="true"/>
<copy file="${tsk.jni.32}" todir="${x86}/win" overwrite="true"/>
<copy file="${tsk.jni.32}" todir="${i586}/win" overwrite="true"/>
<copy file="${tsk.jni.32}" todir="${i686}/win" overwrite="true"/>
</target>
<target name="checkTskLibDirs">
<available property="win64.TskLib.exists" type="file" file="${basedir}/../../win32/x64/${tsk.config}/libtsk_jni.dll" />
<available property="win32.TskLib.exists" type="file" file="${basedir}/../../win32/${tsk.config}/libtsk_jni.dll" />
</target>
</project>
<project xmlns:ivy="antlib:org.apache.ivy.ant" name="DataModel" default="dist" basedir=".">
<description>
Sleuthkit Java DataModel
</description>
<condition property="os.family" value="unix">
<os family="unix"/>
</condition>
<condition property="os.family" value="windows">
<os family="windows"/>
</condition>
<import file="build-${os.family}.xml"/>
<!-- Careful changing this because release-windows.pl updates it by pattern -->
<property name="VERSION" value="4.12.1"/>
<!-- set global properties for this build -->
<property name="default-jar-location" location="/usr/share/java"/>
<property name="src" location="src/org/sleuthkit/datamodel"/>
<property name="sample" location="src/org/sleuthkit/datamodel/Examples"/>
<property name="build" location="build/"/>
<property name="build-datamodel" location="build/org/sleuthkit/datamodel"/>
<property name="dist" location="dist"/>
<property name="lib" location="lib"/>
<property name="test" location="test"/>
<property name="test-standards" location="test/output/gold"/>
<property name="test-results" location="test/output/results"/>
<property name="test-input" location="test/input"/>
<property name="test-types" location="test/org/sleuthkit/datamodel"/>
<property name="native-libs" location="build/NATIVELIBS"/>
<property name="amd64" location="build/NATIVELIBS/amd64"/>
<property name="x86" location="build/NATIVELIBS/x86"/>
<property name="x86_64" location="build/NATIVELIBS/x86_64"/>
<property name="i386" location="build/NATIVELIBS/i386"/>
<property name="i586" location="build/NATIVELIBS/i586"/>
<property name="i686" location="build/NATIVELIBS/i686"/>
<!-- Native library folders for all platforms; only the Windows copy targets are implemented for now -->
<target name="init">
<mkdir dir="${build}"/>
<mkdir dir="${dist}"/>
<mkdir dir="${lib}"/>
<mkdir dir="${test-input}"/>
<mkdir dir="${test-standards}"/>
<mkdir dir="${test-results}"/>
<mkdir dir="${native-libs}"/>
<mkdir dir="${amd64}"/>
<mkdir dir="${amd64}/win"/>
<mkdir dir="${amd64}/mac"/>
<mkdir dir="${amd64}/linux"/>
<mkdir dir="${x86}"/>
<mkdir dir="${x86}/win"/>
<mkdir dir="${x86}/linux"/>
<mkdir dir="${x86_64}"/>
<mkdir dir="${x86_64}/win"/>
<mkdir dir="${x86_64}/mac"/>
<mkdir dir="${x86_64}/linux"/>
<mkdir dir="${i386}"/>
<mkdir dir="${i386}/win"/>
<mkdir dir="${i386}/linux"/>
<mkdir dir="${i586}"/>
<mkdir dir="${i586}/win"/>
<mkdir dir="${i586}/linux"/>
<mkdir dir="${i686}"/>
<mkdir dir="${i686}/win"/>
<mkdir dir="${i686}/linux"/>
</target>
<!-- set classpath for dependencies-->
<target name="set-library-path" description="sets the path of the libraries" depends="set-library-path-online,set-library-path-offline"></target>
<target name="set-library-path-online" description="set this library path when the user is online" unless="offline">
<path id="libraries">
<fileset dir="${lib}">
<include name="*.jar"/>
</fileset>
<pathelement path="${build}"/>
</path>
</target>
<target name="set-library-path-offline" description="set the library path when the user is offline" if="offline">
<path id="libraries">
<fileset dir="${default-jar-location}">
<include name="*.jar"/>
</fileset>
<fileset dir="${lib}">
<include name="*.jar"/>
</fileset>
<pathelement path="${build}"/>
</path>
</target>
<property name="ivy.install.version" value="2.5.0" />
<condition property="ivy.home" value="${env.IVY_HOME}">
<isset property="env.IVY_HOME"/>
</condition>
<property name="ivy.home" value="${user.home}/.ant"/>
<property name="ivy.jar.dir" value="${ivy.home}/lib"/>
<property name="ivy.jar.file" value="${ivy.jar.dir}/ivy.jar"/>
<target name="download-ivy" unless="offline">
<mkdir dir="${ivy.jar.dir}"/>
<get src="https://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.install.version}/ivy-${ivy.install.version}.jar"
dest="${ivy.jar.file}" usetimestamp="true"/>
</target>
<target name="init-ivy" depends="download-ivy">
<path id="ivy.lib.path">
<fileset dir="${ivy.jar.dir}" includes="*.jar"/>
</path>
<taskdef resource="org/apache/ivy/ant/antlib.xml"
uri="antlib:org.apache.ivy.ant" classpathref="ivy.lib.path"/>
</target>
<target name="retrieve-deps" description="retrieve dependencies using ivy" depends="init-ivy" unless="offline">
<ivy:settings file="ivysettings.xml"/>
<ivy:resolve/>
<ivy:retrieve sync="true" pattern="lib/[artifact]-[revision](-[classifier]).[ext]"/>
</target>
<target name="compile-test" depends="compile" description="compile the tests">
<javac encoding="iso-8859-1" debug="on" srcdir="${test}" destdir="${build}" includeantruntime="false">
<classpath refid="libraries"/>
<compilerarg value="-Xlint" />
</javac>
</target>
<target name="compile" depends="init, set-library-path, retrieve-deps" description="compile the source">
<!-- Compile the java code from ${src} into ${build} -->
<javac encoding="iso-8859-1" debug="on" srcdir="${src}" destdir="${build}" classpathref="libraries" includeantruntime="false">
<compilerarg value="-Xlint"/>
</javac>
<!-- Copy Bundle*.properties files into DataModel build directory, so they are included in the .jar -->
<copy todir="${build-datamodel}">
<fileset dir="${src}" includes="**/*.properties"/>
</copy>
<!-- Verify sample compiles -->
<javac encoding="iso-8859-1" debug="on" srcdir="${sample}" destdir="${build}" includeantruntime="false">
<classpath refid="libraries"/>
</javac>
<!--Copy .properties to .properties-MERGED -->
<antcall target="copy-bundle" />
</target>
<target name="dist" depends="check-build, init-ivy, compile, copyLibs" unless="up-to-date" description="generate the distribution">
<!-- Put everything in ${build} into the sleuthkit-${VERSION}.jar file -->
<jar jarfile="${dist}/sleuthkit-${VERSION}.jar" basedir="${build}"/>
</target>
<target name="check-build" depends="check-native-build">
<uptodate property="java-up-to-date" targetfile="${dist}/sleuthkit-${VERSION}.jar">
<srcfiles dir="${src}" includes="**/*.java"/>
</uptodate>
<condition property="up-to-date">
<and>
<isset property="java-up-to-date"/>
<isset property="native-up-to-date"/>
</and>
</condition>
</target>
<target name="Debug" depends="check-build, init-ivy, compile, copyLibs-Debug" unless="up-to-date" description="generate the debug distribution">
<!-- Put everything in ${build} into the sleuthkit-${VERSION}.jar file -->
<jar jarfile="${dist}/sleuthkit-${VERSION}.jar" basedir="${build}"/>
</target>
<target name="jni" depends="compile" description="make the jni.h file">
<javah classpath="${build}" outputFile="jni/dataModel_SleuthkitJNI.h" force="yes">
<class name="org.sleuthkit.datamodel.SleuthkitJNI"/>
</javah>
</target>
<target name="clean" description="clean up">
<delete dir="${build}"/>
<delete dir="${dist}"/>
<delete dir="${lib}"/>
</target>
<target name="javadoc" description="Make the API docs">
<mkdir dir="javadoc"/>
<javadoc sourcepath="src" destdir="javadoc" overview="src/overview.html"/>
</target>
<target name="test-download" description="download test images.">
<mkdir dir="${test-input}"/>
<get src="http://digitalcorpora.org/corp/nps/drives/nps-2009-canon2/nps-2009-canon2-gen6.E01" dest="${test-input}"/>
<get src="http://digitalcorpora.org/corp/nps/drives/nps-2009-ntfs1/ntfs1-gen2.E01" dest="${test-input}"/>
<!--<get src="http://www.cfreds.nist.gov/dfr-images/dfr-16-ext.dd.bz2" dest="${test-input}"/> <bunzip2 src="${test-input}/dfr-16-ext.dd.bz2" /> -->
</target>
<!-- NOTE: test and test-rebuild targets are in the OS-specific files -->
<target name="run-sample" depends="compile" description="run the sample">
<java classname="org.sleuthkit.datamodel.Examples.Sample" fork="true" failonerror="true">
<env key="PATH" path="${env.TEMP}:${env.Path}:${env.TSK_HOME}/win32/x64/Release"/>
<arg value="${image}"/>
<classpath refid="libraries"/>
</java>
</target>
<target name="doxygen" description="build doxygen docs, requires doxygen in PATH">
<exec executable="doxygen" dir="${basedir}/doxygen">
<arg value="Doxyfile"/>
</exec>
</target>
<target name="copy-bundle">
<!-- the externalized strings in 'src' are in both the java files as annotations and in the Bundle.properties files.
The strings get merged during compilation. This target copies each merged file into src so that it can be checked
in and used as a basis for translation efforts -->
<copy todir="src">
<fileset dir="build">
<include name="**/Bundle.properties"/>
</fileset>
<globmapper from="*" to="*-MERGED"/>
</copy>
</target>
</project>
# Doxyfile 1.8.9.1
# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project.
#
# All text after a double hash (##) is considered a comment and is placed in
# front of the TAG it is preceding.
#
# All text after a single hash (#) is considered a comment and will be ignored.
# The format is:
# TAG = value [value, ...]
# For lists, items can also be appended using:
# TAG += value [value, ...]
# Values that contain spaces should be placed between quotes (\" \").
#---------------------------------------------------------------------------
# Project related configuration options
#---------------------------------------------------------------------------
# This tag specifies the encoding used for all characters in the config file
# that follow. The default is UTF-8 which is also the encoding used for all text
# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv
# for the list of possible encodings.
# The default value is: UTF-8.
DOXYFILE_ENCODING = UTF-8
# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
# double-quotes, unless you are using Doxywizard) that should identify the
# project for which the documentation is generated. This name is used in the
# title of most generated pages and in a few other places.
# The default value is: My Project.
PROJECT_NAME = "Sleuth Kit Java Bindings (JNI)"
# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
# could be handy for archiving the generated documentation or if some version
# control system is used.
# NOTE: This is updated by the release-unix.pl script
PROJECT_NUMBER = 4.12.1
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give the viewer a
# quick idea about the purpose of the project. Keep the description short.
PROJECT_BRIEF = "Java bindings for using The Sleuth Kit"
# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
# in the documentation. The maximum height of the logo should not exceed 55
# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
# the logo to the output directory.
PROJECT_LOGO =
# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
# into which the generated documentation will be written. If a relative path is
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.
OUTPUT_DIRECTORY = docs
# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and
# will distribute the generated files over these directories. Enabling this
# option can be useful when feeding doxygen a huge amount of source files, where
# putting all generated files in the same directory would otherwise cause
# performance problems for the file system.
# The default value is: NO.
CREATE_SUBDIRS = NO
# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
# characters to appear in the names of generated files. If set to NO, non-ASCII
# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
# U+3044.
# The default value is: NO.
ALLOW_UNICODE_NAMES = NO
# The OUTPUT_LANGUAGE tag is used to specify the language in which all
# documentation generated by doxygen is written. Doxygen will use this
# information to generate all constant output in the proper language.
# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
# Ukrainian and Vietnamese.
# The default value is: English.
OUTPUT_LANGUAGE = English
# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
# descriptions after the members that are listed in the file and class
# documentation (similar to Javadoc). Set to NO to disable this.
# The default value is: YES.
BRIEF_MEMBER_DESC = YES
# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
# description of a member or function before the detailed description
#
# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
# brief descriptions will be completely suppressed.
# The default value is: YES.
REPEAT_BRIEF = YES
# This tag implements a quasi-intelligent brief description abbreviator that is
# used to form the text in various listings. Each string in this list, if found
# as the leading text of the brief description, will be stripped from the text
# and the result, after processing the whole list, is used as the annotated
# text. Otherwise, the brief description is used as-is. If left blank, the
# following values are used ($name is automatically replaced with the name of
# the entity):The $name class, The $name widget, The $name file, is, provides,
# specifies, contains, represents, a, an and the.
ABBREVIATE_BRIEF =
# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
# doxygen will generate a detailed section even if there is only a brief
# description.
# The default value is: NO.
ALWAYS_DETAILED_SEC = NO
# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
# inherited members of a class in the documentation of that class as if those
# members were ordinary class members. Constructors, destructors and assignment
# operators of the base classes will not be shown.
# The default value is: NO.
INLINE_INHERITED_MEMB = NO
# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
# before a file's name in the file list and in the header files. If set to NO, the
# shortest path that makes the file name unique will be used.
# The default value is: YES.
FULL_PATH_NAMES = YES
# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
# Stripping is only done if one of the specified strings matches the left-hand
# part of the path. The tag can be used to show relative paths in the file list.
# If left blank the directory from which doxygen is run is used as the path to
# strip.
#
# Note that you can specify absolute paths here, but also relative paths, which
# will be relative from the directory where doxygen is started.
# This tag requires that the tag FULL_PATH_NAMES is set to YES.
STRIP_FROM_PATH =
# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
# path mentioned in the documentation of a class, which tells the reader which
# header file to include in order to use a class. If left blank only the name of
# the header file containing the class definition is used. Otherwise one should
# specify the list of include paths that are normally passed to the compiler
# using the -I flag.
STRIP_FROM_INC_PATH =
# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
# less readable) file names. This can be useful if your file system doesn't
# support long names, as on DOS, Mac, or CD-ROM.
# The default value is: NO.
SHORT_NAMES = NO
# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
# first line (until the first dot) of a Javadoc-style comment as the brief
# description. If set to NO, the Javadoc-style will behave just like regular Qt-
# style comments (thus requiring an explicit @brief command for a brief
# description.)
# The default value is: NO.
JAVADOC_AUTOBRIEF = NO
# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
# line (until the first dot) of a Qt-style comment as the brief description. If
# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
# requiring an explicit \brief command for a brief description.)
# The default value is: NO.
QT_AUTOBRIEF = NO
# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
# a brief description. This used to be the default behavior. The new default is
# to treat a multi-line C++ comment block as a detailed description. Set this
# tag to YES if you prefer the old behavior instead.
#
# Note that setting this tag to YES also means that Rational Rose comments are
# not recognized any more.
# The default value is: NO.
MULTILINE_CPP_IS_BRIEF = NO
# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
# documentation from any documented member that it re-implements.
# The default value is: YES.
INHERIT_DOCS = YES
# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
# page for each member. If set to NO, the documentation of a member will be part
# of the file/class/namespace that contains it.
# The default value is: NO.
SEPARATE_MEMBER_PAGES = NO
# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
# uses this value to replace tabs by spaces in code fragments.
# Minimum value: 1, maximum value: 16, default value: 4.
TAB_SIZE = 8
# This tag can be used to specify a number of aliases that act as commands in
# the documentation. An alias has the form:
# name=value
# For example adding
# "sideeffect=@par Side Effects:\n"
# will allow you to put the command \sideeffect (or @sideeffect) in the
# documentation, which will result in a user-defined paragraph with heading
# "Side Effects:". You can put \n's in the value part of an alias to insert
# newlines.
ALIASES =
# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
# will allow you to use the command class in the itcl::class meaning.
TCL_SUBST =
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
# instance, some of the names that are used will be different. The list of all
# members will be omitted, etc.
# The default value is: NO.
OPTIMIZE_OUTPUT_FOR_C = NO
# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
# Python sources only. Doxygen will then generate output that is more tailored
# for that language. For instance, namespaces will be presented as packages,
# qualified scopes will look different, etc.
# The default value is: NO.
OPTIMIZE_OUTPUT_JAVA = YES
# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
# sources. Doxygen will then generate output that is tailored for Fortran.
# The default value is: NO.
OPTIMIZE_FOR_FORTRAN = NO
# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
# sources. Doxygen will then generate output that is tailored for VHDL.
# The default value is: NO.
OPTIMIZE_OUTPUT_VHDL = NO
# Doxygen selects the parser to use depending on the extension of the files it
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
# Fortran. In the latter case the parser tries to guess whether the code is fixed
# or free formatted code, this is the default for Fortran type files), VHDL. For
# instance to make doxygen treat .inc files as Fortran files (default is PHP),
# and .f files as C (default is Fortran), use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.
EXTENSION_MAPPING =
# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
# documentation. See http://daringfireball.net/projects/markdown/ for details.
# The output of markdown processing is further processed by doxygen, so you can
# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
# case of backward compatibility issues.
# The default value is: YES.
MARKDOWN_SUPPORT = YES
# When enabled doxygen tries to link words that correspond to documented
# classes, or namespaces to their corresponding documentation. Such a link can
# be prevented in individual cases by putting a % sign in front of the word or
# globally by setting AUTOLINK_SUPPORT to NO.
# The default value is: YES.
AUTOLINK_SUPPORT = YES
# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
# to include (a tag file for) the STL sources as input, then you should set this
# tag to YES in order to let doxygen match function declarations and
# definitions whose arguments contain STL classes (e.g. func(std::string);
# versus func(std::string) {}). This also makes the inheritance and collaboration
# diagrams that involve STL classes more complete and accurate.
# The default value is: NO.
BUILTIN_STL_SUPPORT = NO
# If you use Microsoft's C++/CLI language, you should set this option to YES to
# enable parsing support.
# The default value is: NO.
CPP_CLI_SUPPORT = NO
# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
# will parse them like normal C++ but will assume all classes use public instead
# of private inheritance when no explicit protection keyword is present.
# The default value is: NO.
SIP_SUPPORT = NO
# For Microsoft's IDL there are propget and propput attributes to indicate
# getter and setter methods for a property. Setting this option to YES will make
# doxygen replace the get and set methods by a property in the documentation.
# This will only work if the methods are indeed getting or setting a simple
# type. If this is not the case, or you want to show the methods anyway, you
# should set this option to NO.
# The default value is: YES.
IDL_PROPERTY_SUPPORT = YES
# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
# tag is set to YES then doxygen will reuse the documentation of the first
# member in the group (if any) for the other members of the group. By default
# all members of a group must be documented explicitly.
# The default value is: NO.
DISTRIBUTE_GROUP_DOC = NO
# Set the SUBGROUPING tag to YES to allow class member groups of the same type
# (for instance a group of public functions) to be put as a subgroup of that
# type (e.g. under the Public Functions section). Set it to NO to prevent
# subgrouping. Alternatively, this can be done per class using the
# \nosubgrouping command.
# The default value is: YES.
SUBGROUPING = YES
# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
# are shown inside the group in which they are included (e.g. using \ingroup)
# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
# and RTF).
#
# Note that this feature does not work in combination with
# SEPARATE_MEMBER_PAGES.
# The default value is: NO.
INLINE_GROUPED_CLASSES = NO
# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
# with only public data fields or simple typedef fields will be shown inline in
# the documentation of the scope in which they are defined (i.e. file,
# namespace, or group documentation), provided this scope is documented. If set
# to NO, structs, classes, and unions are shown on a separate page (for HTML and
# Man pages) or section (for LaTeX and RTF).
# The default value is: NO.
INLINE_SIMPLE_STRUCTS = NO
# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
# enum is documented as struct, union, or enum with the name of the typedef. So
# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
# with name TypeT. When disabled the typedef will appear as a member of a file,
# namespace, or class. And the struct will be named TypeS. This can typically be
# useful for C code in case the coding convention dictates that all compound
# types are typedef'ed and only the typedef is referenced, never the tag name.
# The default value is: NO.
TYPEDEF_HIDES_STRUCT = NO
# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
# cache is used to resolve symbols given their name and scope. Since this can be
# an expensive process and often the same symbol appears multiple times in the
# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
# doxygen will become slower. If the cache is too large, memory is wasted. The
# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
# symbols. At the end of a run doxygen will report the cache usage and suggest
# the optimal cache size from a speed point of view.
# Minimum value: 0, maximum value: 9, default value: 0.
LOOKUP_CACHE_SIZE = 0
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
# documentation are documented, even if no documentation was available. Private
# class members and static file members will be hidden unless the
# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
# Note: This will also disable the warnings about undocumented members that are
# normally produced when WARNINGS is set to YES.
# The default value is: NO.
EXTRACT_ALL = YES
# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
# be included in the documentation.
# The default value is: NO.
EXTRACT_PRIVATE = NO
# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
# scope will be included in the documentation.
# The default value is: NO.
EXTRACT_PACKAGE = NO
# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
# included in the documentation.
# The default value is: NO.
EXTRACT_STATIC = YES
# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
# locally in source files will be included in the documentation. If set to NO,
# only classes defined in header files are included. Does not have any effect
# for Java sources.
# The default value is: YES.
EXTRACT_LOCAL_CLASSES = YES
# This flag is only useful for Objective-C code. If set to YES, local methods,
# which are defined in the implementation section but not in the interface are
# included in the documentation. If set to NO, only methods in the interface are
# included.
# The default value is: NO.
EXTRACT_LOCAL_METHODS = NO
# If this flag is set to YES, the members of anonymous namespaces will be
# extracted and appear in the documentation as a namespace called
# 'anonymous_namespace{file}', where file will be replaced with the base name of
# the file that contains the anonymous namespace. By default, anonymous
# namespaces are hidden.
# The default value is: NO.
EXTRACT_ANON_NSPACES = NO
# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
# section is generated. This option has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.
HIDE_UNDOC_MEMBERS = NO
# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
# undocumented classes that are normally visible in the class hierarchy. If set
# to NO, these classes will be included in the various overviews. This option
# has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.
HIDE_UNDOC_CLASSES = NO
# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# (class|struct|union) declarations. If set to NO, these declarations will be
# included in the documentation.
# The default value is: NO.
HIDE_FRIEND_COMPOUNDS = NO
# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
# documentation blocks found inside the body of a function. If set to NO, these
# blocks will be appended to the function's detailed documentation block.
# The default value is: NO.
HIDE_IN_BODY_DOCS = NO
# The INTERNAL_DOCS tag determines if documentation that is typed after a
# \internal command is included. If the tag is set to NO then the documentation
# will be excluded. Set it to YES to include the internal documentation.
# The default value is: NO.
INTERNAL_DOCS = NO
# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
# in case and if your file system supports case sensitive file names. Windows
# and Mac users are advised to set this option to NO.
# The default value is: system dependent.
CASE_SENSE_NAMES = NO
# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
# their full class and namespace scopes in the documentation. If set to YES, the
# scope will be hidden.
# The default value is: NO.
HIDE_SCOPE_NAMES = NO
# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
# append additional text to a page's title, such as Class Reference. If set to
# YES the compound reference will be hidden.
# The default value is: NO.
HIDE_COMPOUND_REFERENCE= NO
# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
# the files that are included by a file in the documentation of that file.
# The default value is: YES.
SHOW_INCLUDE_FILES = YES
# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
# grouped member an include statement to the documentation, telling the reader
# which file to include in order to use the member.
# The default value is: NO.
SHOW_GROUPED_MEMB_INC = NO
# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
# files with double quotes in the documentation rather than with sharp brackets.
# The default value is: NO.
FORCE_LOCAL_INCLUDES = NO
# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
# documentation for inline members.
# The default value is: YES.
INLINE_INFO = YES
# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
# (detailed) documentation of file and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order.
# The default value is: YES.
SORT_MEMBER_DOCS = YES
# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
# descriptions of file, namespace and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order. Note that
# this will also influence the order of the classes in the class list.
# The default value is: NO.
SORT_BRIEF_DOCS = YES
# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
# (brief and detailed) documentation of class members so that constructors and
# destructors are listed first. If set to NO the constructors will appear in the
# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
# member documentation.
# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
# detailed member documentation.
# The default value is: NO.
SORT_MEMBERS_CTORS_1ST = YES
# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
# of group names into alphabetical order. If set to NO the group names will
# appear in their defined order.
# The default value is: NO.
SORT_GROUP_NAMES = NO
# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
# fully-qualified names, including namespaces. If set to NO, the class list will
# be sorted only by class name, not including the namespace part.
# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
# Note: This option applies only to the class list, not to the alphabetical
# list.
# The default value is: NO.
SORT_BY_SCOPE_NAME = YES
# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
# type resolution of all parameters of a function it will reject a match between
# the prototype and the implementation of a member function even if there is
# only one candidate or it is obvious which candidate to choose by doing a
# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
# accept a match between prototype and implementation in such cases.
# The default value is: NO.
STRICT_PROTO_MATCHING = NO
# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
# list. This list is created by putting \todo commands in the documentation.
# The default value is: YES.
GENERATE_TODOLIST = YES
# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
# list. This list is created by putting \test commands in the documentation.
# The default value is: YES.
GENERATE_TESTLIST = YES
# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
# list. This list is created by putting \bug commands in the documentation.
# The default value is: YES.
GENERATE_BUGLIST = YES
# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
# the deprecated list. This list is created by putting \deprecated commands in
# the documentation.
# The default value is: YES.
GENERATE_DEPRECATEDLIST= YES
# The ENABLED_SECTIONS tag can be used to enable conditional documentation
# sections, marked by \if <section_label> ... \endif and \cond <section_label>
# ... \endcond blocks.
ENABLED_SECTIONS =
# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
# initial value of a variable or macro / define can have for it to appear in the
# documentation. If the initializer consists of more lines than specified here
# it will be hidden. Use a value of 0 to hide initializers completely. The
# appearance of the value of individual variables and macros / defines can be
# controlled using \showinitializer or \hideinitializer command in the
# documentation regardless of this setting.
# Minimum value: 0, maximum value: 10000, default value: 30.
MAX_INITIALIZER_LINES = 30
# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
# the bottom of the documentation of classes and structs. If set to YES, the
# list will mention the files that were used to generate the documentation.
# The default value is: YES.
SHOW_USED_FILES = YES
# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
# will remove the Files entry from the Quick Index and from the Folder Tree View
# (if specified).
# The default value is: YES.
SHOW_FILES = YES
# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
# page. This will remove the Namespaces entry from the Quick Index and from the
# Folder Tree View (if specified).
# The default value is: YES.
SHOW_NAMESPACES = YES
# The FILE_VERSION_FILTER tag can be used to specify a program or script that
# doxygen should invoke to get the current version for each file (typically from
# the version control system). Doxygen will invoke the program by executing (via
# popen()) the command <command> <input-file>, where <command> is the value of
# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
# provided by doxygen. Whatever the program writes to standard output is used as
# the file version. For an example see the documentation.
FILE_VERSION_FILTER =
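# Illustrative example (not part of the standard template; assumes git is
# available on the PATH): doxygen appends the input file name to the command,
# and git prints the abbreviated hash of the last commit touching that file,
# which doxygen then uses as the file version:
# FILE_VERSION_FILTER = "git log -n 1 --format=%h"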
# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
# by doxygen. The layout file controls the global structure of the generated
# output files in an output format independent way. To create the layout file
# that represents doxygen's defaults, run doxygen with the -l option. You can
# optionally specify a file name after the option, if omitted DoxygenLayout.xml
# will be used as the name of the layout file.
#
# Note that if you run doxygen from a directory containing a file called
# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
# tag is left empty.
LAYOUT_FILE =
# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
# the reference definitions. This must be a list of .bib files. The .bib
# extension is automatically appended if omitted. This requires the bibtex tool
# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
# For LaTeX the style of the bibliography can be controlled using
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.
CITE_BIB_FILES =
#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------
# The QUIET tag can be used to turn on/off the messages that are generated to
# standard output by doxygen. If QUIET is set to YES this implies that the
# messages are off.
# The default value is: NO.
QUIET = NO
# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
# this implies that the warnings are on.
#
# Tip: Turn warnings on while writing the documentation.
# The default value is: YES.
WARNINGS = YES
# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
# will automatically be disabled.
# The default value is: YES.
WARN_IF_UNDOCUMENTED = YES
# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
# in a documented function, or documenting parameters that don't exist or using
# markup commands wrongly.
# The default value is: YES.
WARN_IF_DOC_ERROR = YES
# The WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.
WARN_NO_PARAMDOC = NO
# The WARN_FORMAT tag determines the format of the warning messages that doxygen
# can produce. The string should contain the $file, $line, and $text tags, which
# will be replaced by the file and line number from which the warning originated
# and the warning text. Optionally the format may contain $version, which will
# be replaced by the version of the file (if it could be obtained via
# FILE_VERSION_FILTER)
# The default value is: $file:$line: $text.
WARN_FORMAT = "$file:$line: $text "
# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written. If left blank the output is written to standard
# error (stderr).
WARN_LOGFILE =
#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------
# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces.
# Note: If this tag is empty the current directory is searched.
INPUT = main.dox \
query_database.dox \
blackboard.dox \
artifact_catalog.dox \
insert_and_update_database.dox \
communications.dox \
datasources.dox \
os_accounts.dox \
schema/schema_list.dox \
schema/db_schema_8_6.dox \
schema/db_schema_9_0.dox \
schema/db_schema_9_1.dox \
../src
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: http://www.gnu.org/software/libiconv) for the list of
# possible encodings.
# The default value is: UTF-8.
INPUT_ENCODING = UTF-8
# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank the
# following patterns are tested: *.c, *.cc, *.cxx, *.cpp, *.c++, *.java, *.ii,
# *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, *.hh, *.hxx, *.hpp,
# *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, *.m, *.markdown,
# *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf,
# *.qsf, *.as and *.js.
FILE_PATTERNS = *.java
# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.
# The default value is: NO.
RECURSIVE = YES
# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
#
# Note that relative paths are relative to the directory from which doxygen is
# run.
EXCLUDE =
# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
# from the input.
# The default value is: NO.
EXCLUDE_SYMLINKS = NO
# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*
EXCLUDE_PATTERNS =
# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
# output. The symbol name can be a fully qualified name, a word, or if the
# wildcard * is used, a substring. Examples: ANamespace, AClass,
# AClass::ANamespace, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*
EXCLUDE_SYMBOLS =
# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).
EXAMPLE_PATH =
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.
EXAMPLE_PATTERNS =
# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
# irrespective of the value of the RECURSIVE tag.
# The default value is: NO.
EXAMPLE_RECURSIVE = NO
# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).
IMAGE_PATH = images/
# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
# <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are added
# or removed, the anchors will not be placed correctly.
INPUT_FILTER =
# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match. The filters are a list of the form: pattern=filter
# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
# patterns match the file name, INPUT_FILTER is applied.
FILTER_PATTERNS =
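# Illustrative example (my_py_filter is a hypothetical filter script, not a
# real tool): apply a filter only to Python sources, leaving all other input
# files unfiltered:
# FILTER_PATTERNS = *.py=my_py_filter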
# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
# The default value is: NO.
FILTER_SOURCE_FILES = NO
# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
# it is also possible to disable source filtering for a specific pattern using
# *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
FILTER_SOURCE_PATTERNS =
# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html). This can be useful if you have a project on for instance GitHub
# and want to reuse the introduction page also for the doxygen output.
USE_MDFILE_AS_MAINPAGE =
#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------
# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
# generated. Documented entities will be cross-referenced with these sources.
#
# Note: To get rid of all source code in the generated output, make sure that
# also VERBATIM_HEADERS is set to NO.
# The default value is: NO.
SOURCE_BROWSER = YES
# Setting the INLINE_SOURCES tag to YES will include the body of functions,
# classes and enums directly into the documentation.
# The default value is: NO.
INLINE_SOURCES = NO
# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++ and
# Fortran comments will always remain visible.
# The default value is: YES.
STRIP_CODE_COMMENTS = YES
# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.
# The default value is: NO.
REFERENCED_BY_RELATION = YES
# If the REFERENCES_RELATION tag is set to YES then for each documented function
# all documented entities called/used by that function will be listed.
# The default value is: NO.
REFERENCES_RELATION = YES
# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
# to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
# link to the documentation.
# The default value is: YES.
REFERENCES_LINK_SOURCE = YES
# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as prototype,
# brief description and links to the definition and documentation. Since this
# will make the HTML file larger and loading of large files a bit slower, you
# can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.
SOURCE_TOOLTIPS = YES
# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen's built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.
USE_HTAGS = NO
# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.
VERBATIM_HEADERS = YES
# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
# cost of reduced performance. This can be particularly helpful with template
# rich C++ code for which doxygen's built-in parser lacks the necessary type
# information.
# Note: The availability of this option depends on whether or not doxygen was
# compiled with the --with-libclang option.
# The default value is: NO.
CLANG_ASSISTED_PARSING = NO
# If clang assisted parsing is enabled you can provide the compiler with command
# line options that you would normally use when invoking the compiler. Note that
# the include paths will already be set by doxygen for the files and directories
# specified with INPUT and INCLUDE_PATH.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
CLANG_OPTIONS =
#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------
# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.
ALPHABETICAL_INDEX = YES
# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
COLS_IN_ALPHA_INDEX = 5
# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
# while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
IGNORE_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------
# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
# The default value is: YES.
GENERATE_HTML = YES
# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.
# NOTE: This is updated by the release-unix.pl script
HTML_OUTPUT = jni-docs/4.12.1/
# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FILE_EXTENSION = .html
# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard header.
#
# For valid HTML output the header file must include any scripts and style
# sheets that doxygen needs, which depend on the configuration options used
# (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start with a
# default header using
# doxygen -w html new_header.html new_footer.html new_stylesheet.css
# YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate the
# default header when upgrading to a newer version of doxygen. For a description
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_HEADER =
# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
# footer. See HTML_HEADER for more information on how to generate a default
# footer and what special commands can be used inside the footer. See also
# section "Doxygen usage" for information on how to generate the default footer
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FOOTER = footer.html
# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
# the HTML output. If left blank doxygen will generate a default style sheet.
# See also section "Doxygen usage" for information on how to generate the style
# sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_STYLESHEET =
# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_EXTRA_STYLESHEET =
# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
# files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_EXTRA_FILES =
# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# is purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_COLORSTYLE_HUE = 220
# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
# in the HTML output. For a value of 0 the output will use grayscales only. A
# value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_COLORSTYLE_SAT = 100
# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80 represents
# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
# change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_COLORSTYLE_GAMMA = 80
# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
# page will contain the date and time when the page was generated. Setting this
# to NO can help when comparing the output of multiple runs.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_TIMESTAMP = YES
# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_DYNAMIC_SECTIONS = YES
# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on. Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible (unless
# a fully collapsed tree already exceeds this amount). So setting the number of
# entries to 1 will produce a fully collapsed tree by default. 0 is a special
# value representing an infinite number of entries and will result in a fully
# expanded tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_INDEX_NUM_ENTRIES = 100
# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: http://developer.apple.com/tools/xcode/), introduced with
# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
# for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_DOCSET = YES
# This tag determines the name of the docset feed. A documentation feed provides
# an umbrella under which multiple documentation sets from a single provider
# (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.
DOCSET_FEEDNAME = "Doxygen docs"
# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.
DOCSET_BUNDLE_ID = org.doxygen.Doxygen
# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g. com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.
DOCSET_PUBLISHER_ID = org.doxygen.Publisher
# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.
DOCSET_PUBLISHER_NAME = Publisher
# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files are now used as the Windows 98 help format, and will replace the old
# Windows help format (.hlp) on all Windows platforms in the future. Compressed
# HTML files also contain an index, a table of contents, and you can search for
# words in the documentation. The HTML workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_HTMLHELP = NO
# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
CHM_FILE =
# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
HHC_LOCATION =
# The GENERATE_CHI flag controls whether a separate .chi index file is
# generated (YES) or the index is included in the master .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
GENERATE_CHI = NO
# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
CHM_INDEX_ENCODING =
# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
BINARY_TOC = NO
# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
TOC_EXPAND = NO
# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_QHP = NO
# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.
QCH_FILE =
# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_NAMESPACE = org.doxygen.Project
# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-
# folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_VIRTUAL_FOLDER = doc
# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_NAME =
# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_CUST_FILTER_ATTRS =
# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.
QHP_SECT_FILTER_ATTRS =
# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHG_LOCATION =
# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated that, together with the HTML files, form an Eclipse help plugin. To
# install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files need
# to be copied into the plugins directory of Eclipse. The name of the directory
# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
# After copying, Eclipse needs to be restarted before the help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_ECLIPSEHELP = NO
# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
ECLIPSE_DOC_ID = org.doxygen.Project
# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
# of each HTML page. A value of NO enables the index and the value YES disables
# it. Since the tabs in the index contain the same information as the navigation
# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
DISABLE_INDEX = NO
# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the tag
# value is set to YES, a side panel will be generated containing a tree-like
# index structure (just like the one that is generated for HTML Help). For this
# to work a browser that supports JavaScript, DHTML, CSS and frames is required
# (i.e. any modern browser). Windows users are probably better off using the
# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
# further fine-tune the look of the index. As an example, the default style
# sheet generated by doxygen has an example that shows how to put an image at
# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
# the same information as the tab index, you could consider setting
# DISABLE_INDEX to YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_TREEVIEW = YES
# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.
ENUM_VALUES_PER_LINE = 4
# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.
TREEVIEW_WIDTH = 250
# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
EXT_LINKS_IN_WINDOW = YES
# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.
FORMULA_FONTSIZE = 10
# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files in
# the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.
FORMULA_TRANSPARENT = YES
# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output. When
# enabled you may also need to install MathJax separately and configure the path
# to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
USE_MATHJAX = NO
# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_FORMAT = HTML-CSS
# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_EXTENSIONS =
# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_CODEFILE =
# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the JavaScript-based search engine can be slow; in that
# case enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
# search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
# key> to jump into the search results window, the results can be navigated
# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
# the search. The filter options can be selected when the cursor is inside the
# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
# to select a filter and <Enter> or <escape> to activate or cancel the filter
# option.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.
SEARCHENGINE = YES
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a web server instead of a web client using Javascript. There
# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
# setting. When disabled, doxygen will generate a PHP script for searching and
# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
# and searching needs to be provided by external tools. See the section
# "External Indexing and Searching" for details.
# The default value is: NO.
# This tag requires that the tag SEARCHENGINE is set to YES.
SERVER_BASED_SEARCH = NO
# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
# script for searching. Instead the search results are written to an XML file
# which needs to be processed by an external indexer. Doxygen will invoke an
# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
# search results.
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: http://xapian.org/).
#
# See the section "External Indexing and Searching" for details.
# The default value is: NO.
# This tag requires that the tag SEARCHENGINE is set to YES.
EXTERNAL_SEARCH = NO
# The SEARCHENGINE_URL should point to a search engine hosted by a web server
# which will return the search results when EXTERNAL_SEARCH is enabled.
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: http://xapian.org/). See the section "External Indexing and
# Searching" for details.
# This tag requires that the tag SEARCHENGINE is set to YES.
SEARCHENGINE_URL =
# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
# search data is written to a file for indexing by an external tool. With the
# SEARCHDATA_FILE tag the name of this file can be specified.
# The default file is: searchdata.xml.
# This tag requires that the tag SEARCHENGINE is set to YES.
SEARCHDATA_FILE = searchdata.xml
# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
# projects and redirect the results back to the right project.
# This tag requires that the tag SEARCHENGINE is set to YES.
EXTERNAL_SEARCH_ID =
# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
# projects other than the one defined by this configuration file, but that are
# all added to the same external search index. Each project needs to have a
# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
# a project to the relative location where its documentation can be found. The
# format is:
# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
# This tag requires that the tag SEARCHENGINE is set to YES.
EXTRA_SEARCH_MAPPINGS =
#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------
# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
# The default value is: YES.
GENERATE_LATEX = NO
# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_OUTPUT =
# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
# invoked.
#
# Note that when enabling USE_PDFLATEX this option is only used for generating
# bitmaps for formulas in the HTML output, but not in the Makefile that is
# written to the output directory.
# The default file is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_CMD_NAME = latex
# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
# index for LaTeX.
# The default file is: makeindex.
# This tag requires that the tag GENERATE_LATEX is set to YES.
MAKEINDEX_CMD_NAME = makeindex
# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
# documents. This may be useful for small projects and may help to save some
# trees in general.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.
COMPACT_LATEX = NO
# The PAPER_TYPE tag can be used to set the paper type that is used by the
# printer.
# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
# 14 inches) and executive (7.25 x 10.5 inches).
# The default value is: a4.
# This tag requires that the tag GENERATE_LATEX is set to YES.
PAPER_TYPE             = a4
# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
# that should be included in the LaTeX output. To get the times font for
# instance you can specify
# EXTRA_PACKAGES=times
# If left blank no extra packages will be included.
# This tag requires that the tag GENERATE_LATEX is set to YES.
EXTRA_PACKAGES =
# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
# generated LaTeX document. The header should contain everything until the first
# chapter. If it is left blank doxygen will generate a standard header. See
# section "Doxygen usage" for information on how to let doxygen write the
# default header to a separate file.
#
# Note: Only use a user-defined header if you know what you are doing! The
# following commands have a special meaning inside the header: $title,
# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
# string, for the replacement values of the other commands the user is referred
# to HTML_HEADER.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_HEADER =
# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
# generated LaTeX document. The footer should contain everything after the last
# chapter. If it is left blank doxygen will generate a standard footer. See
# LATEX_HEADER for more information on how to generate a default footer and what
# special commands can be used inside the footer.
#
# Note: Only use a user-defined footer if you know what you are doing!
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_FOOTER =
# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# LaTeX style sheets that are included after the standard style sheets created
# by doxygen. Using this option one can overrule certain style aspects. Doxygen
# will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list).
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_EXTRA_STYLESHEET =
# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the LATEX_OUTPUT output
# directory. Note that the files will be copied as-is; there are no commands or
# markers available.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_EXTRA_FILES =
# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
# contain links (just like the HTML output) instead of page references. This
# makes the output suitable for online browsing using a PDF viewer.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.
PDF_HYPERLINKS = YES
# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
# the PDF file directly from the LaTeX files. Set this option to YES to get
# higher quality PDF documentation.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.
USE_PDFLATEX = NO
# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
# command to the generated LaTeX files. This will instruct LaTeX to keep running
# if errors occur, instead of asking the user for help. This option is also used
# when generating formulas in HTML.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_BATCHMODE = NO
# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
# index chapters (such as File Index, Compound Index, etc.) in the output.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_HIDE_INDICES = NO
# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
# code with syntax highlighting in the LaTeX output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_SOURCE_CODE = NO
# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
# bibliography, e.g. plainnat, or ieeetr. See
# http://en.wikipedia.org/wiki/BibTeX and \cite for more info.
# The default value is: plain.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_BIB_STYLE = plain
#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------
# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
# RTF output is optimized for Word 97 and may not look too pretty with other RTF
# readers/editors.
# The default value is: NO.
GENERATE_RTF = NO
# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: rtf.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_OUTPUT =
# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
# documents. This may be useful for small projects and may help to save some
# trees in general.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.
COMPACT_RTF = NO
# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
# contain hyperlink fields. The RTF file will contain links (just like the HTML
# output) instead of page references. This makes the output suitable for online
# browsing using Word or some other Word compatible readers that support those
# fields.
#
# Note: WordPad (write) and others do not support links.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_HYPERLINKS = NO
# Load stylesheet definitions from file. Syntax is similar to doxygen's config
# file, i.e. a series of assignments. You only have to provide replacements,
# missing definitions are set to their default value.
#
# See also section "Doxygen usage" for information on how to generate the
# default style sheet that doxygen normally uses.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_STYLESHEET_FILE =
# Set optional variables used in the generation of an RTF document. Syntax is
# similar to doxygen's config file. A template extensions file can be generated
# using doxygen -e rtf extensionFile.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_EXTENSIONS_FILE =
# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
# with syntax highlighting in the RTF output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_SOURCE_CODE = NO
#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------
# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
# classes and files.
# The default value is: NO.
GENERATE_MAN = NO
# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it. A directory man3 will be created inside the directory specified by
# MAN_OUTPUT.
# The default directory is: man.
# This tag requires that the tag GENERATE_MAN is set to YES.
MAN_OUTPUT =
# The MAN_EXTENSION tag determines the extension that is added to the generated
# man pages. In case the manual section does not start with a number, the number
# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
# optional.
# The default value is: .3.
# This tag requires that the tag GENERATE_MAN is set to YES.
MAN_EXTENSION = .3
# The MAN_SUBDIR tag determines the name of the directory created within
# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
# MAN_EXTENSION with the initial . removed.
# This tag requires that the tag GENERATE_MAN is set to YES.
MAN_SUBDIR =
# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
# will generate one additional man file for each entity documented in the real
# man page(s). These additional files only source the real man page, but without
# them the man command would be unable to find the correct page.
# The default value is: NO.
# This tag requires that the tag GENERATE_MAN is set to YES.
MAN_LINKS = NO
#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------
# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
# captures the structure of the code including all documentation.
# The default value is: NO.
GENERATE_XML = NO
# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: xml.
# This tag requires that the tag GENERATE_XML is set to YES.
XML_OUTPUT = xml
# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
# listings (including syntax highlighting and cross-referencing information) to
# the XML output. Note that enabling this will significantly increase the size
# of the XML output.
# The default value is: YES.
# This tag requires that the tag GENERATE_XML is set to YES.
XML_PROGRAMLISTING = YES
#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------
# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
# that can be used to generate PDF.
# The default value is: NO.
GENERATE_DOCBOOK = NO
# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
# front of it.
# The default directory is: docbook.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
DOCBOOK_OUTPUT = docbook
# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the
# program listings (including syntax highlighting and cross-referencing
# information) in the DOCBOOK output. Note that enabling this will significantly
# increase the size of the DOCBOOK output.
# The default value is: NO.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
DOCBOOK_PROGRAMLISTING = NO
#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------
# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
# AutoGen Definitions (see http://autogen.sf.net) file that captures the
# structure of the code including all documentation. Note that this feature is
# still experimental and incomplete at the moment.
# The default value is: NO.
GENERATE_AUTOGEN_DEF = NO
#---------------------------------------------------------------------------
# Configuration options related to the Perl module output
#---------------------------------------------------------------------------
# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
# file that captures the structure of the code including all documentation.
#
# Note that this feature is still experimental and incomplete at the moment.
# The default value is: NO.
GENERATE_PERLMOD = NO
# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
# output from the Perl module output.
# The default value is: NO.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.
PERLMOD_LATEX = NO
# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
# formatted so it can be read by a human. This is useful if you want to
# understand what is going on. On the other hand, if this tag is set to NO, the
# size of the Perl module output will be much smaller and Perl will parse it
# just the same.
# The default value is: YES.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.
PERLMOD_PRETTY = YES
# The names of the make variables in the generated doxyrules.make file are
# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
# so different doxyrules.make files included by the same Makefile don't
# overwrite each other's variables.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.
PERLMOD_MAKEVAR_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------
# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
# C-preprocessor directives found in the sources and include files.
# The default value is: YES.
ENABLE_PREPROCESSING = YES
# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
# in the source code. If set to NO, only conditional compilation will be
# performed. Macro expansion can be done in a controlled way by setting
# EXPAND_ONLY_PREDEF to YES.
# The default value is: NO.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
MACRO_EXPANSION = YES
# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
# the macro expansion is limited to the macros specified with the PREDEFINED and
# EXPAND_AS_DEFINED tags.
# The default value is: NO.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
EXPAND_ONLY_PREDEF = YES
# If the SEARCH_INCLUDES tag is set to YES, the include files in the
# INCLUDE_PATH will be searched if a #include is found.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
SEARCH_INCLUDES = YES
# The INCLUDE_PATH tag can be used to specify one or more directories that
# contain include files that are not input files but should be processed by the
# preprocessor.
# This tag requires that the tag SEARCH_INCLUDES is set to YES.
INCLUDE_PATH =
# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
# patterns (like *.h and *.hpp) to filter out the header files in the
# directories. If left blank, the patterns specified with FILE_PATTERNS will be
# used.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
INCLUDE_FILE_PATTERNS =
# The PREDEFINED tag can be used to specify one or more macro names that are
# defined before the preprocessor is started (similar to the -D option of e.g.
# gcc). The argument of the tag is a list of macros of the form: name or
# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
# is assumed. To prevent a macro definition from being undefined via #undef or
# recursively expanded use the := operator instead of the = operator.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
PREDEFINED =
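# As a hedged, illustrative example only (the macro names below are
# hypothetical and not taken from this project's sources), one might predefine
# a documentation-build guard and blank out an export decoration so it does not
# clutter the generated signatures:
# PREDEFINED = DOCS_BUILD \
#              "MY_API_EXPORT="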
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
# macro definition that is found in the sources will be used. Use the PREDEFINED
# tag if you want to use a different macro definition that overrules the
# definition found in the source code.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
EXPAND_AS_DEFINED =
# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
# remove all references to function-like macros that are alone on a line, have
# an all uppercase name, and do not end with a semicolon. Such function macros
# are typically used for boiler-plate code, and will confuse the parser if not
# removed.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
SKIP_FUNCTION_MACROS = YES
#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------
# The TAGFILES tag can be used to specify one or more tag files. For each tag
# file the location of the external documentation should be added. The format of
# a tag file without this location is as follows:
# TAGFILES = file1 file2 ...
# Adding location for the tag files is done as follows:
# TAGFILES = file1=loc1 "file2 = loc2" ...
# where loc1 and loc2 can be relative or absolute paths or URLs. See the
# section "Linking to external documentation" for more information about the use
# of tag files.
# Note: Each tag file must have a unique name (where the name does NOT include
# the path). If a tag file is not located in the directory in which doxygen is
# run, you must also specify the path to the tagfile here.
TAGFILES =
# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
# tag file that is based on the input files it reads. See section "Linking to
# external documentation" for more information about the usage of tag files.
GENERATE_TAGFILE = tskjni_doxygen.tag
# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
# the class index. If set to NO, only the inherited external classes will be
# listed.
# The default value is: NO.
ALLEXTERNALS = NO
# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
# in the modules index. If set to NO, only the current project's groups will be
# listed.
# The default value is: YES.
EXTERNAL_GROUPS = YES
# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
# the related pages index. If set to NO, only the current project's pages will
# be listed.
# The default value is: YES.
EXTERNAL_PAGES = YES
# The PERL_PATH should be the absolute path and name of the perl script
# interpreter (i.e. the result of 'which perl').
# The default file (with absolute path) is: /usr/bin/perl.
PERL_PATH = /usr/bin/perl
#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------
# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
# NO turns the diagrams off. Note that this option also works with HAVE_DOT
# disabled, but it is recommended to install and use dot, since it yields more
# powerful graphs.
# The default value is: YES.
CLASS_DIAGRAMS = NO
# You can define message sequence charts within doxygen comments using the \msc
# command. Doxygen will then run the mscgen tool (see:
# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the
# documentation. The MSCGEN_PATH tag allows you to specify the directory where
# the mscgen tool resides. If left empty the tool is assumed to be found in the
# default search path.
MSCGEN_PATH =
# You can include diagrams made with dia in doxygen documentation. Doxygen will
# then run dia to produce the diagram and insert it in the documentation. The
# DIA_PATH tag allows you to specify the directory where the dia binary resides.
# If left empty dia is assumed to be found in the default search path.
DIA_PATH =
# If set to YES the inheritance and collaboration graphs will hide inheritance
# and usage relations if the target is undocumented or is not a class.
# The default value is: YES.
HIDE_UNDOC_RELATIONS = YES
# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
# available from the path. This tool is part of Graphviz (see:
# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
# Bell Labs. The other options in this section have no effect if this option is
# set to NO
# The default value is: NO.
HAVE_DOT = NO
# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
# to run in parallel. When set to 0 doxygen will base this on the number of
# processors available in the system. You can set it explicitly to a value
# larger than 0 to get control over the balance between CPU load and processing
# speed.
# Minimum value: 0, maximum value: 32, default value: 0.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_NUM_THREADS = 0
# When you want a differently looking font in the dot files that doxygen
# generates you can specify the font name using DOT_FONTNAME. You need to make
# sure dot is able to find the font, which can be done by putting it in a
# standard location or by setting the DOTFONTPATH environment variable or by
# setting DOT_FONTPATH to the directory containing the font.
# The default value is: Helvetica.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_FONTNAME =
# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
# dot graphs.
# Minimum value: 4, maximum value: 24, default value: 10.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_FONTSIZE = 10
# By default doxygen will tell dot to use the default font as specified with
# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
# the path where dot can find it using this tag.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_FONTPATH =
# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
# each documented class showing the direct and indirect inheritance relations.
# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
CLASS_GRAPH = YES
# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
# graph for each documented class showing the direct and indirect implementation
# dependencies (inheritance, containment, and class references variables) of the
# class with other documented classes.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
COLLABORATION_GRAPH = YES
# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
# groups, showing the direct groups dependencies.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
GROUP_GRAPHS = YES
# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
# collaboration diagrams in a style similar to the OMG's Unified Modeling
# Language.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
UML_LOOK = NO
# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
# class node. If there are many fields or methods and many nodes the graph may
# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
# number of items for each type to make the size more manageable. Set this to 0
# for no limit. Note that the threshold may be exceeded by 50% before the limit
# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
# but if the number exceeds 15, the total amount of fields shown is limited to
# 10.
# Minimum value: 0, maximum value: 100, default value: 10.
# This tag requires that the tag HAVE_DOT is set to YES.
UML_LIMIT_NUM_FIELDS = 10
# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
# collaboration graphs will show the relations between templates and their
# instances.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
TEMPLATE_RELATIONS = YES
# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
# YES then doxygen will generate a graph for each documented file showing the
# direct and indirect include dependencies of the file with other documented
# files.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
INCLUDE_GRAPH = YES
# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
# set to YES then doxygen will generate a graph for each documented file showing
# the direct and indirect include dependencies of the file with other documented
# files.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
INCLUDED_BY_GRAPH = YES
# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
# dependency graph for every global function or class method.
#
# Note that enabling this option will significantly increase the time of a run.
# So in most cases it will be better to enable call graphs for selected
# functions only using the \callgraph command.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
CALL_GRAPH = NO
# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
# dependency graph for every global function or class method.
#
# Note that enabling this option will significantly increase the time of a run.
# So in most cases it will be better to enable caller graphs for selected
# functions only using the \callergraph command.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
CALLER_GRAPH = NO
# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
# graphical hierarchy of all classes instead of a textual one.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
GRAPHICAL_HIERARCHY = YES
# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
# dependencies a directory has on other directories in a graphical way. The
# dependency relations are determined by the #include relations between the
# files in the directories.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
DIRECTORY_GRAPH = YES
# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
# generated by dot.
# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
# to make the SVG files visible in IE 9+ (other browsers do not have this
# requirement).
# Possible values are: png, jpg, gif and svg.
# The default value is: png.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_IMAGE_FORMAT = png
# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
# enable generation of interactive SVG images that allow zooming and panning.
#
# Note that this requires a modern browser other than Internet Explorer. Tested
# and working are Firefox, Chrome, Safari, and Opera.
# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
# the SVG files visible. Older versions of IE do not have SVG support.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
INTERACTIVE_SVG = NO
# The DOT_PATH tag can be used to specify the path where the dot tool can be
# found. If left blank, it is assumed the dot tool can be found in the path.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_PATH =
# The DOTFILE_DIRS tag can be used to specify one or more directories that
# contain dot files that are included in the documentation (see the \dotfile
# command).
# This tag requires that the tag HAVE_DOT is set to YES.
DOTFILE_DIRS =
# The MSCFILE_DIRS tag can be used to specify one or more directories that
# contain msc files that are included in the documentation (see the \mscfile
# command).
MSCFILE_DIRS =
# The DIAFILE_DIRS tag can be used to specify one or more directories that
# contain dia files that are included in the documentation (see the \diafile
# command).
DIAFILE_DIRS =
# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
# path where java can find the plantuml.jar file. If left blank, it is assumed
# PlantUML is not used or called during a preprocessing step. Doxygen will
# generate a warning when it encounters a \startuml command in this case and
# will not generate output for the diagram.
PLANTUML_JAR_PATH =
# When using plantuml, the specified paths are searched for files specified by
# the !include statement in a plantuml block.
PLANTUML_INCLUDE_PATH =
# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
# that will be shown in the graph. If the number of nodes in a graph becomes
# larger than this value, doxygen will truncate the graph, which is visualized
# by representing a node as a red box. Note that if the number of direct
# children of the root node in a graph is already larger than
# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that
# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
# Minimum value: 0, maximum value: 10000, default value: 50.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_GRAPH_MAX_NODES = 50
# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
# generated by dot. A depth value of 3 means that only nodes reachable from the
# root by following a path via at most 3 edges will be shown. Nodes that lie
# further from the root node will be omitted. Note that setting this option to 1
# or 2 may greatly reduce the computation time needed for large code bases. Also
# note that the size of a graph can be further restricted by
# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
# Minimum value: 0, maximum value: 1000, default value: 0.
# This tag requires that the tag HAVE_DOT is set to YES.
MAX_DOT_GRAPH_DEPTH = 0
# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
# background. This is disabled by default, because dot on Windows does not seem
# to support this out of the box.
#
# Warning: Depending on the platform used, enabling this option may lead to
# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
# read).
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_TRANSPARENT = NO
# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
# files in one run (i.e. multiple -o and -T options on the command line). This
# makes dot run faster, but since only newer versions of dot (>1.8.10) support
# this, this feature is disabled by default.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_MULTI_TARGETS = NO
# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
# explaining the meaning of the various boxes and arrows in the dot generated
# graphs.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
GENERATE_LEGEND = YES
# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
# files that are used to generate the various graphs.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.
DOT_CLEANUP = YES
/*! \page artifact_catalog_page Standard Artifacts Catalog
# Introduction
This document reflects current standard usage of artifact and attribute types for posting analysis results to the case blackboard in Autopsy. Refer to \ref mod_bbpage for more background on the blackboard and how to make artifacts.
The catalog section below has one entry for each standard artifact type divided by categories. Each entry lists the required and optional attributes of artifacts of the type. The category types are:
- \ref art_catalog_analysis "Analysis Result": Result from an analysis technique on a given object with a given configuration. Includes Conclusion, Relevance Score, and Confidence.
- \ref art_catalog_data "Data Artifact": Data that was originally embedded by an application/OS in a file or other data container.
NOTE:
- While we have listed some attributes as "Required", nothing will enforce that they exist. Modules that use artifacts from the blackboard should assume that some of the attributes may not actually exist.
- You are not limited to the attributes listed below for each artifact. Attributes are listed below as "Optional" if at least one, but not all, Autopsy modules create them. If you want to store data that is not listed below, use an existing attribute type or make your own.
For the full list of types, refer to:
- org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE
- org.sleuthkit.datamodel.BlackboardAttribute.ATTRIBUTE_TYPE
\section art_catalog_analysis Analysis Result Types
---
## TSK_DATA_SOURCE_USAGE
Describes how a data source was used, e.g., as a SIM card or an OS drive (such as for Windows or Android).
### REQUIRED ATTRIBUTES
- TSK_DESCRIPTION (Description of the usage, e.g., "OS Drive (Windows Vista)").
---
## TSK_ENCRYPTION_DETECTED
An indication that the content is encrypted.
### REQUIRED ATTRIBUTES
- TSK_COMMENT (A comment on the encryption, e.g., encryption type or password)
---
## TSK_ENCRYPTION_SUSPECTED
An indication that the content is likely encrypted.
### REQUIRED ATTRIBUTES
- TSK_COMMENT (Reason for suspecting encryption)
---
## TSK_EXT_MISMATCH_DETECTED
An indication that the registered extensions for a file's MIME type do not match the file's extension.
### REQUIRED ATTRIBUTES
None
---
## TSK_FACE_DETECTED
An indication that a human face was detected in some content.
### REQUIRED ATTRIBUTES
None
---
## TSK_HASHSET_HIT
Indicates that the MD5 hash of a file matches a set of known MD5s (possibly user defined).
### REQUIRED ATTRIBUTES
- TSK_SET_NAME (Name of hashset containing the file's MD5)
### OPTIONAL ATTRIBUTES
- TSK_COMMENT (Additional comments about the hit)
---
## TSK_INTERESTING_ITEM
Indicates that the source item matches some set of criteria which deem it interesting. Items with this meta artifact will be brought to the attention of the user.
### REQUIRED ATTRIBUTES
- TSK_SET_NAME (The name of the set of criteria which deemed this item interesting)
### OPTIONAL ATTRIBUTES
- TSK_COMMENT (Comment on the reason that the source item is interesting)
- TSK_CATEGORY (The set membership rule that was satisfied)
- TSK_ASSOCIATED_ARTIFACT (The source artifact when the source item is an artifact)
---
## TSK_KEYWORD_HIT
Indication that the source artifact or file contains a keyword. Keywords are grouped into named sets.
### REQUIRED ATTRIBUTES
- TSK_KEYWORD (Keyword that was found in the artifact or file)
- TSK_KEYWORD_SEARCH_TYPE (Specifies the type of match, e.g., an exact match, a substring match, or a regex match)
- TSK_SET_NAME (The set name that the keyword was contained in)
- TSK_KEYWORD_REGEXP (The regular expression that matched, only required for regex matches)
- TSK_ASSOCIATED_ARTIFACT (Only required if the keyword hit source is an artifact)
### OPTIONAL ATTRIBUTES
- TSK_KEYWORD_PREVIEW (Snippet of text around keyword)
---
## TSK_MALWARE
Indicates the source file's malware status based on the score. A notable score means the file was detected as malware; a score of none means the file was determined not to be malware.
### REQUIRED ATTRIBUTES
None
---
## TSK_OBJECT_DETECTED
Indicates that an object was detected in a media file. Typically used by computer vision software to classify images.
### REQUIRED ATTRIBUTES
- TSK_COMMENT (What was detected)
### OPTIONAL ATTRIBUTES
- TSK_DESCRIPTION (Additional comments about the object or observer, e.g., what detected the object)
---
## TSK_PREVIOUSLY_NOTABLE
Indicates that the file or artifact was previously tagged as "Notable" in another Autopsy case.
### REQUIRED ATTRIBUTES
- TSK_CORRELATION_TYPE (The correlation type that was previously tagged as notable)
- TSK_CORRELATION_VALUE (The correlation value that was previously tagged as notable)
- TSK_OTHER_CASES (The list of cases containing this file or artifact at the time the artifact is created)
---
## TSK_PREVIOUSLY_SEEN
Indicates that the file or artifact was previously seen in another Autopsy case.
### REQUIRED ATTRIBUTES
- TSK_CORRELATION_TYPE (The correlation type that was previously seen)
- TSK_CORRELATION_VALUE (The correlation value that was previously seen)
- TSK_OTHER_CASES (The list of cases containing this file or artifact at the time the artifact is created)
---
## TSK_PREVIOUSLY_UNSEEN
Indicates that the file or artifact was previously unseen in another Autopsy case.
### REQUIRED ATTRIBUTES
- TSK_CORRELATION_TYPE (The correlation type that was previously seen)
- TSK_CORRELATION_VALUE (The correlation value that was previously seen)
---
## TSK_USER_CONTENT_SUSPECTED
An indication that some media file content was generated by the user.
### REQUIRED ATTRIBUTES
- TSK_COMMENT (The reason why user-generated content is suspected)
---
## TSK_VERIFICATION_FAILED
An indication that some data did not pass verification. One example would be verifying a SHA-1 hash.
### REQUIRED ATTRIBUTES
- TSK_COMMENT (Reason for failure, what failed)
---
## TSK_WEB_ACCOUNT_TYPE
A web account type entry.
### REQUIRED ATTRIBUTES
- TSK_DOMAIN (Domain of the URL)
- TSK_TEXT (Indicates type of account (admin/moderator/user) and possible platform)
- TSK_URL (URL indicating the user has an account on this domain)
---
## TSK_WEB_CATEGORIZATION
The categorization of a web host by usage type, e.g., mail.google.com would correspond to Web Email.
### REQUIRED ATTRIBUTES
- TSK_NAME (The usage category identifier, e.g. Web Email)
- TSK_DOMAIN (The domain of the host, e.g. google.com)
- TSK_HOST (The full host, e.g. mail.google.com)
---
## TSK_YARA_HIT
Indicates that some content of the file matched a YARA rule.
### REQUIRED ATTRIBUTES
- TSK_RULE (The rule that was a hit for this file)
- TSK_SET_NAME (Name of the rule set containing the matching YARA rule)
---
## TSK_METADATA_EXIF
EXIF metadata found in an image or audio file.
### REQUIRED ATTRIBUTES
- At least one of:
- TSK_DATETIME_CREATED (Creation date of the file, in seconds since 1970-01-01T00:00:00Z)
- TSK_DEVICE_MAKE (Device make, generally the manufacturer, e.g., Apple)
- TSK_DEVICE_MODEL (Device model, generally the product, e.g., iPhone)
- TSK_GEO_ALTITUDE (The camera's altitude when the image/audio was taken)
- TSK_GEO_LATITUDE (The camera's latitude when the image/audio was taken)
- TSK_GEO_LONGITUDE (The camera's longitude when the image/audio was taken)
<br><br>
\section art_catalog_data Data Artifact Types
---
## TSK_ACCOUNT
Details about a credit card or communications account.
### REQUIRED ATTRIBUTES
- TSK_ACCOUNT_TYPE (Type of the account, e.g., Skype)
- One of:
 - TSK_ID (Unique identifier of the account)
 - TSK_CARD_NUMBER (Credit card number)
### OPTIONAL ATTRIBUTES
- TSK_KEYWORD_SEARCH_DOCUMENT_ID (Document ID of the Solr document that contains the TSK_CARD_NUMBER when the account is a credit card discovered by the Autopsy regular expression search for credit cards)
- TSK_SET_NAME (The keyword list name, i.e., "Credit Card Numbers", when the account is a credit card discovered by the Autopsy regular expression search for credit cards)
---
## TSK_ASSOCIATED_OBJECT
Provides a backwards link to an artifact that references the parent file of this artifact. Example usage is that a downloaded file will have this artifact and it will point back to the TSK_WEB_DOWNLOAD artifact that is associated with a browser's SQLite database. See \ref jni_bb_associated_object.
### REQUIRED ATTRIBUTES
- TSK_ASSOCIATED_ARTIFACT (Artifact ID of associated artifact)
---
## TSK_BACKUP_EVENT
Details about system/application/file backups.
### REQUIRED ATTRIBUTES
- TSK_DATETIME_START (Date/Time the backup happened)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_END (Date/Time the backup ended)
---
## TSK_BLUETOOTH_ADAPTER
Details about a Bluetooth adapter.
### REQUIRED ATTRIBUTES
- TSK_MAC_ADDRESS (MAC address of the Bluetooth adapter)
- TSK_NAME (Name of the device)
- TSK_DATETIME (Time device was last seen)
- TSK_DEVICE_ID (UUID of the device)
---
## TSK_BLUETOOTH_PAIRING
Details about a Bluetooth pairing event.
### REQUIRED ATTRIBUTES
- TSK_DEVICE_NAME (Name of the Bluetooth device)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (When the pairing occurred, in seconds since 1970-01-01T00:00:00Z)
- TSK_MAC_ADDRESS (MAC address of the Bluetooth device)
- TSK_DEVICE_ID (UUID of the device)
- TSK_DATETIME_ACCESSED (Last Connection Time)
---
## TSK_CALENDAR_ENTRY
A calendar entry in an application file or database.
### REQUIRED ATTRIBUTES
- TSK_CALENDAR_ENTRY_TYPE (E.g., Reminder, Event, Birthday, etc.)
- TSK_DATETIME_START (Start of the entry, in seconds since 1970-01-01T00:00:00Z)
### OPTIONAL ATTRIBUTES
- TSK_DESCRIPTION (Description of the entry, such as a note)
- TSK_LOCATION (Location of the entry, such as an address)
- TSK_DATETIME_END (End of the entry, in seconds since 1970-01-01T00:00:00Z)
---
## TSK_CALLLOG
A call log record in an application file or database.
### REQUIRED ATTRIBUTES
- At least one of:
- TSK_PHONE_NUMBER (A phone number involved in this call record)
- TSK_PHONE_NUMBER_FROM (The phone number that initiated the call)
 - TSK_PHONE_NUMBER_TO (The phone number that received the call)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_END (When the call ended, in seconds since 1970-01-01T00:00:00Z)
- TSK_DATETIME_START (When the call started, in seconds since 1970-01-01T00:00:00Z)
- TSK_DIRECTION (The communication direction, i.e., Incoming or Outgoing)
- TSK_NAME (The name of the caller or callee)
---
## TSK_CLIPBOARD_CONTENT
Data found on the operating system's clipboard.
### REQUIRED ATTRIBUTES
- TSK_TEXT (Text on the clipboard)
---
## TSK_CONTACT
A contact book entry in an application file or database.
### REQUIRED ATTRIBUTES
- At least one of:
- TSK_EMAIL (An email address associated with the contact)
- TSK_EMAIL_HOME (An email address that is known to be the personal email of the contact)
- TSK_EMAIL_OFFICE (An email address that is known to be the work email of the contact)
- TSK_PHONE_NUMBER (A phone number associated with the contact)
- TSK_PHONE_NUMBER_HOME (A phone number that is known to be the home phone number of the contact)
- TSK_PHONE_NUMBER_MOBILE (A phone number that is known to be the mobile phone number of the contact)
- TSK_PHONE_NUMBER_OFFICE (A phone number that is known to be the work phone number of the contact)
- TSK_NAME (Contact name)
### OPTIONAL ATTRIBUTES
- TSK_ORGANIZATION (An organization that the contact belongs to, e.g., Stanford University, Google)
- TSK_URL (e.g., the URL of an image if the contact is a vCard)
---
## TSK_DELETED_PROG
Programs that have been deleted from the system.
### REQUIRED ATTRIBUTES
- TSK_DATETIME (Date/Time the program was deleted)
- TSK_PROG_NAME (Program that was deleted)
### OPTIONAL ATTRIBUTES
- TSK_PATH (Location where the program resided before being deleted)
---
## TSK_DEVICE_ATTACHED
Details about a device that was physically attached to a data source.
### REQUIRED ATTRIBUTES
- TSK_DEVICE_ID (String that uniquely identifies the attached device)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (When the device was attached, in seconds since 1970-01-01T00:00:00Z)
- TSK_DEVICE_MAKE (Make of the attached device, e.g., Apple)
- TSK_DEVICE_MODEL (Model of the attached device, e.g., iPhone 6s)
- TSK_MAC_ADDRESS (MAC address of the attached device)
---
## TSK_DEVICE_INFO
Details about a device data source.
### REQUIRED ATTRIBUTES
- At least one of:
- TSK_IMEI (IMEI number of the device)
- TSK_ICCID (ICCID number of the SIM)
- TSK_IMSI (IMSI number of the device)
---
## TSK_EMAIL_MSG
An email message found in an application file or database.
### OPTIONAL ATTRIBUTES
- At least one of:
- TSK_EMAIL_CONTENT_HTML (Representation of email as HTML)
- TSK_EMAIL_CONTENT_PLAIN (Representation of email as plain text)
- TSK_EMAIL_CONTENT_RTF (Representation of email as RTF)
- TSK_DATETIME_RCVD (When email message was received, in seconds since 1970-01-01T00:00:00Z)
- TSK_DATETIME_SENT (When email message was sent, in seconds since 1970-01-01T00:00:00Z)
- TSK_EMAIL_BCC (BCC'd recipient, multiple recipients should be in a comma separated string)
- TSK_EMAIL_CC (CC'd recipient, multiple recipients should be in a comma separated string)
- TSK_EMAIL_FROM (Email address that sent the message)
- TSK_EMAIL_TO (Email addresses the email message was sent to, multiple emails should be in a comma separated string)
- TSK_HEADERS (Transport message headers)
- TSK_MSG_ID (Message ID supplied by the email application)
- TSK_PATH (Path in the data source to the file containing the email message)
- TSK_SUBJECT (Subject of the email message)
- TSK_THREAD_ID (ID specified by the analysis module to group emails into threads for display purposes)
---
## TSK_EXTRACTED_TEXT
Text extracted from some content.
### REQUIRED ATTRIBUTES
- TSK_TEXT (The extracted text)
---
## TSK_GEN_INFO
A generic information artifact. Each content object will have at most one TSK_GEN_INFO artifact, which is easily accessed through org.sleuthkit.datamodel.AbstractContent.getGenInfoArtifact() and related methods. The TSK_GEN_INFO object is useful for storing values related to the content object without making a new artifact type.
### REQUIRED ATTRIBUTES
None
### OPTIONAL ATTRIBUTES
- TSK_HASH_PHOTODNA (The PhotoDNA hash of an image)
---
## TSK_GPS_AREA
An outline of an area.
### REQUIRED ATTRIBUTES
- TSK_GEO_WAYPOINTS (JSON list of waypoints. Use org.sleuthkit.datamodel.blackboardutils.attributes.GeoWaypoints class to create/process)
### OPTIONAL ATTRIBUTES
- TSK_LOCATION (Location of the route, e.g., a state or city)
- TSK_NAME (Name of the area, e.g., Minute Man Trail)
- TSK_PROG_NAME (Name of the application that was the source of the GPS route)
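The TSK_GEO_WAYPOINTS attribute is stored as JSON, but modules should not build that JSON by hand. As a sketch (assuming the blackboardutils helper classes; verify the GeoWaypoints and BlackboardJsonAttrUtil signatures against the current Javadoc, and note that `MODULE_NAME` is a placeholder for the calling module's name):

```java
// Sketch only -- uses the org.sleuthkit.datamodel.blackboardutils helpers
// to serialize waypoints into the JSON stored in TSK_GEO_WAYPOINTS.
GeoWaypoints waypoints = new GeoWaypoints();
waypoints.addPoint(new GeoWaypoints.Waypoint(42.3601, -71.0589, null, "Boston"));
waypoints.addPoint(new GeoWaypoints.Waypoint(42.4430, -71.2290, null, "Lexington"));
BlackboardAttribute attr = BlackboardJsonAttrUtil.toAttribute(
        new BlackboardAttribute.Type(
                BlackboardAttribute.ATTRIBUTE_TYPE.TSK_GEO_WAYPOINTS),
        MODULE_NAME, waypoints);
```

The same helper pattern applies to TSK_GPS_ROUTE below, and GeoTrackPoints plays the analogous role for TSK_GPS_TRACK.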
---
## TSK_GPS_BOOKMARK
A bookmarked GPS location or saved waypoint.
### REQUIRED ATTRIBUTES
- TSK_GEO_LATITUDE (The latitude value of the bookmark)
- TSK_GEO_LONGITUDE (The longitude value of the bookmark)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (Timestamp of the GPS bookmark, in seconds since 1970-01-01T00:00:00Z)
- TSK_GEO_ALTITUDE (The altitude of the specified latitude and longitude)
- TSK_LOCATION (The address of the bookmark. Ex: 123 Main St.)
- TSK_NAME (The name of the bookmark. Ex: Boston)
- TSK_PROG_NAME (Name of the application that was the source of the GPS bookmark)
---
## TSK_GPS_LAST_KNOWN_LOCATION
The last known location of a GPS-connected device. This may be from a perspective other than the device.
### REQUIRED ATTRIBUTES
- TSK_GEO_LATITUDE (Last known latitude value)
- TSK_GEO_LONGITUDE (Last known longitude value)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (Timestamp of the last known location, in seconds since 1970-01-01T00:00:00Z)
- TSK_GEO_ALTITUDE (Altitude of the last known latitude and longitude)
- TSK_LOCATION (The address of the last known location. Ex: 123 Main St.)
- TSK_NAME (The name of the last known location. Ex: Boston)
---
## TSK_GPS_ROUTE
A GPS route.
### REQUIRED ATTRIBUTES
- TSK_GEO_WAYPOINTS (JSON list of waypoints. Use org.sleuthkit.datamodel.blackboardutils.attributes.GeoWaypoints class to create/process)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (Timestamp of the GPS route, in seconds since 1970-01-01T00:00:00Z)
- TSK_LOCATION (Location of the route, e.g., a state or city)
- TSK_NAME (Name of the route, e.g., Minute Man Trail)
- TSK_PROG_NAME (Name of the application that was the source of the GPS route)
---
## TSK_GPS_SEARCH
A GPS location that was known to have been searched by the device or user.
### REQUIRED ATTRIBUTES
- TSK_GEO_LATITUDE (The GPS latitude value that was searched)
- TSK_GEO_LONGITUDE (The GPS longitude value that was searched)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (Timestamp of the GPS search, in seconds since 1970-01-01T00:00:00Z)
- TSK_GEO_ALTITUDE (Altitude of the searched GPS coordinates)
- TSK_LOCATION (The address of the target location, e.g., 123 Main St.)
- TSK_NAME (The name of the target location, e.g., Boston)
---
## TSK_GPS_TRACK
A Global Positioning System (GPS) track artifact records the track, or path, of a GPS-enabled device as a connected series of track points. A track point is a location in a geographic coordinate system with latitude, longitude, and altitude (elevation) axes.
### REQUIRED ATTRIBUTES
- TSK_GEO_TRACKPOINTS (JSON list of trackpoints. Use org.sleuthkit.datamodel.blackboardutils.attributes.GeoTrackPoints class to create/process)
### OPTIONAL ATTRIBUTES
- TSK_NAME (The name of the trackpoint set. Ex: Boston)
- TSK_PROG_NAME (Name of application containing the GPS trackpoint set)
---
## TSK_INSTALLED_PROG
Details about an installed program.
### REQUIRED ATTRIBUTES
- TSK_PROG_NAME (Name of the installed program)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (When the program was installed, in seconds since 1970-01-01T00:00:00Z)
- TSK_PATH (Path to the installed program in the data source)
- TSK_PATH_SOURCE (Path to an Android Package Kit (APK) file for an Android program)
- TSK_PERMISSIONS (Permissions of the installed program)
- TSK_VERSION (Version number of the program)
---
## TSK_MESSAGE
A message that is found in some content.
### REQUIRED ATTRIBUTES
- TSK_TEXT (The text of the message)
- TSK_MESSAGE_TYPE (E.g., WhatsApp Message, Skype Message, etc.)
### OPTIONAL ATTRIBUTES
- TSK_ATTACHMENTS (Attachments - use the org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper class to add an attachment)
- TSK_DATETIME (Timestamp the message was sent or received, in seconds since 1970-01-01T00:00:00Z)
- TSK_DIRECTION (Direction of the message, e.g., incoming or outgoing)
- TSK_EMAIL_FROM (Email address of the sender)
- TSK_EMAIL_TO (Email address of the recipient)
- TSK_PHONE_NUMBER (A phone number associated with the message)
- TSK_PHONE_NUMBER_FROM (The phone number of the sender)
- TSK_PHONE_NUMBER_TO (The phone number of the recipient)
- TSK_READ_STATUS (Status of the message, e.g., read or unread)
- TSK_SUBJECT (Subject of the message)
- TSK_THREAD_ID (ID for keeping threaded messages together)
---
## TSK_METADATA
General metadata for some content.
### REQUIRED ATTRIBUTES
None
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_CREATED (Timestamp the document was created)
- TSK_DATETIME_MODIFIED (Timestamp the document was modified)
- TSK_DESCRIPTION (Title of the document)
- TSK_LAST_PRINTED_DATETIME (Timestamp when document was last printed)
- TSK_ORGANIZATION (Organization/Company who owns the document)
- TSK_OWNER (Author of the document)
- TSK_PROG_NAME (Program used to create the document)
- TSK_USER_ID (Last author of the document)
- TSK_VERSION (Version number of the program used to create the document)
---
## TSK_OS_INFO
Details about an operating system recovered from the data source.
### REQUIRED ATTRIBUTES
- TSK_PROG_NAME (Name of the OS)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (Datetime of the OS installation, in seconds since 1970-01-01T00:00:00Z)
- TSK_DOMAIN (Windows domain for a Windows OS)
- TSK_ORGANIZATION (Registered organization for the OS installation)
- TSK_OWNER (Registered owner of the OS installation)
- TSK_PATH (System root for the OS installation)
- TSK_PROCESSOR_ARCHITECTURE (Details about the processor architecture as captured by the OS)
- TSK_NAME (Name of computer that the OS was installed on)
- TSK_PRODUCT_ID (Product ID for the OS installation)
- TSK_TEMP_DIR (Temp directory for the OS)
- TSK_VERSION (Version of the OS)
---
## TSK_PROG_NOTIFICATIONS
Notifications to the user.
### REQUIRED ATTRIBUTES
- TSK_DATETIME (When the notification was sent/received)
- TSK_PROG_NAME (Program to send/receive notification)
### OPTIONAL ATTRIBUTES
- TSK_TITLE (Title of the notification)
- TSK_VALUE (Message being sent or received)
---
## TSK_PROG_RUN
The number of times a program/application was run.
### REQUIRED ATTRIBUTES
- TSK_PROG_NAME (Name of the application)
### OPTIONAL ATTRIBUTES
- TSK_COUNT (Number of times program was run, should be at least 1)
- TSK_DATETIME (Timestamp that application was run last, in seconds since 1970-01-01T00:00:00Z)
- TSK_BYTES_SENT (Number of bytes sent)
- TSK_BYTES_RECEIVED (Number of bytes received)
- TSK_USER_NAME (User who executed the program)
- TSK_COMMENT (Source of the attribute)
- TSK_PATH (Path of the executable program)
---
## TSK_RECENT_OBJECT
Indicates recently accessed content. Examples: Recent Documents or Recent Downloads menu items on Windows.
### REQUIRED ATTRIBUTES
- TSK_PATH (Path to the recent object content in the data source)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_ACCESSED (Timestamp that the content was last accessed at, in seconds since 1970-01-01T00:00:00Z)
- TSK_PATH_ID (ID of the file instance in the data source)
- TSK_PROG_NAME (Application or application extractor that stored this object as recent)
- TSK_NAME (If found in the registry, the name of the attribute)
- TSK_VALUE (If found in the registry, the value of the attribute)
- TSK_COMMENT (What the source of the attribute may be)
---
## TSK_REMOTE_DRIVE
Details about a remote drive found in the data source.
### REQUIRED ATTRIBUTES
- TSK_REMOTE_PATH (Fully qualified UNC path to the remote drive)
### OPTIONAL ATTRIBUTES
- TSK_LOCAL_PATH (The local path of this remote drive. This path may be mapped, e.g., 'D:/' or 'F:/')
---
## TSK_SCREEN_SHOTS
Screenshots from a device or application.
### REQUIRED ATTRIBUTES
- TSK_DATETIME (When the screenshot was taken)
- TSK_PROG_NAME (Program that took the screenshot)
### OPTIONAL ATTRIBUTES
- TSK_PATH (Location of screenshot)
---
## TSK_SERVICE_ACCOUNT
An application or web user account.
### REQUIRED ATTRIBUTES
- TSK_PROG_NAME (The name of the service, e.g., Netflix)
- TSK_USER_ID (User ID of the service account)
### OPTIONAL ATTRIBUTES
- TSK_CATEGORY (Type of service, e.g., Web, TV, Messaging)
- TSK_DATETIME_CREATED (When this service account was created, in seconds since 1970-01-01T00:00:00Z)
- TSK_DESCRIPTION (Name of the mailbox, if this is an email account)
- TSK_DOMAIN (The sign on realm)
- TSK_EMAIL_REPLYTO (Email reply to address, if this is an email account)
- TSK_NAME (Display name of the user account)
- TSK_PASSWORD (Password of the service account)
- TSK_PATH (Path to the application installation, if it is local)
- TSK_SERVER_NAME (Name of the mail server, if this is an email account)
- TSK_URL (URL of the service, if the service is a Web service)
- TSK_URL_DECODED (Decoded URL of the service, if the service is a Web service)
- TSK_USER_NAME (User name of the service account)
---
## TSK_SIM_ATTACHED
Details about a SIM card that was physically attached to the device.
### REQUIRED ATTRIBUTES
- At least one of:
- TSK_ICCID (ICCID number of this SIM card)
- TSK_IMSI (IMSI number of this SIM card)
---
## TSK_SPEED_DIAL_ENTRY
A speed dial entry.
### REQUIRED ATTRIBUTES
- TSK_PHONE_NUMBER (Phone number of the speed dial entry)
### OPTIONAL ATTRIBUTES
- TSK_NAME_PERSON (Contact name of the speed dial entry)
- TSK_SHORTCUT (Keyboard shortcut)
---
## TSK_TL_EVENT
An event in the timeline of a case.
### REQUIRED ATTRIBUTES
- TSK_TL_EVENT_TYPE (The type of the event; the ID of a org.sleuthkit.datamodel.TimelineEventType)
- TSK_DATETIME (When the event occurred, in seconds since 1970-01-01T00:00:00Z)
- TSK_DESCRIPTION (A description of the event)
---
## TSK_USER_DEVICE_EVENT
Activity on the system or from an application. Example: a mobile device being locked and unlocked.
### REQUIRED ATTRIBUTES
- TSK_DATETIME_START (When activity started)
### OPTIONAL ATTRIBUTES
- TSK_ACTIVITY_TYPE (Activity type, e.g., On or Off)
- TSK_DATETIME_END (When activity ended)
- TSK_PROG_NAME (Name of the program doing the activity)
- TSK_VALUE (Connection type)
---
## TSK_WEB_BOOKMARK
A web bookmark entry.
### REQUIRED ATTRIBUTES
- TSK_URL (Bookmarked URL)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_CREATED (Timestamp that this web bookmark was created, in seconds since 1970-01-01T00:00:00Z)
- TSK_DOMAIN (Domain of the bookmarked URL)
- TSK_PROG_NAME (Name of application or application extractor that stored this web bookmark entry)
- TSK_NAME (Name of the bookmark entry)
- TSK_TITLE (Title of the web page that was bookmarked)
---
## TSK_WEB_CACHE
A web cache entry. The resource that was cached may or may not be present in the data source.
### REQUIRED ATTRIBUTES
- TSK_PATH (Path to the cached file. This could point to a container file that has smaller cached data in it.)
- TSK_URL (URL of the resource cached in this entry)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_CREATED (Creation date of the cache entry, in seconds since 1970-01-01T00:00:00Z)
- TSK_HEADERS (HTTP headers on cache entry)
- TSK_PATH_ID (Object ID of the source cache file)
- TSK_DOMAIN (Domain of the URL)
---
## TSK_WEB_COOKIE
A Web cookie found.
### REQUIRED ATTRIBUTES
- TSK_URL (Source URL of the web cookie)
- TSK_NAME (The Web cookie name attribute, e.g., sessionToken)
- TSK_VALUE (The Web cookie value attribute)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_ACCESSED (Datetime the Web Cookie was last accessed, in seconds since 1970-01-01T00:00:00Z)
- TSK_DATETIME_CREATED (Datetime the Web cookie was created, in seconds since 1970-01-01T00:00:00Z)
- TSK_DATETIME_END (Expiration datetime of the Web cookie, in seconds since 1970-01-01T00:00:00Z)
- TSK_DOMAIN (The domain the Web cookie serves)
- TSK_PROG_NAME (Name of the application or application extractor that stored the Web cookie)
---
## TSK_WEB_DOWNLOAD
A Web download. The downloaded resource may or may not be present in the data source.
### REQUIRED ATTRIBUTES
- TSK_URL (URL that hosts this downloaded resource)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_ACCESSED (Last accessed timestamp, in seconds since 1970-01-01T00:00:00Z)
- TSK_DOMAIN (Domain that hosted the downloaded resource)
- TSK_PATH_ID (Object ID of the file instance in the data source)
- TSK_PATH (Path to the downloaded resource in the datasource)
- TSK_PROG_NAME (Name of the application or application extractor that downloaded this resource)
---
## TSK_WEB_FORM_ADDRESS
Contains autofill data for a person's address. Form data is usually saved by a Web browser.
### REQUIRED ATTRIBUTES
- TSK_LOCATION (The address of the person, e.g., 123 Main St.)
### OPTIONAL ATTRIBUTES
- TSK_COMMENT (Comment if the autofill data is encrypted)
- TSK_COUNT (Number of times the Web form data was used)
- TSK_DATETIME_ACCESSED (Last accessed timestamp of the Web form data, in seconds since 1970-01-01T00:00:00Z)
- TSK_DATETIME_MODIFIED (Last modified timestamp of the Web form data, in seconds since 1970-01-01T00:00:00Z)
- TSK_EMAIL (Email address from the form data)
- TSK_NAME_PERSON (Name of a person from the form data)
- TSK_PHONE_NUMBER (Phone number from the form data)
---
## TSK_WEB_FORM_AUTOFILL
Contains autofill data for a Web form. Form data is usually saved by a Web browser. Each field-value pair in the form should be stored in a separate artifact.
### REQUIRED ATTRIBUTES
- One pair of:
- TSK_NAME (Name of the autofill field)
- TSK_VALUE (Value of the autofill field)
### OPTIONAL ATTRIBUTES
- TSK_COMMENT (Comment if the form autofill data is encrypted)
- TSK_COUNT (Number of times this Web form data has been used)
- TSK_DATETIME_CREATED (Datetime this Web form autofill data was created, in seconds since 1970-01-01T00:00:00Z)
- TSK_DATETIME_ACCESSED (Datetime this Web form data was last accessed, in seconds since 1970-01-01T00:00:00Z)
- TSK_PROG_NAME (The application that stored this form information)
---
## TSK_WEB_HISTORY
A Web history entry.
### REQUIRED ATTRIBUTES
- TSK_URL (The URL)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_ACCESSED (The datetime the URL was accessed, in seconds since 1970-01-01T00:00:00Z)
- TSK_DOMAIN (The domain name of the URL)
- TSK_PROG_NAME (The application or application extractor that stored this Web history entry)
- TSK_REFERRER (The URL of a Web page that linked to the page)
- TSK_TITLE (Title of the Web page that was visited)
- TSK_URL_DECODED (The decoded URL)
- TSK_USER_NAME (Name of the user that viewed the Web page)
- TSK_DATETIME_CREATED (The datetime the page was created, e.g., for offline pages)
---
## TSK_WEB_SEARCH_QUERY
Details about a Web search query.
### REQUIRED ATTRIBUTES
- TSK_TEXT (Web search query text)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME_ACCESSED (When the Web search query was last used, in seconds since 1970-01-01T00:00:00Z)
- TSK_DOMAIN (Domain of the search engine used to execute the query)
- TSK_PROG_NAME (Application or application extractor that stored the Web search query)
---
## TSK_WIFI_NETWORK
Details about a WiFi network.
### REQUIRED ATTRIBUTES
- TSK_SSID (The name of the WiFi network)
### OPTIONAL ATTRIBUTES
- TSK_DATETIME (Timestamp, in seconds since 1970-01-01T00:00:00Z. This timestamp could be last connected time or creation time)
- TSK_DEVICE_ID (String that uniquely identifies the WiFi network)
- TSK_MAC_ADDRESS (MAC address of the adapter)
- TSK_DEVICE_MODEL (Model of the device)
---
## TSK_WIFI_NETWORK_ADAPTER
Details about a WiFi adapter.
### REQUIRED ATTRIBUTES
- TSK_MAC_ADDRESS (MAC address of the adapter)
*/
/*! \page mod_bbpage The Blackboard
\section jni_bb_overview Overview
The blackboard allows modules (in Autopsy or other frameworks) to communicate and store results. A module can post data to the blackboard so that subsequent modules can see its results. It can also query the blackboard to see what previous modules have posted.
\subsection jni_bb_concepts Concepts
The blackboard is a collection of <em>artifacts</em>. Each artifact is either a data artifact or an analysis result. In general, data artifacts record data found in the image (ex: a call log entry) while analysis results are more subjective (ex: a file matching a user-created interesting file set rule). Each artifact has a type, such as web browser history, EXIF, or GPS route. The Sleuth Kit has many artifact types already defined (see org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE and the \ref artifact_catalog_page "artifact catalog") and you can also \ref jni_bb_artifact2 "create your own".
Each artifact has a set of name-value pairs called <em>attributes</em>. Attributes also have types, such as URL, created date, or device make. The Sleuth Kit has many attribute types already defined (see org.sleuthkit.datamodel.BlackboardAttribute.ATTRIBUTE_TYPE) and you can also \ref jni_bb_artifact2 "create your own".
See the \ref artifact_catalog_page "artifact catalog" for a list of artifacts and the attributes that should be associated with each.
\subsection jni_bb_specialart Special Artifact Types
There are two special types of artifacts that are used a bit differently than the rest.
The first is the org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE.TSK_GEN_INFO artifact. A Content object should have only one artifact of this type and it is used to store independent attributes that will not be displayed in the UI. Autopsy used to store the MD5 hash and MIME type in TSK_GEN_INFO, but they are now in the files table of the database. There are special methods to access this artifact to ensure that only a single TSK_GEN_INFO artifact is created per Content object and that you get a cached version of the artifact. These methods will be given in the relevant sections below.
The second special type of artifact is the TSK_ASSOCIATED_OBJECT. Every artifact is created as the child of a file or another artifact; a TSK_ASSOCIATED_OBJECT artifact is used to make additional relationships with files and artifacts beyond this parent-child relationship. See the \ref jni_bb_associated_object section below.
\section jni_bb_access Accessing the Blackboard
Modules can access the blackboard through org.sleuthkit.datamodel.SleuthkitCase, org.sleuthkit.datamodel.Blackboard, or a org.sleuthkit.datamodel.Content object. The methods on org.sleuthkit.datamodel.Content limit the scope to that specific content object.
\subsection jni_bb_access_post Posting to the Blackboard
First you need to decide what type of artifact you are making and what category it is. Artifact types fall into two categories:
<ul>
<li>Analysis Result: Result from an analysis technique on a given object with a given configuration. Includes Conclusion, Relevance Score, and Confidence.
<li>Data Artifact: Data that was originally embedded by an application/OS in a file or other data container.
</ul>
Consult the \ref artifact_catalog_page "artifact catalog" for a list of built-in types and what categories they belong to. If you are creating a data artifact, you can optionally add an OS account to it. If you are creating an analysis result, you can optionally add a score and other notes about the result. Note that you must use the category defined in the artifact catalog for each type or you will get an error. For example, you can't create a web bookmark analysis result.
There are many ways to create artifacts, but we will focus on creating them through the Blackboard class or directly through a Content object. Regardless of how they are created, all artifacts must be associated with a Content object.
<ul>
<li>org.sleuthkit.datamodel.AbstractContent.newDataArtifact(BlackboardArtifact.Type artifactType, Collection<BlackboardAttribute> attributesList, Long osAccountId)
<li>org.sleuthkit.datamodel.AbstractContent.newAnalysisResult(BlackboardArtifact.Type artifactType, Score score, String conclusion, String configuration, String justification, Collection<BlackboardAttribute> attributesList)
<li>org.sleuthkit.datamodel.Blackboard.newDataArtifact(BlackboardArtifact.Type artifactType, long sourceObjId, Long dataSourceObjId, Collection<BlackboardAttribute> attributes, Long osAccountId)
<li>org.sleuthkit.datamodel.Blackboard.newAnalysisResult(BlackboardArtifact.Type artifactType, long objId, Long dataSourceObjId, Score score,
String conclusion, String configuration, String justification, Collection<BlackboardAttribute> attributesList, CaseDbTransaction transaction)
</ul>
Attributes are created by making a new instance of org.sleuthkit.datamodel.BlackboardAttribute using one of the various constructors. Attributes can either be added when creating the artifact using the methods in the above list or at a later time using org.sleuthkit.datamodel.BlackboardArtifact.addAttribute() (or org.sleuthkit.datamodel.BlackboardArtifact.addAttributes() if you have several to add - it’s faster). Note that you should not manually add attributes of type JSON for standard attribute types such as TSK_ATTACHMENTS or TSK_GEO_TRACKPOINTS. Instead, you should use the helper classes in org.sleuthkit.datamodel.blackboardutils.attributes or org.sleuthkit.datamodel.blackboardutils to create your artifacts.
If you want to create an attribute in the TSK_GEN_INFO artifact, use org.sleuthkit.datamodel.Content.getGenInfoArtifact() to ensure that you do not create a second TSK_GEN_INFO artifact for the file and to ensure that you used the cached version (which will be faster for you).
\subsubsection jni_bb_artifact2 Creating Multiple Artifacts or Multiple Attributes
In some cases, it may not be clear whether you should post multiple single-attribute artifacts for a file or a single multi-attribute artifact.
Here are some guidelines:
- If a single file is associated with multiple items of the same type (e.g., log entries in a log file, bookmarks in a bookmark file, cookies in a cookie database), then each instance should be posted as a separate artifact so that you can differentiate them and keep all related attributes clearly grouped (e.g., it is clear which date goes with which log entry).
- All attributes in artifacts other than in org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE.TSK_GEN_INFO artifacts should be closely related to each other.
\subsubsection jni_bb_artifact_helpers Artifact Helpers
Artifact helpers are a set of classes that make it easier for module developers to create artifacts.
These classes provide methods that abstract the details of artifacts and attributes, and provide a simpler, more readable API.
The following helpers are available:
<ul>
<li>org.sleuthkit.datamodel.blackboardutils.ArtifactsHelper - provides methods for creating general artifacts
<ul>
<li>addInstalledProgram(): creates TSK_INSTALLED_PROG artifact
</ul></ul>
<ul>
<li>org.sleuthkit.datamodel.blackboardutils.WebBrowserArtifactsHelper - provides methods for creating web browser related artifacts
<ul>
<li>addWebBookmark(): creates TSK_WEB_BOOKMARK artifact for browser bookmarks
<li>addWebCookie(): creates TSK_WEB_COOKIE artifact for browser cookies
<li>addWebDownload(): creates TSK_WEB_DOWNLOAD artifact for web downloads.
<li>addWebFormAddress(): creates TSK_WEB_FORM_ADDRESS artifact for form address data
<li>addWebFormAutofill(): creates TSK_WEB_FORM_AUTOFILL artifact for autofill data
<li>addWebHistory(): creates TSK_WEB_HISTORY artifact for web history.
</ul></ul>
<ul>
<li>org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper - provides methods for communication related artifacts: contacts, call logs, messages.
<ul>
<li>addCalllog(): creates TSK_CALLLOG artifact for call logs.
<li>addContact() creates TSK_CONTACT artifact for contacts.
<li>addMessage() creates a TSK_MESSAGE artifact for messages.
<li>addAttachments() adds attachments to a message.
</ul></ul>
<ul>
<li>org.sleuthkit.datamodel.blackboardutils.GeoArtifactsHelper - provides methods for GPS related artifacts
<ul>
<li>addRoute(): creates TSK_GPS_ROUTE artifact for GPS routes.
<li>addTrack(): creates TSK_GPS_TRACK artifact for GPS tracks.
</ul></ul>
\subsubsection jni_bb_associated_object Associated Objects
Artifacts should be created as children of the file that they were derived or parsed from. For example, a TSK_WEB_DOWNLOAD artifact would be a child of the browser's SQLite database that was parsed. This creates a relationship between the source file and the artifact. But, sometimes you also want to make a relationship between the artifact and another file (or artifact). This is where the TSK_ASSOCIATED_OBJECT artifact comes in.
For example, suppose you have a module that parses a SQLite database that has a log of downloaded files. Each entry might contain the URL the file was downloaded from, timestamp information, and the location the file was saved to on disk. This data would be saved in a TSK_WEB_DOWNLOAD artifact that would be a child of the SQLite database. But suppose the downloaded file also exists in our image. It would be helpful to link that file to our TSK_WEB_DOWNLOAD artifact to show when and where it was downloaded from.
We achieve this relationship by creating a TSK_ASSOCIATED_OBJECT artifact on the downloaded file. This artifact stores the ID of the TSK_WEB_DOWNLOAD artifact in a TSK_ASSOCIATED_ARTIFACT attribute so we have a direct link from the file to the artifact that shows where it came from.
\image html associated_object.png
\subsection jni_bb_query Querying the Blackboard
You can find artifacts by querying the blackboard in a variety of ways. It is preferable to use the methods that specifically return either data artifacts or analysis results, since these return the complete information for the artifact. You can use the more general "Artifact" or "BlackboardArtifact" methods to get both; however, those results will contain only the blackboard attributes, not any associated OS account or score/justification.
Methods for finding artifacts include:
- org.sleuthkit.datamodel.Content.getAllDataArtifacts() to get all data artifacts for a specific Content object.
- org.sleuthkit.datamodel.Content.getAnalysisResults() to get analysis results of a given type for a specific Content object.
- org.sleuthkit.datamodel.Content.getArtifacts() in its various forms to get a specific type of artifact for a specific Content object.
- org.sleuthkit.datamodel.Content.getGenInfoArtifact() to get the TSK_GEN_INFO artifact for a specific content object.
- org.sleuthkit.datamodel.SleuthkitCase.getBlackboardArtifacts() in its various forms to get artifacts based on some combination of artifact type, attribute type and value, and content object.
\section jni_bb_custom_types Custom Artifacts and Attributes
This section outlines how to create custom artifact and attribute types when the standard ones do not meet your needs. Custom artifacts are displayed in the Autopsy UI alongside the built-in artifacts and also appear in the reports.
\subsection jni_bb_custom_make Making Custom Artifacts and Attributes
org.sleuthkit.datamodel.SleuthkitCase.addBlackboardArtifactType() is used to create a custom artifact. Give it the display name, unique name and category (data artifact or analysis result) and it will return a org.sleuthkit.datamodel.BlackboardArtifact.Type object with a unique ID. You will need to call this once for each case to create the artifact ID. You can then use this ID to make an artifact of the given type. To check if the artifact type has already been added to the blackboard or to get the ID after it was created, use org.sleuthkit.datamodel.SleuthkitCase.getArtifactType().
To create custom attributes, use org.sleuthkit.datamodel.SleuthkitCase.addArtifactAttributeType() to create the attribute type and get its ID. Like artifacts, you must create the attribute type for each new case. To get a type after it has been created in the case, use org.sleuthkit.datamodel.SleuthkitCase.getAttributeType(). Your attribute will be a name-value pair where the value is of the type you specified when creating it. The current types are: String, Integer, Long, Double, Byte, Datetime, and JSON. If you believe you need to create an attribute with type JSON, please read the
\ref jni_bb_json_attr_overview "overview" and \ref jni_bb_json_attr "tutorial" sections below.
Note that "TSK" is an abbreviation of "The Sleuth Kit." Artifact and attribute type names with a "TSK_" prefix indicate the names of standard or "built in" types. User-defined artifact and attribute types should not be given names with "TSK_" prefixes.
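To illustrate the naming rule, a module could guard against reserved names before registering a custom type. The requireCustomTypeName() helper below is purely hypothetical and not part of the Sleuth Kit API; it is a minimal sketch of the "no TSK_ prefix for user-defined types" convention:

```java
public class CustomTypeNames {

    // Hypothetical guard: standard Sleuth Kit type names use the reserved
    // "TSK_" prefix, so user-defined type names must not start with it.
    static String requireCustomTypeName(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("Type name must be non-empty");
        }
        if (name.startsWith("TSK_")) {
            throw new IllegalArgumentException(
                    "\"TSK_\" prefix is reserved for standard types: " + name);
        }
        return name;
    }

    public static void main(String[] args) {
        // A custom name like "APP_LOG" passes; "TSK_APP_LOG" would throw.
        System.out.println(requireCustomTypeName("APP_LOG"));
    }
}
```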
\subsection jni_bb_json_attr_overview JSON Attribute Overview
This section gives a quick overview of how to use JSON attributes. If this is your first time using JSON attributes, please read the \ref jni_bb_json_attr "tutorial" section below as well.
\subsubsection jni_bb_json_attr_overview_usage JSON Attribute Usage
Attributes with values of type JSON should be used only when the data can't be stored as an unordered set of attributes. To date, the most common need for this has been where an artifact needs to store multiple ordered instances of the same type of data in a single artifact. For example, one of the standard JSON attributes is TSK_GEO_TRACKPOINTS which stores an ordered list of track points, each containing coordinates, a timestamp, and other data.
\subsubsection jni_bb_json_attr_overview_format JSON Attribute Format
The underlying data in a JSON attribute will be either an array of individual attributes or an array of maps of attributes. For example, an artifact containing two track points could look similar to this (some attributes have been removed for brevity):
\verbatim
{"pointList":
[
{"TSK_DATETIME":1255822646,
"TSK_GEO_LATITUDE":47.644548,
"TSK_GEO_LONGITUDE":-122.326897},
{"TSK_DATETIME":1255822651,
"TSK_GEO_LATITUDE":47.644548,
"TSK_GEO_LONGITUDE":-122.326897}
]
}
\endverbatim
In practice you will not be required to deal with the raw JSON, but it is important to note that in the name/value pairs, the name should always be the name of a blackboard attribute type. This allows Autopsy to better process each attribute, for example by displaying timestamps in human-readable format.
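To make the expected shape concrete, here is a minimal, library-free Java sketch that assembles the track-point JSON above by hand. The TrackPoint class and toPointListJson() helper are hypothetical illustrations only; a real module should use the GeoTrackPoints and BlackboardJsonAttrUtil classes rather than manual string building.

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class TrackPointJsonDemo {

    // Hypothetical holder for one track point. The JSON keys it emits are
    // standard attribute type names so consumers can map the values back.
    static class TrackPoint {
        final long datetime;     // seconds since 1970-01-01T00:00:00Z
        final double latitude;
        final double longitude;

        TrackPoint(long datetime, double latitude, double longitude) {
            this.datetime = datetime;
            this.latitude = latitude;
            this.longitude = longitude;
        }

        // Locale.ROOT keeps the decimal separator a '.' regardless of locale.
        String toJson() {
            return String.format(Locale.ROOT,
                    "{\"TSK_DATETIME\":%d,\"TSK_GEO_LATITUDE\":%.6f,\"TSK_GEO_LONGITUDE\":%.6f}",
                    datetime, latitude, longitude);
        }
    }

    // Wraps the points in the {"pointList": [...]} structure shown above.
    static String toPointListJson(List<TrackPoint> points) {
        return points.stream()
                .map(TrackPoint::toJson)
                .collect(Collectors.joining(",", "{\"pointList\":[", "]}"));
    }

    public static void main(String[] args) {
        String json = toPointListJson(List.of(
                new TrackPoint(1255822646L, 47.644548, -122.326897),
                new TrackPoint(1255822651L, 47.644548, -122.326897)));
        System.out.println(json);
    }
}
```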
\subsubsection jni_bb_json_attr_overview_create Saving JSON Attributes
To start, follow the instructions in the \ref jni_bb_custom_make section above to create your custom attribute with value type JSON. Next you'll need to put your data into the new attribute. There are two general methods:
<ol><li>Manually create the JSON string. This is not recommended as the code will be hard to read and prone to errors.
<li> Create a helper plain old Java object (POJO) to hold the data you want to serialize.
</ol>
Assuming you go the POJO route (highly recommended), there are two options for creating your class. As discussed above, each name in the serialized JSON should match an attribute type name (either built-in or custom). You could create a class like this:
\verbatim
class WebLogEntry {
    long TSK_DATETIME;
    String TSK_URL;
}
\endverbatim
The downside is that code written this way is a bit less readable. The other option is to use annotations to specify which attribute type goes with each of your fields, like this:
\verbatim
class WebLogEntry {
    @SerializedName("TSK_DATETIME")
    long accessDate;
    @SerializedName("TSK_URL")
    String urlVisited;
}
\endverbatim
You may need to make multiple POJOs to hold the data you need to serialize. This would most commonly happen if you want to store a list of values. In our example above, we would likely need to create a WebLog class to hold our list of WebLogEntry objects.
Now we need to convert our object into a JSON attribute. The easiest way to do this is with the method org.sleuthkit.datamodel.blackboardutils.attributes.BlackboardJsonAttrUtil.toAttribute(). This method will return a BlackboardAttribute serialized from your object. You can then add this new attribute to your BlackboardArtifact.
\subsubsection jni_bb_json_attr_overview_load Loading JSON Attributes
If you need to process JSON attributes you created and you created your own POJO as discussed in the previous section, you can use the method org.sleuthkit.datamodel.blackboardutils.attributes.BlackboardJsonAttrUtil.fromAttribute(). It will return an instance of your class containing the data from a given BlackboardAttribute.
\subsection jni_bb_json_attr JSON Attribute Tutorial
The following describes an example of when you might need a JSON-valued attribute and the different methods for creating one. It also shows generally how to create custom artifacts and attributes so may be useful even if you do not need a JSON-type attribute.
Suppose we had a module that could record the last few times an app was accessed and which user opened it. The data we'd like to store for one app could have the form:
\verbatim
App name: Sample App
Logins: user1, 2020-03-31 10:06:37 EDT
user2, 2020-03-30 06:19:57 EDT
user1, 2020-03-26 18:59:57 EDT
\endverbatim
We could make a separate artifact for each of those logins (each with the app name, user name, and timestamp), but it might be nicer to have them all under one artifact and keep them in order. This is where the JSON-type attribute comes into play: we can store all of the login data in a single blackboard attribute.
To start, we'll need to create our new artifact and attribute types. We'll need a new artifact type to hold our login data and a new attribute type to hold the logins themselves (this will be our JSON attribute). We'll use a standard attribute later for the app name. This part should only be done once, possibly in the startUp() method of your ingest module.
\verbatim
SleuthkitCase skCase = Case.getCurrentCaseThrows().getSleuthkitCase();
// Add the new artifact type to the case if it does not already exist
String artifactName = "APP_LOG";
String artifactDisplayName = "Application Logins";
BlackboardArtifact.Type artifactType = skCase.getArtifactType(artifactName);
if (artifactType == null) {
artifactType = skCase.addBlackboardArtifactType(artifactName, artifactDisplayName);
}
// Add the new attribute type to the case if it does not already exist
String attributeName = "LOGIN_DATA";
String attributeDisplayName = "Login Data";
BlackboardAttribute.Type loginDataAttributeType = skCase.getAttributeType(attributeName);
if (loginDataAttributeType == null) {
loginDataAttributeType = skCase.addArtifactAttributeType(attributeName,
BlackboardAttribute.TSK_BLACKBOARD_ATTRIBUTE_VALUE_TYPE.JSON, attributeDisplayName);
}
\endverbatim
You'll want to save the new artifact and attribute type objects to use later.
Now our ingest module can create artifacts for the data it extracts. In the code below, we create our new "APP_LOG" artifact, add a standard attribute for the user name, and then create and store a JSON-formatted string which will contain each entry from the "loginData" list. Note that manually creating the JSON as shown below is not recommended and is just for illustrative purposes - an easier method will be given afterward.
\verbatim
BlackboardArtifact art = content.newArtifact(artifactType.getTypeID());
List<BlackboardAttribute> attributes = new ArrayList<>();
attributes.add(new BlackboardAttribute(BlackboardAttribute.ATTRIBUTE_TYPE.TSK_PROG_NAME, moduleName, appName));

String jsonLoginStr = "{ \"LoginData\" : [ ";
String dataStr = "";
for (LoginData data : loginData) {
    if (!dataStr.isEmpty()) {
        dataStr += ", ";
    }
    dataStr += "{\"TSK_USER_NAME\" : \"" + data.getUserName() + "\", "
            + "\"TSK_DATETIME\" : \"" + data.getTimestamp() + "\"} ";
}
jsonLoginStr += dataStr + " ] }";
attributes.add(new BlackboardAttribute(loginDataAttributeType, moduleName, jsonLoginStr));
art.addAttributes(attributes);
\endverbatim
It is important that each of the name-value pairs starts with an existing blackboard attribute name. This allows Autopsy to use the corresponding value, for example, to extract a timestamp so that the artifact can be shown in the <a href="http://sleuthkit.org/autopsy/docs/user-docs/latest/timeline_page.html">Timeline viewer</a>. Here's what our newly-created artifact will look like in Autopsy:
\image html json_attribute.png
The above method for storing the data works, but formatting the JSON attribute manually is prone to errors. Luckily, in most cases we can serialize a Java object instead of writing the JSON ourselves. If the data that will go into the JSON attribute is contained in plain old Java objects (POJOs), then we can add annotations to that class to produce the JSON automatically. Here they've been added to the LoginData class:
\verbatim
// Requires import com.google.gson.annotations.SerializedName;
private class LoginData {
    @SerializedName("TSK_USER_NAME")
    String userName;
    @SerializedName("TSK_DATETIME")
    long timestamp;

    LoginData(String userName, long timestamp) {
        this.userName = userName;
        this.timestamp = timestamp;
    }
}
\endverbatim
We want our JSON attribute to store a list of these LoginData objects, so we'll create another POJO for that:
\verbatim
private class LoginDataLog {
    List<LoginData> dataLog;

    LoginDataLog() {
        dataLog = new ArrayList<>();
    }

    void addData(LoginData data) {
        dataLog.add(data);
    }
}
\endverbatim
Now we use org.sleuthkit.datamodel.blackboardutils.attributes.BlackboardJsonAttrUtil.toAttribute() to convert our LoginDataLog object into a BlackboardAttribute, greatly simplifying the code. Here, "dataLog" is an instance of a LoginDataLog object that contains all of the login data.
\verbatim
BlackboardArtifact art = content.newArtifact(artifactType.getTypeID());
List<BlackboardAttribute> attributes = new ArrayList<>();
attributes.add(new BlackboardAttribute(BlackboardAttribute.ATTRIBUTE_TYPE.TSK_PROG_NAME, moduleName, appName));
attributes.add(BlackboardJsonAttrUtil.toAttribute(loginDataAttributeType, moduleName, dataLog));
art.addAttributes(attributes);
\endverbatim
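If you later need to read the login data back out of the attribute, org.sleuthkit.datamodel.blackboardutils.attributes.BlackboardJsonAttrUtil.fromAttribute() performs the reverse conversion, deserializing the JSON string into an instance of your POJO class. A minimal sketch (the exception handling shown here is an assumption based on the BlackboardJsonAttrUtil API; check the javadoc for the exact exception type):
\verbatim
BlackboardAttribute attr = art.getAttribute(loginDataAttributeType);
if (attr != null) {
    try {
        // Deserialize the JSON attribute value back into a LoginDataLog object
        LoginDataLog dataLog = BlackboardJsonAttrUtil.fromAttribute(attr, LoginDataLog.class);
    } catch (BlackboardJsonAttrUtil.InvalidJsonException ex) {
        // The attribute value was not valid JSON for a LoginDataLog
    }
}
\endverbatim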
*/
/*! \page mod_compage Communications
NOTE: This is a work in progress
\section jni_com_overview Overview
The Java code and database in Sleuth Kit contain special classes and tables to deal with communications between two parties. This page outlines what a developer should do when they are parsing communications data so that it can be properly displayed and used by other code (such as the Autopsy Communications UI).
\section jni_com_types Terminology
First, let's cover the terminology that we use.
\subsection jni_com_types_account Accounts
An <b>Account</b> is an entity with a type and an identifier that is unique to the type. Common examples of types include:
- Credit Card (and the unique identifier is the credit card number)
- Email (and the unique identifier is the email address)
- Phone (and the unique identifier is the phone number)
- Twitter (with a unique identifier of the login)
- ...
Accounts are found in a digital investigation when parsing structured data (such as email messages) or keyword searching.
\subsection jni_com_types_relationships Relationships
Two accounts have a <b>relationship</b> if they are believed to have communicated in some way. Examples of interactions that cause a relationship are:
- Being part of the same email message
- Being in a call log
- Being in an address book
When there are multiple people involved with an email message, a relationship is made between each of them. For example, if A sends a message to B and CC:s C, then there will be relationships between A <-> B, A <-> C, and B <-> C. Relationships in The Sleuth Kit are not directional.
A <b>relationship source</b> is where we learned about the relationship. This typically comes from Blackboard Artifacts, but may come from generic files in the future.
\subsection jni_com_types_devaccount Device Accounts
In some situations, we may not know a specific account that a relationship exists with. For example, when we find a contact book on a thumb drive, we want to make a relationship between the accounts in the contact book and the accounts associated with the owner of that thumb drive. But, we may not know which accounts are for that owner. The contacts could be just a bunch of vCards and not tied to a specific email or phone number.
In this situation, we make a <b>device account</b> that is associated with the data source or device being analyzed. You should make an account of type Account.Type.DEVICE (instead of something like EMAIL) and the identifier is the device id of the data source where the other accounts were located.
\section jni_com_add Adding Communication Information to Database
Now let's cover what you should do when you are parsing some communications data and want to store it in the TSK database. Let's assume we are parsing a smart phone app that has messages.
\subsection jni_com_add_acct Adding Account Instances
When you encounter a message, the first thing to do is store information about the accounts. TSK wants to know about each <i>file</i> that references the account. You should call org.sleuthkit.datamodel.CommunicationsManager.createAccountFileInstance() for each file in which you encounter a given account.
To make a device account, you'd have logic similar to:
\code
AccountFileInstance deviceAccountInstance = tskCase.getCommunicationsManager().createAccountFileInstance(Account.Type.DEVICE,
abstractFile.getDataSource().getDeviceId(), "Module Name", abstractFile);
\endcode
Behind the scenes, createAccountFileInstance will make an entry in the accounts table for each unique account on a given device and will make a org.sleuthkit.datamodel.BlackboardArtifact for each unique account in a given file.
If you want to create a custom account type, call org.sleuthkit.datamodel.CommunicationsManager.addAccountType().
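As a sketch, registering a custom account type for our hypothetical messaging app could look like the following (the type name and display name are made up for illustration):
\code
// Registers the type if it does not exist yet, otherwise returns the existing type
Account.Type sampleAppType = tskCase.getCommunicationsManager().addAccountType("SAMPLE_APP", "Sample App");
\endcode
You can then pass the returned Account.Type to createAccountFileInstance() when recording accounts of that type.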
\subsection jni_com_add_msg Adding The Message (Relationship Source)
You also need to make sure that you store the org.sleuthkit.datamodel.BlackboardArtifact that used the accounts and had the relationship. You can do this before or after calling createAccountFileInstance(). The order does not matter.
For a messaging app, you would make org.sleuthkit.datamodel.BlackboardArtifact objects with a type of org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE.TSK_MESSAGE. That artifact would store various name and value pairs using org.sleuthkit.datamodel.BlackboardAttribute.ATTRIBUTE_TYPE values. There is nothing communication-specific about this step. It is the same Blackboard artifacts and attributes that are used in many other places.
\subsection jni_com_add_relationship Adding the Relationship
The final step is to store the relationships between the accounts. You can do this via org.sleuthkit.datamodel.CommunicationsManager.addRelationships(). This method will require you to pass in the org.sleuthkit.datamodel.AccountInstance objects that you created and the org.sleuthkit.datamodel.BlackboardArtifact that you created for the message or other source.
The source of the relationship can be a device account (for things like call logs and contacts) if you are unsure about the specific account (such as phone number) associated with the device.
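Continuing the messaging example, a hedged sketch of the final step is shown below. Here "senderInstance" and "recipientInstance" are AccountFileInstance objects created earlier with createAccountFileInstance(), "msgArtifact" is the TSK_MESSAGE artifact, and "dateTime" is the message timestamp in epoch seconds; all of these names are illustrative:
\code
// Requires import java.util.Collections;
tskCase.getCommunicationsManager().addRelationships(senderInstance,
        Collections.singletonList(recipientInstance), msgArtifact,
        Relationship.Type.MESSAGE, dateTime);
\endcode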
As an example, you can refer to some code in Autopsy, such as:
- [Email Module addArtifact()] (https://github.com/sleuthkit/autopsy/blob/develop/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxFileIngestModule.java)
\section jni_com_comm_artifacts_helper Communication Artifacts Helper
An alternative to individually creating artifacts, accounts and relationships is to use the org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper. CommunicationArtifactsHelper provides APIs that create the artifact, create accounts, and create relationships between the accounts, all with a single API call.
\subsection jni_com_comm_artifacts_helper_create_helper Creating a Communications Artifacts Helper
To use the communication artifacts helper, you must first create a new instance of the helper for each source file from which you are extracting communications artifacts. To create a helper, use the constructor org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper.CommunicationArtifactsHelper().
When creating the helper, you must specify the account type for the accounts that will be created by this instance of the helper. Additionally, you may specify the "self" account identifier - i.e., the application-specific account identifier for the owner of the device, if it is known.
If the self account is not known, you may omit it, in which case the helper uses the device account as a proxy for the self account.
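For example, a sketch of creating a helper for a phone-number-based messaging app, without a known self account (the module name here is illustrative; see the CommunicationArtifactsHelper javadoc for the constructor variants, including the one that accepts a self account):
\code
CommunicationArtifactsHelper helper = new CommunicationArtifactsHelper(tskCase,
        "Sample App Analyzer", abstractFile, Account.Type.PHONE);
\endcode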
\subsection jni_com_comm_artifacts_helper_add_contact Adding Contacts
Use the org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper.addContact() method to add contacts.
The helper creates a TSK_CONTACT artifact. It also creates contact accounts for each of the specified contact methods, and finally creates relationships between the contact accounts and the self account.
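As a sketch, adding a contact through a previously created helper might look like the following. The contact values are made up, and the parameter list (name, phone, home phone, mobile phone, email) is taken from memory of the addContact() API, so verify it against the javadoc:
\code
// "helper" is a CommunicationArtifactsHelper created for the source file
helper.addContact("John Doe",           // contact name
        "+1-555-555-0100",              // phone number
        "",                             // home phone (unknown)
        "",                             // mobile phone (unknown)
        "jdoe@example.com");            // email address
\endcode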
\subsection jni_com_comm_artifacts_helper_add_calllog Adding Call logs
Use the org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper.addCalllog() method to add a call log entry.
The helper creates a TSK_CALLLOG artifact. It also creates accounts for the caller and each of the callees, if specified. Finally it creates a relationship between the caller and each of the callees.
\subsection jni_com_comm_artifacts_helper_add_message Adding Messages
Use the org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper.addMessage() method to add a message.
The helper creates a TSK_MESSAGE artifact. It also creates accounts for the sender and each of the recipients, if specified. Finally it creates a relationship between the sender and each of the recipients.
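A hedged sketch of adding a single outgoing SMS message through the helper is shown below. The phone numbers, timestamp, and message text are illustrative, and the parameter order is taken from memory of the addMessage() API, so verify it against the javadoc:
\code
// Requires import java.util.Collections;
// "helper" is a CommunicationArtifactsHelper created for the source file
BlackboardArtifact msgArtifact = helper.addMessage(
        "SMS",                                                        // message type
        CommunicationArtifactsHelper.CommunicationDirection.OUTGOING, // direction
        "+1-555-555-0100",                                            // sender ID
        Collections.singletonList("+1-555-555-0199"),                 // recipient IDs
        1585663597L,                                                  // date/time (epoch seconds)
        CommunicationArtifactsHelper.MessageReadStatus.READ,          // read status
        "",                                                           // subject (none)
        "See you at noon",                                            // message text
        "");                                                          // thread ID (none)
\endcode
The returned artifact can then be passed to addAttachments() if the message has attachments.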
\subsection jni_com_comm_artifacts_helper_add_attachments Adding Attachments to message
Use the org.sleuthkit.datamodel.blackboardutils.CommunicationArtifactsHelper.addAttachments() method to add org.sleuthkit.datamodel.blackboardutils.attributes.MessageAttachments to a message.
As an example, you can refer to some code in Autopsy, such as:
- [Android Text Messages] (https://github.com/sleuthkit/autopsy/blob/develop/InternalPythonModules/android/textmessage.py)
- [Facebook messenger Messages] (https://github.com/sleuthkit/autopsy/blob/develop/InternalPythonModules/android/fbmessenger.py)
\section jni_com_schema Database Schema
For details of how this is stored in the database, refer to the
<a href="http://wiki.sleuthkit.org/index.php?title=Database_v7.2_Schema#Communications_.2F_Accounts">wiki</a>.
*/