pigz [ -cdfhikKlLnNqrRtTz0..9 ] [ -b blocksize ] [ -p threads ] [ -S suffix ] [ name ... ]
unpigz [ -cfhikKlLnNqrRtTz ]      [ -b blocksize ] [ -p threads ] [ -S suffix ] [ name ... ]

Pigz compresses using threads to make use of multiple processors and cores.

The input is broken up into 128 KB chunks, each of which is compressed in parallel. The individual check value for each chunk is also calculated in parallel. The compressed data is written in order to the output, and a combined check value is calculated from the individual check values.
The compressed data is generated in the gzip, zlib, or single-entry zip format using the deflate compression method. The compression produces partial raw deflate streams, which are concatenated by a single write thread and wrapped with the appropriate header and trailer, where the trailer contains the combined check value.
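For gzip output the per-chunk check values are CRC-32s, which pigz merges with zlib's crc32_combine(). Python's zlib module exposes only the sequential chaining form, but a rough sketch still shows why per-chunk checks can be merged into a whole-input check (chunk contents here are arbitrary):

```python
import zlib

chunks = [b"alpha", b"beta", b"gamma"]

# CRC-32 of the whole input in one pass...
whole = zlib.crc32(b"".join(chunks))

# ...matches the per-chunk CRCs chained in order. pigz computes each
# chunk's CRC in parallel and merges them with zlib's crc32_combine();
# Python only exposes this sequential chaining form.
running = 0
for chunk in chunks:
    running = zlib.crc32(chunk, running)

assert running == whole
```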

Each partial raw deflate stream is terminated by an empty stored block (using the Z_SYNC_FLUSH option of zlib), in order to end that partial bit stream at a byte boundary. That allows the partial streams to be concatenated simply as sequences of bytes. This adds a very small four to five byte overhead to the output for each input chunk.
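This concatenation can be sketched with Python's zlib module (raw deflate via wbits=-15; the compression level and chunk contents are arbitrary choices for illustration):

```python
import zlib

chunks = [b"hello " * 1000, b"world " * 1000]

parts = []
for chunk in chunks:
    # Raw deflate stream (wbits=-15: no zlib/gzip header or trailer).
    co = zlib.compressobj(6, zlib.DEFLATED, -15)
    data = co.compress(chunk)
    # Z_SYNC_FLUSH ends the partial stream at a byte boundary with an
    # empty stored block -- the four to five byte per-chunk overhead.
    data += co.flush(zlib.Z_SYNC_FLUSH)
    parts.append(data)

# A final empty deflate stream supplies the last block of the output.
tail = zlib.compressobj(6, zlib.DEFLATED, -15).flush()

# The byte-wise concatenation is itself one valid deflate stream.
restored = zlib.decompressobj(-15).decompress(b"".join(parts) + tail)
assert restored == b"".join(chunks)
```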
The default input block size is 128K, but can be changed with -b. The number of compress threads is set by default to the number of online processors, which can be changed using -p. Specifying -p 1 avoids the use of threads entirely.

The input blocks, while compressed independently, have the last 32K of the previous block loaded as a preset dictionary to preserve the compression effectiveness of deflating in a single thread. This can be turned off using -i or --independent, so that the blocks can be decompressed independently for partial error recovery or for random access.
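The preset-dictionary priming can also be sketched with Python's zlib, via the zdict argument of compressobj (4 KiB blocks here are a hypothetical size chosen for brevity; pigz uses 128K blocks):

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 500
BLOCK = 4096
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

parts = []
for i, block in enumerate(blocks):
    # Prime each compressor with up to the last 32K of the previous
    # block, as pigz does unless -i / --independent is given.
    if i:
        co = zlib.compressobj(6, zlib.DEFLATED, -15,
                              zdict=blocks[i - 1][-32768:])
    else:
        co = zlib.compressobj(6, zlib.DEFLATED, -15)
    parts.append(co.compress(block) + co.flush(zlib.Z_SYNC_FLUSH))

tail = zlib.compressobj(6, zlib.DEFLATED, -15).flush()

# A single-pass decompressor needs no dictionaries: the back-references
# resolve against the previous block's bytes already in its window.
assert zlib.decompressobj(-15).decompress(b"".join(parts) + tail) == data
```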
Decompression can't be parallelized, at least not without specially prepared deflate streams for that purpose. As a result, pigz uses a single thread (the main thread) for decompression, but will create three other threads for reading, writing, and check calculation, which can speed up decompression under some circumstances. Parallel decompression can be turned off by specifying one process (-dp 1 or -tp 1). Compressed files can be restored to their original form using pigz -d or unpigz.
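Putting the pieces together, the following Python sketch shows why any ordinary single-threaded gzip decompressor can restore pigz output: the concatenated partial raw deflate streams, wrapped with a standard gzip header and a trailer carrying the combined CRC-32 and length, form a plain gzip stream. Chunk sizes and contents here are arbitrary.

```python
import gzip
import struct
import zlib

chunks = [b"pigz " * 2000, b"demo " * 2000]
data = b"".join(chunks)

body = b""
for i, chunk in enumerate(chunks):
    if i:  # prime with the previous chunk's last 32K, as pigz does
        co = zlib.compressobj(9, zlib.DEFLATED, -15,
                              zdict=chunks[i - 1][-32768:])
    else:
        co = zlib.compressobj(9, zlib.DEFLATED, -15)
    body += co.compress(chunk) + co.flush(zlib.Z_SYNC_FLUSH)
body += zlib.compressobj(9, zlib.DEFLATED, -15).flush()  # final block

# Minimal gzip header: magic, deflate, no flags, zero mtime, OS=Unix.
header = b"\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03"
# gzip trailer: CRC-32 and length (mod 2^32) of the uncompressed data.
trailer = struct.pack("<II", zlib.crc32(data), len(data) & 0xFFFFFFFF)

# A standard, single-threaded gzip decompressor reads it back.
assert gzip.decompress(header + body + trailer) == data
```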

  -#                    Regulate the speed of compression using the specified
                        digit #, where -1 or --fast indicates the fastest
                        compression method (less compression) and -9 or --best
                        indicates the slowest compression method (best
                        compression). Level 0 is no compression.
  -b --blocksize k      Set compression block size to kK (default 128 KiB).
  -c --stdout           Write all processed output to stdout (won't delete the
                        input files). Useful for piping the output to another
                        process.
  -f --force            Force overwrite, compress .gz, links, and to terminal.
  -i --independent      Compress blocks independently for damage recovery.
  -k --keep             Do not delete original file after processing.
  -K --zip              Compress to PKWare zip (.zip) single entry format.
  -n --no-name          Do not store or restore file name in/from header.
  -N --name             Store/restore file name and mod time in/from header.
  -p --processes n      Allow up to n processes (default is the number of
                        online processors).
  -q --quiet            Output no messages, even on error.
  -r --recursive        Process the contents of all subdirectories.
  -S --suffix .sss      Use suffix .sss instead of .gz (for compression).
  -t --test             Test the integrity of the compressed input.
  -T --no-time          Do not store or restore mod time in/from header.
  -v --verbose          Provide more verbose output.
  -z --zlib             Compress to zlib (.zz) instead of gzip format.
  -l --list             List the contents of the compressed input.
  -d --decompress       Decompress the compressed input.



This software is provided 'as-is', without any express or implied warranty. In no event will the author be held liable for any damages arising from the use of this software.
Copyright (C) 2007, 2008, 2009, 2010 Mark Adler