

Miscellanea

These are just some random thoughts of mine. Your mileage may vary.

Limitations of the compressed file format

bzip2-1.0, 0.9.5 and 0.9.0 use exactly the same file format as the previous version, bzip2-0.1. This decision was made in the interests of stability. Creating yet another incompatible compressed file format would create further confusion and disruption for users.

Nevertheless, this is not a painless decision. Development work since the release of bzip2-0.1 in August 1997 has shown complexities in the file format which slow down decompression and, in retrospect, are unnecessary. These are:

It would be fair to say that the bzip2 format was frozen before I properly and fully understood the performance consequences of doing so.

Improvements which I was able to incorporate into 0.9.0, despite using the same file format, are:

Further ahead, it would be nice to be able to do random access into files. This will require some careful design of compressed file formats.

Portability issues

After some consideration, I have decided not to use GNU autoconf to configure 0.9.5 or 1.0.

autoconf, admirable and wonderful though it is, mainly assists with portability problems between Unix-like platforms. But bzip2 doesn't have much in the way of portability problems on Unix; most of the difficulties appear when porting to the Mac, or to Microsoft's operating systems. autoconf doesn't help in those cases, and brings in a whole load of new complexity.

Most people should be able to compile the library and program under Unix straight out-of-the-box, so to speak, especially if a version of GNU C is available.

There are a couple of __inline__ directives in the code. GNU C (gcc) should be able to handle them. If you're not using GNU C, your C compiler shouldn't see them at all. If your compiler does, for some reason, see them and doesn't like them, just #define __inline__ to be /* */. One easy way to do this is to compile with the flag -D__inline__=, which should be understood by most Unix compilers.

If you still have difficulties, try compiling with the macro BZ_STRICT_ANSI defined. This should enable you to build the library in a strictly ANSI-compliant environment. Building the program itself like this is dangerous and not supported, since you remove bzip2's checks against compressing directories, symbolic links, devices, and other not-really-a-file entities. This could cause filesystem corruption!

One other thing: if you create a bzip2 binary for public distribution, please try and link it statically (gcc -static). This avoids all sorts of library-version issues that others may encounter later on.

If you build bzip2 on Win32, you must set BZ_UNIX to 0 and BZ_LCCWIN32 to 1, in the file bzip2.c, before compiling. Otherwise the resulting binary won't work correctly.
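The relevant settings near the top of bzip2.c look something like this (the exact surrounding code may differ between versions):

```c
/* In bzip2.c: platform selection for a Win32 build.
   Set BZ_UNIX to 0 and BZ_LCCWIN32 to 1 before compiling. */
#define BZ_UNIX      0
#define BZ_LCCWIN32  1
```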

Reporting bugs

I tried pretty hard to make sure bzip2 is bug free, both by design and by testing. Hopefully you'll never need to read this section for real.

Nevertheless, if bzip2 dies with a segmentation fault, a bus error or an internal assertion failure, it will ask you to email me a bug report. Experience with version 0.1 shows that almost all these problems can be traced to either compiler bugs or hardware problems.

If you've incorporated libbzip2 into your own program and are getting problems, please, please, please check that the parameters you are passing in calls to the library are correct, and in accordance with what the documentation says is allowable. I have tried to make the library robust against such problems, but I'm sure I haven't succeeded.

Finally, if the above comments don't help, you'll have to send me a bug report. Now, it's just amazing how many people will send me a bug report saying something like

   bzip2 crashed with segmentation fault on my machine

and absolutely nothing else. Needless to say, such a report is totally, utterly, completely and comprehensively 100% useless; a waste of your time, my time, and net bandwidth. With no details at all, there's no way I can possibly begin to figure out what the problem is.

The rules of the game are: facts, facts, facts. Don't omit them because "oh, they won't be relevant". At the bare minimum:

   Machine type.
   Operating system version.
   Exact version of bzip2 (do bzip2 -V).
   Exact version of the compiler used.
   Flags passed to the compiler.

However, the most important single thing that will help me is the file that you were trying to compress or decompress at the time the problem happened. Without that, my ability to do anything more than speculate about the cause is limited.

Please remember that I connect to the Internet with a modem, so you should contact me before mailing me huge files.

Did you get the right package?

bzip2 is a resource hog. It soaks up large amounts of CPU cycles and memory. Also, it gives very large latencies. In the worst case, you can feed many megabytes of uncompressed data into the library before getting any compressed output, so this probably rules out applications requiring interactive behaviour.

These aren't faults of my implementation, I hope, but more an intrinsic property of the Burrows-Wheeler transform (unfortunately). Maybe this isn't what you want.

If you want a compressor and/or library which is faster, uses less memory but gets pretty good compression, and has minimal latency, consider Jean-loup Gailly's and Mark Adler's work, zlib-1.1.2 and gzip-1.2.4. Look for them at

http://www.cdrom.com/pub/infozip/zlib and http://www.gzip.org respectively.

For something faster and lighter still, you might try Markus F X J Oberhumer's LZO real-time compression/decompression library, at
http://wildsau.idv.uni-linz.ac.at/mfx/lzo.html.

If you want to use the bzip2 algorithms to compress small blocks of data, 64k bytes or smaller, for example in an on-the-fly disk compressor, you'd be well advised not to use this library. Instead, I've made a special library tuned for that kind of use. It's part of e2compr-0.40, an on-the-fly disk compressor for the Linux ext2 filesystem. Look at http://www.netspace.net.au/~reiter/e2compr.

Testing

A record of the tests I've done.

First, some data sets:

The tests conducted are as follows. Each test means compressing (a copy of) each file in the data set, decompressing it and comparing it against the original.

First, a bunch of tests with block sizes and internal buffer sizes set very small, to detect any problems with the blocking and buffering mechanisms. This required modifying the source code so as to try to break it.

  1. Data set H, with buffer size of 1 byte, and block size of 23 bytes.
  2. Data set B, buffer sizes 1 byte, block size 1 byte.
  3. As (2) but small-mode decompression.
  4. As (2) with block size 2 bytes.
  5. As (2) with block size 3 bytes.
  6. As (2) with block size 4 bytes.
  7. As (2) with block size 5 bytes.
  8. As (2) with block size 6 bytes and small-mode decompression.
  9. H with buffer size of 1 byte, but normal block size (up to 900000 bytes).

Then some tests with unmodified source code.

  1. H, all settings normal.
  2. As (1), with small-mode decompress.
  3. H, compress with flag -1.
  4. H, compress with flag -s, decompress with flag -s.
  5. Forwards compatibility: H, bzip2-0.1pl2 compressing, bzip2-0.9.5 decompressing, all settings normal.
  6. Backwards compatibility: H, bzip2-0.9.5 compressing, bzip2-0.1pl2 decompressing, all settings normal.
  7. Bigger tests: A, all settings normal.
  8. As (7), using the fallback (Sadakane-like) sorting algorithm.
  9. As (8), compress with flag -1, decompress with flag -s.
  10. H, using the fallback sorting algorithm.
  11. Forwards compatibility: A, bzip2-0.1pl2 compressing, bzip2-0.9.5 decompressing, all settings normal.
  12. Backwards compatibility: A, bzip2-0.9.5 compressing, bzip2-0.1pl2 decompressing, all settings normal.
  13. Misc test: about 400 megabytes of .tar files with bzip2 compiled with Checker (a memory access error detector, like Purify).
  14. Misc tests to make sure it builds and runs ok on non-Linux/x86 platforms.

These tests were conducted on a 225 MHz IDT WinChip machine, running Linux 2.0.36. They represent nearly a week of continuous computation. All tests completed successfully.

Further reading

bzip2 is not research work, in the sense that it doesn't present any new ideas. Rather, it's an engineering exercise based on existing ideas.

Four documents describe essentially all the ideas behind bzip2:

Michael Burrows and D. J. Wheeler:
  "A block-sorting lossless data compression algorithm"
   10th May 1994. 
   Digital SRC Research Report 124.
   ftp://ftp.digital.com/pub/DEC/SRC/research-reports/SRC-124.ps.gz
   If you have trouble finding it, try searching at the
   New Zealand Digital Library, http://www.nzdl.org.

Daniel S. Hirschberg and Debra A. LeLewer
  "Efficient Decoding of Prefix Codes"
   Communications of the ACM, April 1990, Vol 33, Number 4.
   You might be able to get an electronic copy of this
      from the ACM Digital Library.

David J. Wheeler
   Program bred3.c and accompanying document bred3.ps.
   This contains the idea behind the multi-table Huffman
   coding scheme.
   ftp://ftp.cl.cam.ac.uk/users/djw3/

Jon L. Bentley and Robert Sedgewick
  "Fast Algorithms for Sorting and Searching Strings"
   Available from Sedgewick's web page,
   www.cs.princeton.edu/~rs

The following paper gives valuable additional insights into the algorithm, but is not immediately the basis of any code used in bzip2.

Peter Fenwick:
  "Block Sorting Text Compression"
   Proceedings of the 19th Australasian Computer Science Conference,
     Melbourne, Australia.  Jan 31 - Feb 2, 1996.
   ftp://ftp.cs.auckland.ac.nz/pub/peter-f/ACSC96paper.ps

Kunihiko Sadakane's sorting algorithm, mentioned above, is available from:

http://naomi.is.s.u-tokyo.ac.jp/~sada/papers/Sada98b.ps.gz

The Manber-Myers suffix array construction algorithm is described in a paper available from:

http://www.cs.arizona.edu/people/gene/PAPERS/suffix.ps

Finally, the following paper documents some recent investigations I made into the performance of sorting algorithms:

Julian Seward:
  "On the Performance of BWT Sorting Algorithms"
   Proceedings of the IEEE Data Compression Conference 2000
     Snowbird, Utah.  28-30 March 2000.

