Conntrack hash function comparisons

Many thanks to Patrick Schaaf, Don Cohen, Martin Josefsson and Joakim Axelsson for their work and thanks to Robert Uzgalis for providing the source code of his buzhash function.

I used a modified version of Patrick's cttest program.

Hash function comparisons

We test the hash functions with different input datasets: matrix, transproxy, 500, 5000 and 50000.

We also vary the hash size; for each exponent n we test three cases: 2^n-1, 2^n, and the first prime greater than 2^n.

The actual hash sizes are summarized in the table below, where n goes from 13 to 18:
     2^n-1       2^n   first prime > 2^n
      8191      8192        8209
     16383     16384       16411
     32767     32768       32771
     65535     65536       65537
    131071    131072      131101
    262143    262144      262147
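
The table itself is easy to reproduce; a small standalone helper like the following prints the same three columns (just a sketch, not part of cttest):

    #include <stdio.h>

    /* Trial division is sufficient for numbers of this size. */
    int is_prime(unsigned long x)
    {
        unsigned long d;

        if (x < 2)
            return 0;
        for (d = 2; d * d <= x; d++)
            if (x % d == 0)
                return 0;
        return 1;
    }

    int main(void)
    {
        unsigned long size, prime;
        int n;

        for (n = 13; n <= 18; n++) {
            size = 1UL << n;
            /* find the first prime above 2^n */
            for (prime = size + 1; !is_prime(prime); prime++)
                ;
            printf("%8lu %8lu %8lu\n", size - 1, size, prime);
        }
        return 0;
    }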

Ideal hash

The ideal hash puts an equal number of entries into every hash bucket.
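
In other words, with N entries and hash size H no bucket of the ideal hash is longer than ceil(N/H). A trivial illustration (the numbers are only an example):

    #include <stdio.h>

    int main(void)
    {
        unsigned long entries = 50000;  /* example dataset size */
        unsigned long hsize = 16384;    /* example hash size */

        /* The ideal hash fills the buckets evenly, so the maximal
         * bucket length is ceil(entries / hsize), here 4. */
        printf("ideal maximal bucket length: %lu\n",
               (entries + hsize - 1) / hsize);
        return 0;
    }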

In order to get an overall picture, we generate several graphs "relative" to the ideal hash.

Because the hashes (usually) behave very badly for hash sizes that are powers of two, only the results for odd hash sizes are displayed on these graphs.

Random hash

The random hash places the conntrack entries into the hash buckets randomly; on the hash comparison graphs it is our reference. As a first analysis of the hashes, we look for maximal bucket lengths longer than that of the random hash and/or large deviations from the graph of the random hash.
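
A minimal sketch of what the random reference does, assuming the C library's random() (the generator actually used by cttest may differ):

    #include <stdio.h>
    #include <stdlib.h>

    /* Drop each entry into a random bucket, ignoring the tuple
     * completely.  This only exercises the random number generator,
     * which is why the random hash itself can look a bit odd in
     * some runs. */
    unsigned long random_hash(unsigned long hsize)
    {
        return (unsigned long)random() % hsize;
    }

    int main(void)
    {
        int i;

        srandom(1);
        for (i = 0; i < 5; i++)
            printf("bucket %lu\n", random_hash(8191));
        return 0;
    }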

Please note that the random hash is not identical to the ideal hash, and we actually test the random number generator as well. There are cases when the random hash "behaves" a little bit oddly:
dataset matrix, hash size 8191: random,
dataset matrix, hash size 8209: random,
etc.

The original conntrack hash function

The original hash turns out to be really bad for hash sizes that are powers of two:
dataset matrix, hash size 8192: original,
dataset matrix, hash size 16384: original,
etc., but there are other cases as well where it performs very badly:
dataset transproxy, overall
dataset matrix, hash size 65537: original,
dataset 500, hash size 16383: original,
etc., etc.

65537 is a magic bad number for the original hash.
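
At least the power-of-two failures are easy to illustrate with a rough sketch of an additive, modulo-reduced hash (made-up field names, not the actual conntrack code): when the table size is a power of two, the modulo degenerates into a bit mask and only the low-order bits of the inputs matter.

    struct tuple {
        unsigned int src_ip, dst_ip;
        unsigned short src_port, dst_port;
        unsigned char proto;
    };

    /* Add up the tuple fields and reduce modulo the table size.
     * With hsize = 2^n the "% hsize" keeps only the lowest n bits
     * of the sum, so the high-order bits of the addresses and ports
     * never influence the bucket index. */
    unsigned int additive_hash(const struct tuple *t, unsigned int hsize)
    {
        return (t->src_ip + t->dst_ip +
                t->src_port + t->dst_port + t->proto) % hsize;
    }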

We truly need a better hash function.

Guilmette and Pearson hashes

The guilmette hash was suggested by Ron Guilmette on comp.compilers, 13 Dec 1990 06:25:10. The pearson hash is the famous one :-). The original versions of both hashes generate hash keys between 0 and 255; I created the tested variants, which produce unsigned long hash keys. Both hashes are bad for hash sizes that are powers of two.
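
For reference, a sketch of the Pearson scheme and of one common way to widen it to an unsigned long result; the permutation table and the widening trick below are only illustrative, the variants actually benchmarked may be built differently:

    #include <stdio.h>
    #include <stddef.h>

    /* Pearson's hash walks the key through a permutation table of
     * 0..255.  A real implementation would use a randomly shuffled
     * table; multiplying the index by an odd constant modulo 256 is
     * just a simple way to get some permutation for this sketch. */
    unsigned char T[256];

    void pearson_init(void)
    {
        unsigned int i;

        for (i = 0; i < 256; i++)
            T[i] = (unsigned char)(i * 167 + 13);
    }

    /* Classic 8-bit Pearson hash. */
    unsigned char pearson8(const unsigned char *key, size_t len,
                           unsigned char seed)
    {
        unsigned char h = seed;
        size_t i;

        for (i = 0; i < len; i++)
            h = T[h ^ key[i]];
        return h;
    }

    /* Widen to 32 bits by running the 8-bit hash four times with
     * different seeds and concatenating the bytes. */
    unsigned long pearson32(const unsigned char *key, size_t len)
    {
        return ((unsigned long)pearson8(key, len, 0) << 24) |
               ((unsigned long)pearson8(key, len, 1) << 16) |
               ((unsigned long)pearson8(key, len, 2) << 8) |
                (unsigned long)pearson8(key, len, 3);
    }

    int main(void)
    {
        pearson_init();
        printf("%lx\n", pearson32((const unsigned char *)"example", 7));
        return 0;
    }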

In some cases there are smaller problems with these hashes for odd hash sizes as well:
dataset matrix, hash size 32767: guilmette,
dataset transproxy, hash size 8209: pearson,
dataset 500, hash size 8191: guilmette,
dataset 5000, hash size 16411: pearson,
dataset 50000, hash size 32767: pearson,
etc.

crc32 hashes: crc32, crc32_2x6, crc32_bof

The crc32 hashes use the crc32 function of zlib; the implementation is by Patrick. For every variant one can find hash size/dataset combinations where there are tiny problems with the hashes:
dataset matrix, hash size 8191: crc32,
dataset matrix, hash size 16411: crc32 2x6,
dataset matrix, hash size 65537: crc32, crc32 bof,
dataset matrix, hash size 131071: crc32, crc32 bof,
dataset 500, hash size 8191: crc32 2x6,
dataset 500, hash size 8191: crc32, crc32 2x6,
etc.

These hashes are not sensitive to hash sizes that are powers of two.
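
A minimal sketch of the general approach, assuming the whole tuple is fed to zlib's crc32() and the result is reduced modulo the hash size (this is not Patrick's implementation and ignores the 2x6 and bof variants):

    #include <zlib.h>

    struct tuple {
        unsigned int src_ip, dst_ip;
        unsigned short src_port, dst_port;
        unsigned char proto;
    };

    /* Hash the raw bytes of the tuple with zlib's crc32() and reduce
     * the result modulo the hash size.  (For simplicity this also
     * hashes any structure padding.)  CRC32 mixes every input bit
     * into the low-order bits, which fits the observation above that
     * these hashes do not mind power-of-two hash sizes. */
    unsigned long crc32_hash(const struct tuple *t, unsigned long hsize)
    {
        uLong crc = crc32(0L, Z_NULL, 0);

        crc = crc32(crc, (const Bytef *)t, sizeof(*t));
        return (unsigned long)crc % hsize;
    }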

abcd hashes: abcd, abcdx, abcdef, abcd_xor

The abcd and abcdef hashes were proposed by Don Cohen, the latter coded by Joakim Axelsson. The abcd_xor hash is by Joakim Axelsson and Martin Josefsson. I "wrote" the abcdx hash, which is the same as the abcd hash, but the four random multipliers were chosen so that at any bit position in the four numbers there are exactly two zeros and two ones (similarly to the rtable in buzhash). The abcd hashes are bad at hash sizes that are powers of two (dataset matrix, hash size 8192, etc., etc.).
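
A sketch of a multiply-and-add hash in this spirit is shown below; the multipliers and the way the tuple fields are grouped are placeholders, not the constants actually used in cttest:

    struct tuple {
        unsigned int src_ip, dst_ip;
        unsigned short src_port, dst_port;
        unsigned char proto;
    };

    /* Placeholder multipliers; for the abcdx variant they would be
     * chosen so that every bit position has exactly two ones across
     * the four constants. */
    #define MULT_A 0x9e3779b9UL
    #define MULT_B 0x7ed55d16UL
    #define MULT_C 0xc761c23cUL
    #define MULT_D 0x165667b1UL

    /* Multiply each tuple field by its own constant and sum the
     * products before the final reduction. */
    unsigned long abcd_style_hash(const struct tuple *t, unsigned long hsize)
    {
        unsigned long h;

        h = MULT_A * t->src_ip
          + MULT_B * t->dst_ip
          + MULT_C * (((unsigned long)t->src_port << 16) | t->dst_port)
          + MULT_D * t->proto;
        return h % hsize;
    }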

Otherwise the abcd hashes are remarkably good; there are small anomalies for every hash in some cases:
dataset matrix, hash size 8191: abcd,
dataset matrix, hash size 16411: abcdef, abcd xor,
dataset matrix, hash size 65535: abcdef,
dataset matrix, hash size 65537: abcdx, abcdef,
dataset matrix, hash size 131071: abcdx,
dataset transproxy, hash size 16411: abcd, abcd xor,
dataset 500, hash size 8209: abcdef, abcd xor,
dataset 500, hash size 131071: abcdef,
dataset 50000, hash size 32771: abcd,
etc.

rt hashes: rt, rt_ab, rt_ab2, rt_new

The rt hashes were derived from the IP routing code by Joakim and Martin. For the real dataset, the ab variant has the smallest deviations from the random hash compared to the others in this family:
dataset matrix, hash size 32771: rt new,
dataset matrix, hash size 65535: rt ab2,
dataset matrix, hash size 131071: rt,
etc.

In the artificial tests the rt family behaves very badly, even on the smallest dataset. At the beginning rt and rt ab seem to be OK:
dataset 500, hash size 8191: rt ab2, rt new,
dataset 500, hash size 8209: rt ab2, rt new,
etc.

but later there are cases when rt and rt ab produce bad results as well:
dataset 500, hash size 131071: rt, rt ab,
dataset 5000, hash size 65535: rt, rt ab,
etc.

jenkins hashes and buzhash: jenkins, jenkins2, jenkins2b, buzhash

The jenkins hash is by Bob Jenkins, while buzhash is by Robert Uzgalis. The jenkins2 and jenkins2b variants are hashes I optimized for conntrack. Overall these hashes perform very well. There are slight deviations, like
dataset matrix, hash size 8191: jenkins2b, buzhash,
dataset matrix, hash size 8902: jenkins, jenkins2b,
dataset matrix, hash size 16383: jenkins2,
dataset 500, hash size 8191: buzhash,
mostly for hash sizes that are not powers of two.
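
For reference, a sketch built around Bob Jenkins' 32-bit mixing step (the same mix used by the Linux kernel's jhash); the way the tuple is actually fed into the jenkins, jenkins2 and jenkins2b variants above may differ from this:

    #include <stdint.h>

    /* Bob Jenkins' lookup2 mixing step for three 32-bit words. */
    #define JENKINS_MIX(a, b, c) do {           \
        a -= b; a -= c; a ^= (c >> 13);         \
        b -= c; b -= a; b ^= (a << 8);          \
        c -= a; c -= b; c ^= (b >> 13);         \
        a -= b; a -= c; a ^= (c >> 12);         \
        b -= c; b -= a; b ^= (a << 16);         \
        c -= a; c -= b; c ^= (b >> 5);          \
        a -= b; a -= c; a ^= (c >> 3);          \
        b -= c; b -= a; b ^= (a << 10);         \
        c -= a; c -= b; c ^= (b >> 15);         \
    } while (0)

    struct tuple {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t proto;
    };

    /* Pack the tuple into three words, mix, reduce.  0x9e3779b9 is
     * the golden ratio constant Jenkins uses as an arbitrary start
     * value; initval lets the caller mix in a random seed. */
    uint32_t jenkins_style_hash(const struct tuple *t, uint32_t initval,
                                uint32_t hsize)
    {
        uint32_t a = t->src_ip + 0x9e3779b9;
        uint32_t b = t->dst_ip + 0x9e3779b9;
        uint32_t c = (((uint32_t)t->src_port << 16) | t->dst_port)
                     + t->proto + initval;

        JENKINS_MIX(a, b, c);
        return c % hsize;
    }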

Speed comparison

Sorting the hash algorithms according to their speed, we found the following:

Looking more closely at the hash functions that are faster than the original one, we can draw the following conclusion: if we want anything faster than the original, then we have to choose from the abcd or rt families. The only hash function which is slower, but not much slower, than the original one is the optimized jenkins hash. Due to their slowness, we have to rule out the crc32, guilmette, pearson and buzhash hashes.
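
The kind of measurement behind this ordering can be reproduced with a simple timing loop; a standalone sketch (dummy_hash is merely a stand-in for the real candidates):

    #include <stdio.h>
    #include <time.h>

    /* Stand-in hash; in a real test each candidate would be plugged
     * in here. */
    unsigned long dummy_hash(unsigned long x, unsigned long hsize)
    {
        return (x * 0x9e3779b9UL) % hsize;
    }

    /* volatile sink so the compiler cannot optimize the loop away */
    volatile unsigned long sink;

    int main(void)
    {
        const unsigned long iterations = 10 * 1000 * 1000;
        unsigned long i;
        clock_t start, stop;

        start = clock();
        for (i = 0; i < iterations; i++)
            sink = dummy_hash(i, 65537);
        stop = clock();

        printf("%lu hashes in %.3f CPU seconds\n", iterations,
               (double)(stop - start) / CLOCKS_PER_SEC);
        return 0;
    }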

"Best" hashes

The "best" hashes according to the speed test might be the optimized jenkins, the abcd and the rt hashes. Due to the bad test results we can rule out the rt hashes. Therefore there is another comparison of the abcd, abcdx, jenkins2, jenkins2b and random hashes: they are overall good hash functions. In the artificial tests the abcd hashes produce better results than the jenkins hashes. Looking at the maximal bucket length graphs, abcdx seems to produce smoother curves than abcd.

In my opinion the abcd or abcdx hash should be chosen as the new conntrack hash function.

Please correct me if I'm wrong somewhere. Any suggestions for improvement are welcome.

Jozsef Kadlecsik