Some Results on the Best Match Problem
The "best-match problem" is concerned with the complexity of finding the best match between a randomly chosen query word and the members of a randomly chosen set of data words. Of principal interest is whether it is possible to significantly reduce the search time required, as compared to exhaustive comparison, by use of memory redundancy (file structure). Minsky and Papert conjecture that "the speed-up value of large memory redundancies is very small, and for large data sets with long word lengths there are no practical alternatives to large searches that inspect large parts of memory". In this report we present two algorithms that do yield significant speed-up, although at the cost of large memory redundancies. (Whether these algorithms constitute counterexamples to the Minsky-Papert conjecture depends on one's interpretation of their term "large memory redundancies".) The algorithms are subjected to statistical analysis, and time-memory trade-off curves are given.
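To make the exhaustive-comparison baseline concrete, the following is a minimal sketch of brute-force best matching over binary words. The use of Hamming distance as the match criterion and the names `hamming` and `best_match` are illustrative assumptions, not details taken from the report.

```python
# Illustrative sketch (assumed representation): words are equal-length
# bit strings, and "best match" means minimum Hamming distance.

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def best_match(query: str, data: list[str]) -> str:
    """Exhaustive search: inspect every data word, O(N * word_length)."""
    return min(data, key=lambda w: hamming(query, w))

data = ["0110", "1011", "0001", "1100"]
print(best_match("0111", data))  # prints "0110", which differs in one bit
```

This is the search that inspects all of memory; the algorithms in the report aim to beat it by trading extra storage (file structure) for fewer inspections.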