In-set or not-in-set, that’s the question. Using Bloom filters for probabilistic set operations

Hello, wonderful people,

The more I learn about probabilistic systems, the more they fascinate me. Bloom filters are a key mechanism in such systems: they can tell you with 100% certainty that an element is not in a set, but a positive answer is only probably correct. This contract lets us build a Bloom filter of fixed size, even for an ever-growing set, while keeping lookup time at O(1). It is a concept you will find many applications for the moment you understand it. This week’s paper by Burton H. Bloom (hence the name Bloom filter) is the first description of the idea, on which all further research builds.
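To make that contract concrete, here is a minimal sketch in Python. It is my own illustration, not code from the paper; the class name BloomFilter, the size/num_hashes parameters and the double-hashing trick are assumptions for the sake of the example:

    import hashlib

    class BloomFilter:
        def __init__(self, size, num_hashes):
            self.size = size              # number of bits in the filter
            self.num_hashes = num_hashes  # number of hash functions k
            self.bits = [False] * size

        def _positions(self, item):
            # Derive num_hashes bit positions from two digests
            # (double hashing: position_i = h1 + i * h2 mod size).
            data = item.encode("utf-8")
            h1 = int.from_bytes(hashlib.md5(data).digest(), "big")
            h2 = int.from_bytes(hashlib.sha1(data).digest(), "big")
            return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            # False => the element is definitely not in the set.
            # True  => the element is probably in the set
            #          (false positives are possible).
            return all(self.bits[pos] for pos in self._positions(item))

Usage looks like this:

    bf = BloomFilter(size=1024, num_hashes=3)
    bf.add("alice")
    print(bf.might_contain("alice"))  # True (in rare cases a false positive)
    print(bf.might_contain("bob"))    # False here: a False answer is always definite

Note that the filter never stores the elements themselves, only bits, which is why its size stays fixed no matter how large the set grows.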


If you enjoy reading the Weekly CS Paper, I would be really thankful if you would support it with a few bucks: gum.co/weeklycspaper. The newsletter will stay free forever!

Software exists to create business value

I am Simon Frey, the author of the Weekly CS Paper Newsletter. And I have great news: You can work with me!

As CTO as a Service, I will help you choose the right technology for your company, build up your team and be a deeply technical sparring partner for your product development strategy.

Check out my website simon-frey.com to learn more, or directly contact me via the button below.

Let’s work together!

Abstract:

In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency. The new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods. In such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to “catch” the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods. Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.
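A short aside on the space/error trade-off the abstract describes: the now-standard estimate for the false-positive rate (derived in later literature, not in this paper) with m bits, k hash functions and n stored elements is p ≈ (1 − e^(−kn/m))^k. A quick sanity check in Python:

    import math

    def false_positive_rate(m_bits, k_hashes, n_items):
        # Standard approximation: p = (1 - e^(-k*n/m))^k
        return (1 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

    # Example: 10 bits per element with the near-optimal k = (m/n) * ln 2 ≈ 7
    print(false_positive_rate(10_000, 7, 1_000))  # ~0.008, i.e. under 1%

So roughly 10 bits per element already push the error rate below 1%, regardless of how large the individual elements are.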

Download Link:

https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.295.7552&rep=rep1&type=pdf


Additional Links:

Weekly in-depth computer science knowledge to become a better programmer. For free!
Over 2000 subscribers. One-click unsubscribe.