Abstract

Inference in probabilistic (or: Bayesian) networks is a problem with many applications in AI, for example in decision support systems. One way of solving it is the method of conditioning, which requires a loop cutset of the network. Finding an optimal loop cutset for a given network is NP-hard, but building on work by Bodlaender we show that it does have a kernel of cubic size. A kernelization is typically used as a first step in a 'fixed parameter tractable' algorithm, but it can also be used as an ordinary preprocessing heuristic by choosing a safe value for the parameter. We have implemented this kernelization algorithm and experimentally demonstrate that it is both efficient and effective: it handles networks with thousands of vertices in a couple of seconds and, even when used as just a regular preprocessing heuristic, significantly speeds up the subsequent calculation of an optimal loop cutset.
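To make the central notion concrete, here is a minimal sketch of the loop cutset problem. It simplifies the formal definition by treating a loop cutset as a set of vertices whose removal leaves the network's undirected skeleton acyclic (i.e., a feedback vertex set on the skeleton), and finds a smallest such set by brute force; the graph, function names, and this simplification are illustrative assumptions, not the paper's algorithm, and the exponential search is exactly what a kernelization would shrink before solving.

```python
from itertools import combinations

def is_forest(vertices, edges):
    # Union-find check: an undirected graph is a forest iff no edge
    # joins two vertices that are already in the same component.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # edge closes a cycle
        parent[ru] = rv
    return True

def min_loop_cutset(vertices, edges):
    # Brute force over candidate cutsets of increasing size: return the
    # first (hence smallest) set whose removal leaves a forest.
    for k in range(len(vertices) + 1):
        for cut in combinations(vertices, k):
            rest = set(vertices) - set(cut)
            kept = [(u, v) for u, v in edges if u in rest and v in rest]
            if is_forest(rest, kept):
                return set(cut)

# Example: the skeleton A-B, A-C, B-D, C-D contains one loop,
# so a single vertex suffices to break it.
cutset = min_loop_cutset(['A', 'B', 'C', 'D'],
                         [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')])
```

For the four-vertex example above the search returns a cutset of size 1 (removing, say, `A` leaves the forest `B-D`, `C-D`), illustrating why larger networks need the preprocessing the abstract describes.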
