Document Type

Article

Date

2000

Embargo Period

1-29-2012

Keywords

Distributed binary hypothesis testing with independent identical sensors, Sensor rules, Bayesian criterion, Neyman-Pearson criterion, Lagrange multiplier method, Quasiconvex

Language

English

Disciplines

Electrical and Computer Engineering

Description/Abstract

We consider the problem of distributed binary hypothesis testing with independent identical sensors. It is well known that for this problem the optimal sensor rules are likelihood ratio threshold tests and the optimal fusion rule is a K-out-of-N rule [1]. Under the Bayesian criterion, we show that for a fixed K-out-of-N fusion rule, the probability of error is a quasiconvex function of the likelihood ratio threshold used in the sensor decision rule. Therefore, the probability of error has a single minimum, achieved by a unique optimal threshold. We obtain a necessary and sufficient condition on the optimal threshold, except in some trivial situations where one hypothesis is always decided. We present a method for determining whether or not the solution is trivial. Under the Neyman-Pearson criterion, we show that when the Lagrange multiplier method is used for a fixed K-out-of-N fusion rule, the objective function is quasiconvex and hence has a single minimum point, and the resulting ROC is concave downward. These results are illustrated by means of three examples.
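The setup described in the abstract can be illustrated numerically. The sketch below is not from the paper; it assumes a simple Gaussian shift model (H0: x ~ N(0,1) vs. H1: x ~ N(1,1)), so a likelihood ratio threshold reduces to a threshold t on the observation. The function name error_probability and the parameter values (N=5, K=3, equal priors) are illustrative choices, not taken from the source. Sweeping t shows a single minimum of the Bayesian error probability, consistent with the quasiconvexity result stated above.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import comb

def error_probability(t, N=5, K=3, prior0=0.5, mu1=1.0):
    """Bayesian probability of error for a fixed K-out-of-N fusion rule with
    N independent identical sensors, each using the same threshold t.
    Assumes H0: x ~ N(0,1) and H1: x ~ N(mu1,1) at every sensor (illustrative
    model, not from the paper)."""
    # Per-sensor probabilities of declaring H1
    pf = norm.sf(t)            # false alarm: P(decide H1 | H0)
    pd = norm.sf(t - mu1)      # detection:  P(decide H1 | H1)

    # Fusion center declares H1 iff at least K of the N sensors declare H1
    def fuse(p):
        return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(K, N + 1))

    PF = fuse(pf)              # system-level false-alarm probability
    PD = fuse(pd)              # system-level detection probability
    return prior0 * PF + (1 - prior0) * (1 - PD)

# Sweep the threshold; the resulting curve has a single minimum, in line with
# the quasiconvexity of the error probability in the sensor threshold.
thresholds = np.linspace(-3.0, 4.0, 400)
pe = [error_probability(t) for t in thresholds]
best = int(np.argmin(pe))
print(f"approx. optimal threshold: {thresholds[best]:.3f}, min error: {pe[best]:.4f}")
```

For other K or N, the same sweep can be repeated; under this model the curve remains unimodal, which is what makes a simple one-dimensional search for the optimal threshold sufficient for a fixed fusion rule.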
