Backpropagation, Modular Networks, Classification Problems, Multiclass Problems, Feedforward Networks
One connectionist approach to the classification problem, which has gained popularity in recent years, is the use of backpropagation-trained feedforward neural networks. In practice, however, we find that the rate of convergence of net output error is especially low when training networks for multiclass problems. In this paper, we show that while backpropagation will reduce the Euclidean distance between the actual and desired output vectors, the difference between some components of these vectors may actually increase in the first iteration. Furthermore, the magnitudes of subsequent weight changes in each iteration are very small, so that many iterations are required to compensate for the error increased in some components during the initial iterations. We describe a modular network architecture that improves the rate of learning for such classification problems. Our basic approach is to reduce a K-class problem to a set of K two-class problems, with a separately trained network for each of the K problems. We also present results from several experiments comparing our new algorithm with standard backpropagation, and find that speedups of about one order of magnitude can be obtained.
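The one-per-class decomposition described above can be illustrated with a minimal sketch. Here each of the K two-class problems is solved by a single sigmoid unit trained by gradient descent on squared error, standing in for the paper's separately trained per-class networks; the toy data, function names, and hyperparameters are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary(X, t, epochs=2000, lr=1.0):
    """Train one sigmoid unit on a two-class (target 0/1) problem.

    Gradient descent on squared error 0.5 * sum((y - t)^2),
    differentiated through the sigmoid (illustrative stand-in for
    a separately trained two-class network).
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        y = sigmoid(X @ w + b)
        delta = (y - t) * y * (1.0 - y)   # dE/d(net input)
        w -= lr * X.T @ delta / len(X)
        b -= lr * delta.mean()
    return w, b

def train_modular(X, labels, K):
    """Reduce a K-class problem to K two-class problems:
    class k vs. the rest, each with its own network."""
    return [train_binary(X, (labels == k).astype(float)) for k in range(K)]

def predict(models, X):
    """Classify by the per-class unit with the highest output."""
    scores = np.stack([sigmoid(X @ w + b) for w, b in models], axis=1)
    return scores.argmax(axis=1)

# Toy, linearly separable 3-class problem (illustrative data).
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0],
              [1.1, 0.1], [0.0, 1.0], [0.1, 1.1]])
labels = np.array([0, 0, 1, 1, 2, 2])
models = train_modular(X, labels, K=3)
preds = predict(models, X)   # expected to recover the training labels
print(preds)
```

Because each unit sees a simpler two-class target, its error signal is not diluted across K output components, which is the intuition behind the modular architecture's faster convergence.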
Anand, Rangachari; Mehrotra, Kishan; Mohan, Chilukuri; and Ranka, Sanjay, "An Efficient Neural Algorithm for the Multiclass Problem" (1991). Electrical Engineering and Computer Science Technical Reports. Paper 136.