Identification of neural network input types and input ranking via sensitivity analysis
Date of Award
Doctor of Philosophy (PhD)
Electrical Engineering and Computer Science
Input types, Sensitivity analysis, Neural network, Partially connected
Electrical and Computer Engineering | Industrial Engineering
In this dissertation we introduce methods for identifying input types and for determining an input ranking via sensitivity analysis. For input-output mapping (IOM) problems, fully connected neural networks (FCNNs) have commonly been used as a matter of course, since they usually require no a priori information about the data. Because of the "black-box" nature of their training, however, FCNNs may contain unnecessary connections between the input and hidden layers, which make computation expensive and prolong training. We introduce a method for developing partially connected neural networks (PCNNs) by removing these unnecessary connections from FCNNs. The resulting PCNNs perform almost as well as FCNNs, and in some cases even better. They are structured by identifying input types (ITs), which, to the best of our knowledge, had not previously been studied.
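The idea of deriving a PCNN from an FCNN by removing input-to-hidden connections can be illustrated with a minimal sketch. The dissertation derives the removed connections from the identified input types; here, as a stand-in assumption, a simple magnitude threshold on a hypothetical trained weight matrix is used to zero out connections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained FCNN input-to-hidden weights: 4 inputs, 3 hidden units.
# (Illustrative values only; not from the dissertation.)
W_in = rng.normal(size=(3, 4))

# Remove "unnecessary" connections by zeroing weights below a threshold.
# The dissertation instead derives the connectivity from identified input
# types; the threshold here is only a stand-in for some removal criterion.
threshold = 0.5
mask = np.abs(W_in) >= threshold
W_pcnn = W_in * mask

def hidden_activations(x, W):
    # One forward step through the (partially) connected input layer.
    return np.tanh(W @ x)

x = np.array([1.0, -0.5, 0.3, 2.0])
h_full = hidden_activations(x, W_in)      # FCNN hidden activations
h_partial = hidden_activations(x, W_pcnn) # PCNN hidden activations
```

Zeroing a weight is equivalent to deleting that connection, so the PCNN computes its hidden layer with fewer effective multiplications than the FCNN.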
We identify ITs by analyzing the changes in input sensitivities caused by amplifying one input at a time by a specific ratio. We present a mathematical formulation of IT identification that reveals regularities in the sensitivity changes of inputs according to their types. For instance, uncoupled inputs are not affected by the amplification of other inputs, whereas coupled inputs are mutually affected by the amplification of any one of them.
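The coupled/uncoupled distinction can be seen in a toy example. Assuming a simple stand-in mapping (not a trained network) in which x1 and x2 are coupled through a product term while x3 enters independently, amplifying x1 changes the sensitivity of the coupled input x2 but leaves the uncoupled input x3 untouched.

```python
import numpy as np

# Toy mapping with a coupled pair (x1, x2) and an uncoupled input x3.
# (Illustrative stand-in for a trained network's input-output mapping.)
def f(x):
    return x[0] * x[1] + np.sin(x[2])

def sensitivity(f, x, i, eps=1e-5):
    """Central-difference sensitivity of f to input i at point x."""
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

x = np.array([1.0, 2.0, 0.5])
base = [sensitivity(f, x, i) for i in range(3)]

# Amplify x1 by a specific ratio and recompute all sensitivities.
ratio = 1.5
x_amp = x.copy()
x_amp[0] *= ratio
after = [sensitivity(f, x_amp, i) for i in range(3)]

# after[1] scales with the ratio (x2 is coupled with x1), while
# after[2] equals base[2] (x3 is uncoupled).
```

Here the sensitivity to x2 is x1 itself, so it grows by exactly the amplification ratio, while the sensitivity to x3, cos(x3), is unchanged; this is the regularity pattern the identification method exploits.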
In feedforward neural networks, all inputs contribute to a greater or lesser extent when the outputs are calculated, so the inputs can be ordered from the greatest contributor to the least. We present a new method for determining the input ranking of three-layered PCNNs that addresses several issues identified in the related literature. Specifically, the sensitivity of an input depends on the magnitudes of the other inputs, which obscures the true ranking; in addition, some existing methods require a large number of hidden-layer neurons to obtain a statistical input ranking that represents the entire input space.
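A common baseline for such rankings, sketched below under the assumption of a toy mapping rather than a trained PCNN, averages the absolute sensitivity of each input over sampled points of the input space and sorts inputs by that average. This is the kind of statistical ranking the text refers to, not the dissertation's specific method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mapping in which input 0 dominates, input 1 is moderate,
# and input 2 contributes least. (Illustrative values only.)
def f(x):
    return 3.0 * x[0] + 0.5 * x[1] ** 2 + 0.1 * np.sin(x[2])

def sensitivity(f, x, i, eps=1e-5):
    # Central-difference sensitivity of f to input i at point x.
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

# Average absolute sensitivity over samples covering the input space.
samples = rng.uniform(-1.0, 1.0, size=(200, 3))
mean_abs = np.array([
    np.mean([abs(sensitivity(f, x, i)) for x in samples]) for i in range(3)
])
ranking = np.argsort(-mean_abs)  # greatest contributor first
```

Averaging over many sample points, rather than evaluating sensitivity at a single operating point, is what guards against the dependence on other inputs' magnitudes that would otherwise obscure the ranking.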
Several examples, both simulated and drawn from real data, notably the blood pressure estimation problem that inspired these methods, are given to demonstrate how well our methods work.
SURFACE provides description only. Full text is available to ProQuest subscribers. Ask your Librarian for assistance.
Kang, Sanggil, "Identification of neural network input types and input ranking via sensitivity analysis" (2002). Electrical Engineering and Computer Science - Dissertations. Paper 103.