What is PNN?
PNN or “Probabilistic Neural Network” is Donald Specht’s term for kernel discriminant analysis. You can think of it as a normalized RBF network in which there is a hidden unit centered at every training case. These RBF units are called “kernels” and are usually probability density functions such as the Gaussian. The hidden-to-output weights are usually 1 or 0; for each hidden unit, a weight of 1 is used for the connection going to the output that the case belongs to, while all other connections are given weights of 0. Alternatively, you can adjust these weights for the prior probabilities of each class. So the only weights that need to be learned are the widths of the RBF units. These widths (often a single width is used) are called “smoothing parameters” or “bandwidths” and are usually chosen by cross-validation or by more esoteric methods that are not well-known in the neural net literature; gradient descent is not used.

Specht’s claim that a PNN trains 100,000 times faster than backprop is at best misleading: the training work merely moves into choosing the smoothing parameter, which itself requires many passes over the data, and computing each output requires evaluating a kernel at every training case.
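
As a concrete illustration of both the architecture and bandwidth selection by cross-validation, here is a minimal sketch in Python with NumPy. The names pnn_predict and choose_bandwidth are invented for this example (they are not part of any library), and it assumes Gaussian kernels, a single shared bandwidth, and class priors ordered to match np.unique(y_train):

    import numpy as np

    def pnn_predict(X_train, y_train, X_test, sigma, priors=None):
        """Classify X_test with a Gaussian-kernel PNN: one kernel centered
        at every training case; hidden-to-output weights are 1 for the
        case's own class and 0 otherwise, optionally scaled by priors."""
        classes = np.unique(y_train)
        # Squared distances between every test case and every training case.
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        k = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel activations
        scores = np.empty((len(X_test), len(classes)))
        for j, c in enumerate(classes):
            # Averaging the kernels of class c estimates its class-conditional
            # density; the (2*pi*sigma^2)^(d/2) normalizer is omitted because
            # it is identical for all classes when sigma is shared.
            scores[:, j] = k[:, y_train == c].mean(axis=1)
            if priors is not None:
                scores[:, j] *= priors[j]      # adjust for class priors
        return classes[scores.argmax(axis=1)]

    def choose_bandwidth(X, y, candidates):
        """Pick the bandwidth with the fewest leave-one-out errors:
        classify each training case using all the other cases."""
        best_sigma, best_err = None, np.inf
        for sigma in candidates:
            errors = 0
            for i in range(len(X)):
                keep = np.arange(len(X)) != i  # leave case i out
                pred = pnn_predict(X[keep], y[keep], X[i:i + 1], sigma)
                errors += int(pred[0] != y[i])
            if errors < best_err:
                best_sigma, best_err = sigma, errors
        return best_sigma

    # Two well-separated Gaussian classes as toy data.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.repeat([0, 1], 50)
    sigma = choose_bandwidth(X, y, [0.1, 0.3, 1.0, 3.0])
    print(sigma, pnn_predict(X, y, np.array([[0., 0.], [3., 3.]]), sigma))

Note that each prediction touches every training case, which is why computing an output with a PNN is much slower than with a small feedforward net, and why the leave-one-out loop above, although it uses no gradient descent, is far from free.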