Forward and Backward Propagation in Neural Networks.
Keywords: Recurrent Neural Networks, hidden layers

Abstract
The most common method of training a neural network is to propagate the observation vectors through it one after another and to determine the weight coefficients so that the output values are as close as possible to the required data. This is called supervised learning, because for each observation vector we have a desired result, and we accordingly require the network's output to be close to that value. It is possible to construct an algorithm that finds the weight coefficients in the best way, that is, with maximum speed and with outputs as close as possible to the required result.
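To illustrate this fitting procedure, the following is a minimal sketch, not the paper's algorithm: a single-layer network whose weight coefficients are adjusted by gradient descent so that its outputs approach the desired results. The data, sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data: each observation vector x has a desired result y.
X = rng.normal(size=(100, 3))                  # 100 observation vectors, 3 features
true_w = np.array([0.5, -1.0, 2.0])            # hypothetical "true" weights
y = X @ true_w + 0.1 * rng.normal(size=100)    # desired outputs (with noise)

w = np.zeros(3)        # weight coefficients to be determined
lr = 0.05              # learning rate (assumed step size)

for epoch in range(200):
    y_hat = X @ w                    # forward pass: propagate the observations
    error = y_hat - y                # deviation from the desired values
    grad = X.T @ error / len(y)      # gradient of the mean squared error
    w -= lr * grad                   # adjust weights toward the desired result

print("learned weights:", w)         # approaches [0.5, -1.0, 2.0]
```

The loop makes explicit the two ingredients named in the abstract: propagating each observation vector through the network, and updating the weights so that the outputs move closer to the required values.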
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.