Budget Amount
¥2,100,000 (Direct Cost : ¥2,100,000)
Fiscal Year 1999 : ¥300,000 (Direct Cost : ¥300,000)
Fiscal Year 1998 : ¥900,000 (Direct Cost : ¥900,000)
Fiscal Year 1997 : ¥900,000 (Direct Cost : ¥900,000)
Learning models built from finitely many multi-resolution functions, for example neural networks, Gaussian mixtures, and finite wavelet expansions, are widely used in pattern recognition, robotic control, and time-sequence prediction. However, their mathematical foundation has not been established, because they are neither linear nor regular statistical models. In this research we clarified two aspects of their mathematical properties: (1) their function approximation abilities, and (2) their statistical estimation efficiencies.
(1) It is well known that the function approximation errors of these models depend on the topology of the function space. In this research, we proposed a method for clarifying the function approximation errors under the assumption that the target functions are drawn at random from a probability measure on the function space. Under this assumption, we proved that the average function approximation error is determined by the covariance of the coefficients of the functions, or equivalently by the sparseness of the functions. This result gives a criterion for whether multi-resolution analysis is useful or not.
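The claim that the average approximation error is determined by the covariance of the coefficients can be illustrated numerically. The sketch below is our own illustration, not the report's construction: it assumes an orthonormal basis, independent Gaussian coefficients with a hypothetical decaying variance profile, and a truncation to the first N terms, so that the average squared error equals the summed variances of the discarded coefficients.

```python
import numpy as np

# Illustration (assumed setup, not from the report): target functions
# f = sum_k a_k psi_k with independent random coefficients a_k whose
# variances decay like 1/k^2. For an orthonormal basis, the squared L2
# error of keeping only the first N terms is the squared norm of the
# discarded coefficients, so its average is the sum of their variances.

rng = np.random.default_rng(0)

K = 64                                          # total number of coefficients
variances = 1.0 / (np.arange(1, K + 1) ** 2)    # assumed decay profile
N = 16                                          # terms kept by the approximator

trials = 20000
coeffs = rng.normal(0.0, np.sqrt(variances), size=(trials, K))
errors = (coeffs[:, N:] ** 2).sum(axis=1)       # squared error per random target

empirical = errors.mean()
theoretical = variances[N:].sum()
print(empirical, theoretical)  # the two agree up to Monte Carlo noise
```

A faster variance decay (a sparser coefficient covariance) makes the same truncation far more accurate, which is the sense in which the covariance decides whether the multi-resolution expansion is useful.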
(2) In order to clarify the statistical estimation errors, we showed that the stochastic complexity of a learning model is determined by the deepest singularities of the model, and we developed an algorithm for calculating the learning efficiency based on the Bernstein-Sato b-function and Hironaka's resolution of singularities.
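The role of the singularities can be stated in the standard form from singular learning theory; the display below is our summary of the known asymptotic expansion, not a formula quoted from the report.

```latex
% Asymptotic stochastic complexity of a (possibly singular) learning model
% with Kullback divergence H(w) to the true distribution and prior \varphi(w):
F(n) \;=\; n S_n \;+\; \lambda \log n \;-\; (m-1)\log\log n \;+\; O_p(1),
% where \lambda and m are read off from the zeta function
\zeta(z) \;=\; \int H(w)^{z}\,\varphi(w)\,dw .
% Hironaka's resolution of singularities guarantees that \zeta(z) has a
% meromorphic continuation whose poles are negative rational numbers;
% if its largest pole is z = -\lambda with order m, these are the constants
% above, and the Bernstein-Sato b-function provides a way to compute them.
% For a regular model of dimension d one recovers \lambda = d/2, m = 1;
% at deeper singularities \lambda < d/2, i.e. the model learns more
% efficiently than the regular theory predicts.
```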
Problems for future work are to develop a method for estimating the functional probability measure of images and sounds, and to establish the mathematical foundation of the maximum likelihood method for these models.