Statistical theory for Gaussian process function approximation based on the theory of image restoration, and its application
Project/Area Number | 14580438
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Intelligent informatics
Research Institution | RIKEN (The Institute of Physical and Chemical Research)
Principal Investigator | OKADA Masato, RIKEN (The Institute of Physical and Chemical Research), Laboratory for Mathematical Neuroscience, Deputy Laboratory Head (90233345)
Project Period (FY) | 2002 – 2003
Project Status | Completed (Fiscal Year 2003)
Budget Amount | ¥3,700,000 (Direct Cost: ¥3,700,000)
Fiscal Year 2003: ¥1,600,000 (Direct Cost: ¥1,600,000)
Fiscal Year 2002: ¥2,100,000 (Direct Cost: ¥2,100,000)
Keywords | function approximation / image restoration / Gaussian model / Fourier transformation / singularity / statistical mechanics / learning machine / on-line learning / translational symmetry
Research Abstract |
In the frameworks of image restoration and function approximation, the additive noise has usually been assumed to be spatially uncorrelated. We investigated the use of Bayesian inference to restore noise-degraded images under conditions of spatially correlated noise. We derived the expected value of the restored image and obtained the optimal hyperparameters. We then examined whether the conventional spatially uncorrelated noise model could cope with spatially correlated noise, and found that it could not. Furthermore, we studied hyperparameter estimation based on the maximum marginalized likelihood method and found that an iterative algorithm for obtaining the maximum does not converge; we attributed this to singularities in the model.

We therefore concentrated on parameter estimation for hierarchical models with singular structure. Using two-layer neural networks, we investigated the influence of singularities on the dynamics of standard gradient learning and natural gradient learning under various learning conditions. In standard gradient learning we found a quasi-plateau phenomenon, which in some cases is more severe than the well-known plateau. The slow convergence due to the quasi-plateau and the plateau becomes extremely serious when the optimal point lies in a neighborhood of a singularity. In natural gradient learning, however, neither the quasi-plateau nor the plateau is observed, and the convergence speed is hardly affected by the singularities.
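The restoration setting above admits a compact numerical illustration. The following is a minimal sketch, not the project's actual model: posterior-mean (Wiener-type) restoration of a 1-D signal under a stationary Gaussian model, where the Fourier transform diagonalizes both the signal and the spatially correlated noise covariances. The test signal, smoothing kernel, noise level, and the shortcut of using the true signal spectrum (in practice the hyperparameters would be estimated, e.g., by marginalized likelihood) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Ground-truth "image" (1-D for brevity): a smooth signal.
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
x_true = np.sin(t) + 0.5 * np.sin(3 * t)

# Spatially correlated Gaussian noise: white noise smoothed by a Gaussian
# kernel, so neighbouring pixels share noise (the situation studied here).
sigma = 0.4
white = rng.normal(0.0, sigma, N)
kernel = np.exp(-0.5 * (np.arange(N) - N // 2) ** 2 / 2.0 ** 2)
kernel /= kernel.sum()
K = np.fft.fft(np.fft.ifftshift(kernel))        # kernel transfer function
noise = np.real(np.fft.ifft(np.fft.fft(white) * K))
y = x_true + noise

# Stationary Gaussian model => prior and noise covariance are both diagonal
# in the Fourier basis.  The posterior mean per frequency is the Wiener-type
# ratio of the signal spectrum to the signal-plus-noise spectrum.
S_x = np.abs(np.fft.fft(x_true)) ** 2 / N       # illustrative: true spectrum assumed known
S_n = sigma ** 2 * np.abs(K) ** 2               # spectrum of the correlated noise
x_hat = np.real(np.fft.ifft(S_x / (S_x + S_n + 1e-12) * np.fft.fft(y)))

print("MSE noisy   :", np.mean((y - x_true) ** 2))
print("MSE restored:", np.mean((x_hat - x_true) ** 2))
```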
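The learning-dynamics result can be sketched in the same spirit. Below, a two-hidden-unit tanh student learns a single-unit teacher, so the student is over-realizable and its parameter space contains singular regions; training starts near the singular line J1 = J2, the regime where, per the abstract, standard gradient learning suffers a (quasi-)plateau while natural gradient learning does not. The network sizes, hyper-parameters, and the empirical-Fisher preconditioner are illustrative assumptions, not the study's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                                    # input dimension (illustrative)
# Teacher: a single tanh unit -> the two-unit student is over-realizable,
# so its parameter space contains singular regions (J1 = J2, or w_k = 0).
B = rng.normal(size=d); B /= np.linalg.norm(B)

def student(J, w, X):
    return np.tanh(X @ J.T) @ w          # f(x) = sum_k w_k tanh(J_k . x)

def grads(J, w, X, y):
    h = np.tanh(X @ J.T)                 # hidden activations, shape (batch, 2)
    err = student(J, w, X) - y
    gw = h.T @ err / len(y)              # gradient of (1/2) mean squared error
    gJ = ((err[:, None] * (1 - h ** 2) * w).T @ X) / len(y)
    return gJ, gw

def per_example_grad(J, w, X):
    # Jacobian of f w.r.t. all parameters, one row per example (for the Fisher).
    h = np.tanh(X @ J.T)
    dJ = (w * (1 - h ** 2))[:, :, None] * X[:, None, :]
    return np.concatenate([dJ.reshape(len(X), -1), h], axis=1)

def train(natural, steps=3000, eta=0.05, batch=200):
    # Start close to the singular line J1 = J2: this is where the
    # (quasi-)plateau of standard gradient learning shows up.
    J = np.tile(B, (2, 1)) + 0.01 * rng.normal(size=(2, d))
    w = np.array([0.6, 0.6])
    for step in range(steps):
        X = rng.normal(size=(batch, d))
        y = np.tanh(X @ B)
        gJ, gw = grads(J, w, X, y)
        g = np.concatenate([gJ.ravel(), gw])
        if natural:
            # Natural gradient: precondition by the inverse empirical Fisher
            # G = E[grad f grad f^T]; a small ridge keeps the inverse stable
            # near the singularity, where G is nearly rank-deficient.
            P = per_example_grad(J, w, X)
            G = P.T @ P / batch + 1e-3 * np.eye(P.shape[1])
            g = np.linalg.solve(G, g)
        J -= eta * g[:2 * d].reshape(2, d)
        w -= eta * g[2 * d:]
        if step % 500 == 0:
            print(f"{'natural' if natural else 'standard':8s} "
                  f"step {step:4d}  loss {np.mean((student(J, w, X) - y) ** 2):.5f}")

train(natural=False)   # standard gradient: convergence stalls near the singularity
train(natural=True)    # natural gradient: convergence hardly affected
```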