Learning of a sparse code by a constrained optimization
Project/Area Number | 19700219
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Single-year Grants
Research Field | Sensitivity informatics/Soft computing
Research Institution | Kyoto University (2008-2009), Nara Institute of Science and Technology (2007)
Principal Investigator | MAEDA Shin-ichi, Kyoto University, Graduate School of Informatics, Assistant Professor (20379530)
Project Period (FY) | 2007 – 2009
Project Status | Completed (Fiscal Year 2009)
Budget Amount | ¥2,600,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥300,000)
Fiscal Year 2009: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2008: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2007: ¥1,300,000 (Direct Cost: ¥1,300,000)
Keywords | hierarchical neural network / Contrastive Divergence Learning / constrained optimization / EM algorithm / sparse coding / hierarchical model / feature extraction / contrastive divergence / neural network / Boltzmann machine / autoencoder / pretraining / free energy minimization / companding function
Research Abstract | Hinton et al. (2006) showed that the performance of hierarchical neural networks could be greatly improved by an innovation in the learning algorithm and training with a large amount of data. In this research project, I aimed to clarify the causes that enable such efficient learning. As a result, I succeeded in clearly showing why learning by the EM algorithm freezes in some cases, generalized the contrastive divergence learning used to train hierarchical neural networks, and newly derived the convergence condition of the algorithm. We also successfully demonstrated that an efficient coding of natural images can be obtained by learning based on a constrained optimization.
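The project's titular technique, learning a sparse code by constrained optimization, can be illustrated with a minimal sketch. The example below is not the investigator's actual algorithm: it is a generic NumPy formulation that minimizes a reconstruction error with an L1 penalty on the codes while constraining each dictionary column to unit norm; the function names, step sizes, and data dimensions are illustrative assumptions.

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_coding(X, n_atoms=64, lam=0.1, n_outer=50, lr=0.01, seed=0):
    # Alternate between sparse inference on the codes S (ISTA steps) and a
    # projected gradient step on the dictionary D (unit-norm column constraint).
    rng = np.random.default_rng(seed)
    n_dim, n_samples = X.shape
    D = rng.standard_normal((n_dim, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)      # project onto the constraint set
    S = np.zeros((n_atoms, n_samples))
    for _ in range(n_outer):
        # Inference: minimize 0.5*||X - D S||^2 + lam*||S||_1 over S.
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)  # 1 / Lipschitz constant
        for _ in range(10):
            S = soft_threshold(S - step * (D.T @ (D @ S - X)), step * lam)
        # Learning: gradient step on D, then re-project each column to unit norm.
        D -= lr * ((D @ S - X) @ S.T)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-8
    return D, S

if __name__ == "__main__":
    # Toy data standing in for whitened natural-image patches.
    X = np.random.default_rng(1).standard_normal((16 * 16, 500))
    D, S = sparse_coding(X)
    print("mean nonzero codes per sample:", float((np.abs(S) > 1e-8).sum(0).mean()))

In this sketch the L1 penalty plays the role of the coding cost, and the unit-norm constraint on the dictionary columns removes the scale ambiguity between D and S.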
Report (4 results)
Research Products (20 results)
[Presentation] Sparse Bayesian Resolution Synthesis (2008)
Author(s) | Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii
Organizer | The 18th Annual Conference of the Japanese Neural Network Society
Place of Presentation | Tsukuba
Year and Date | 2008-09-25