2014 Fiscal Year Final Research Report
The Minimum Description Length Principle and Learning Theory
Project/Area Number |
24500018
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Multi-year Fund |
Section | General |
Research Field |
Fundamental theory of informatics
|
Research Institution | Kyushu University |
Principal Investigator |
TAKEUCHI Junichi Kyushu University, Graduate School of Information Science and Electrical Engineering, Professor (80432871)
|
Co-Investigator (Renkei-kenkyusha) |
ONO Hirotaka Kyushu University, Faculty of Economics, Associate Professor (00346826)
|
Research Collaborator |
BARRON Andrew R. Yale University, Dept. of Statistics, Professor
|
Project Period (FY) |
2012-04-01 – 2015-03-31
|
Keywords | minimum description length principle / MDL / stochastic complexity / exponential family / tree source model / local exponential family bundle / Jeffreys prior |
Outline of Final Research Achievements |
Concerning the minimum description length (MDL) principle, we studied the evaluation of the stochastic complexity (SC) and the data compression and sequential prediction strategies that achieve the SC for various target models. These algorithms perform as minimax strategies with respect to logarithmic loss (coding regret). We obtained minimax strategies when the target model is an i.i.d. general smooth family of probability distributions. Our strategy combines the mixture of the target class under the Jeffreys prior with a mixture over the local exponential family bundle of the target class. In particular, we succeeded in removing the restriction on the set of data sequences for the mixture-family case and the tree-model case. We also studied sparse superposition codes (Barron et al., 2011-), one class of capacity-achieving codes.
|
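As a concrete illustration of a Jeffreys-mixture strategy achieving near-minimax coding regret, the sketch below uses the simplest instance, the Bernoulli family, where the Jeffreys (Beta(1/2, 1/2)) mixture is the classical Krichevsky-Trofimov estimator. This is an illustrative special case only, not the project's general construction with local exponential family bundles; the regret constant quoted in the comment is the standard asymptotic value for this family.

```python
import math

def kt_codelength(xs):
    # Sequential code length (in nats) of the Jeffreys-mixture code for
    # Bernoulli sources: the Krichevsky-Trofimov predictor assigns
    # P(x_{t+1}=1 | past) = (ones + 1/2) / (t + 1).
    L, ones = 0.0, 0
    for t, x in enumerate(xs):
        p1 = (ones + 0.5) / (t + 1.0)
        L -= math.log(p1 if x == 1 else 1.0 - p1)
        ones += x
    return L

def ml_codelength(xs):
    # Code length of the best i.i.d. Bernoulli model chosen in hindsight
    # (maximum-likelihood parameter), the comparator in the regret.
    n, k = len(xs), sum(xs)
    if k in (0, n):
        return 0.0
    p = k / n
    return -(k * math.log(p) + (n - k) * math.log(1.0 - p))

xs = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 10  # n = 100 bits
regret = kt_codelength(xs) - ml_codelength(xs)
# Asymptotically the minimax regret here is (1/2) log n + (1/2) log(pi/2),
# roughly 2.5 nats for n = 100; the observed regret stays close to this.
```

The (1/2) log n term is the one-parameter case of the (k/2) log n growth of the stochastic complexity for a k-parameter smooth family.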
Free Research Field |
Information theory and machine learning
|