Dimensionality Reduction for Designing Online Algorithms
Project/Area Number | 15500001
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Fundamental theory of informatics
Research Institution | Tohoku University
Principal Investigator | TAKIMOTO Eiji, Tohoku University, Graduate School of Information Sciences, Associate Professor (50236395)
Project Period (FY) | 2003 – 2004
Project Status | Completed (Fiscal Year 2004)
Budget Amount | ¥2,900,000 (Direct Cost: ¥2,900,000)
Fiscal Year 2004: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 2003: ¥2,100,000 (Direct Cost: ¥2,100,000)
Keywords | online prediction / kernel method / data compression / dimensionality reduction / Boosting / risk information / weight-update algorithm / kernel / probabilistic model
Research Abstract |
A number of methods have been developed for predicting nearly as well as the best predictor in a given set of experts. These methods share the same mechanism: predictions are formed as a weighted average of the experts' advice. In many natural applications, however, the experts to be combined are exponentially or even infinitely many, so explicitly maintaining a weight for every expert is computationally infeasible. In this research, we proposed a method of maintaining a parameter vector in a low-dimensional space that implicitly represents the weight vector and allows the weighted-average prediction to be simulated efficiently. The major results of the project are as follows. (1) For the class of exponentially many predictors associated with the paths of a graph, we gave a method of efficiently simulating the weighted-average prediction by maintaining probabilistic flows on the edges. This yields a new kernel, called the path kernel, which turned out to be useful in many applications. (2) We proposed a new Boosting scheme that repeatedly divides and merges the domain so that the final hypothesis forms a decision diagram. This gives a unified framework in which AdaBoost-type and decision-tree-type algorithms, previously thought to rest on quite different principles, can be analyzed together. (3) We generalized the model so that the learner may see bounds on the experts' losses (risk information), and gave a tight performance bound for the Aggregating Algorithm in this setting.
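To make result (1) concrete, below is a minimal, hypothetical Python sketch (not the project's code) of exponential-weight prediction over all source-to-sink paths of a small DAG. Each path plays the role of an expert whose loss is the sum of its edge losses; multiplying each edge weight by exp(-eta * loss) implicitly updates all exponentially many path weights at once, and the total path weight (the normalizer of the weighted average) is recovered by dynamic programming over edges rather than paths. The graph, losses, and learning rate here are illustrative assumptions.

```python
import math

# DAG as adjacency lists: node -> list of (child, edge_id).
# Toy graph with two s-to-t paths (illustrative assumption).
graph = {
    "s": [("a", 0), ("b", 1)],
    "a": [("t", 2)],
    "b": [("t", 3)],
    "t": [],
}
edge_weight = {e: 1.0 for e in range(4)}  # w_e; all paths start equally weighted
eta = 0.5                                 # learning rate (illustrative)

def total_path_weight(node):
    """Sum over all node-to-t paths of the product of their edge weights,
    computed recursively over edges instead of enumerating paths."""
    if node == "t":
        return 1.0
    return sum(edge_weight[e] * total_path_weight(child)
               for child, e in graph[node])

def paths(node, acc=()):
    """Enumerate all node-to-t paths as tuples of edge ids (toy-scale check)."""
    if node == "t":
        yield acc
    for child, e in graph[node]:
        yield from paths(child, acc + (e,))

def edge_flow(e_star):
    """Fraction of total path weight carried by paths through e_star:
    the 'probabilistic flow' on that edge (brute force, for clarity)."""
    total = mass = 0.0
    for p in paths("s"):
        w = math.prod(edge_weight[e] for e in p)
        total += w
        if e_star in p:
            mass += w
    return mass / total

# One prediction round: each edge incurs a loss, and the multiplicative
# update on edges implicitly reweights every path expert at once.
edge_loss = {0: 0.2, 1: 1.0, 2: 0.1, 3: 0.6}  # illustrative losses
for e, loss in edge_loss.items():
    edge_weight[e] *= math.exp(-eta * loss)

print(total_path_weight("s"))  # normalizer over all path experts
print(edge_flow(0))            # updated flow on edge 0
```

The per-edge flows computed this way are the quantities an efficient implementation would maintain incrementally; the brute-force path enumeration above is only for checking the idea on a toy graph.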
Report | (3 results)
Research Products | (37 results)