1994 Fiscal Year Final Research Report Summary
Efficient learning algorithms based on information compression
Project/Area Number | 05452349 |
Research Category | Grant-in-Aid for General Scientific Research (B) |
Allocation Type | Single-year Grants |
Research Field | Computer Science |
Research Institution | Tohoku University |
Principal Investigator | JIMBO Shuji, Tohoku Univ., Faculty of Engineering, Assistant (00226391) |
Co-Investigator (Kenkyū-buntansha) | ASO Hirotomo, Tohoku Univ., Faculty of Engineering, Professor (10005522); TAKIMOTO Eiji, Tohoku Univ., Graduate School of Information Sciences, Assistant (50236395); MARUOKA Akira, Tohoku Univ., Graduate School of Information Sciences, Professor (50005427) |
Project Period (FY) | 1993 – 1994 |
Keywords | PAC learning model / Learning algorithm / Information compression / VC dimension / Monotonicity / Conservativeness / Monotone DNF / Character feature for pattern matching |
Research Abstract |
1. Information compressing and gaining mechanisms in the learning process. In the PAC learning model, a learning algorithm is expected to produce a hypothesis that approximates a target function by using a sequence of examples of the target. On the other hand, the notion of an information compressing algorithm, called an Occam algorithm, has been introduced, and its relation to PAC learning algorithms has been investigated: it is known that an Occam algorithm is immediately a PAC learning algorithm, while a PAC learning algorithm can be modified to obtain a randomized Occam algorithm. We investigate the relationship between these two types of algorithms. We show, by giving a counterexample, that a PAC learning algorithm is not necessarily an Occam algorithm. Reasonable conditions, called conservativeness and monotonicity, which natural PAC learning algorithms are expected to satisfy, are introduced, and it is conjectured that a PAC learning algorithm immediately becomes an Occam algorithm under either of these two conditions. Although the conjecture has not been proved so far, the statement is verified to hold under some technical conditions. Furthermore, a notion of an information gaining algorithm is introduced, and its relation to PAC learning algorithms and information compressing algorithms is explored.
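For reference, the quantitative form of the "an Occam algorithm is immediately a PAC learning algorithm" direction is standard in the literature (Blumer, Ehrenfeucht, Haussler, and Warmuth); the statement and bound below are quoted from that general theory, not from the report itself, and the report's own definitions may differ in detail.

```latex
% Standard Occam's-razor bound, stated for reference; the symbols
% follow the usual PAC conventions and are not taken from the report.
% An (\alpha,\beta)-Occam algorithm, given m examples of a target
% concept of size s over n variables, outputs a consistent hypothesis
% of size at most (ns)^{\alpha} m^{\beta}, where \alpha \ge 1 and
% 0 \le \beta < 1.  Such an algorithm is a PAC learner whenever
\[
  m \;\ge\; c_0 \left(
      \frac{1}{\varepsilon}\ln\frac{1}{\delta}
      \;+\;
      \left( \frac{(ns)^{\alpha}}{\varepsilon} \right)^{\!\frac{1}{1-\beta}}
  \right),
\]
% for a suitable constant c_0, with \varepsilon the accuracy and
% \delta the confidence parameter of the PAC model.
```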
2. Learning of disjunctive normal form formulae. In computational learning theory, whether or not disjunctive normal form (DNF) formulae are learnable from examples is one of the most important open problems. The main results obtained are as follows: monotone DNF formulae with log n terms are learnable from positive examples, and a new class of Boolean functions, called kappa term functions, is learnable from examples (a simplified illustration of learning from positive examples is sketched after this abstract).

3. Computational complexity and approximate computation. The computational resources needed for learning depend on the complexity of the target. Various issues concerning the computational complexity of target functions, such as Boolean complexity, approximate computation, and pseudo-randomness, are investigated.

4. Extracting character features for pattern matching. When character recognition is implemented by the method of pattern matching, how to construct a feature vector for each character pattern is crucial. From the viewpoint of information compression, the problem of defining feature vectors for precise recognition is investigated (an illustrative feature-extraction sketch follows below).
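As a concrete, drastically simplified illustration of learning from positive examples (topic 2 above), the sketch below learns a single-term monotone DNF, that is, a monotone conjunction, by intersecting the positive examples. This is a textbook special case, not the project's algorithm for log n-term monotone DNF; all function names here are our own.

```python
# Minimal sketch: learning a monotone conjunction (a 1-term monotone
# DNF) from positive examples only.  Intersecting the positives yields
# the most specific consistent conjunction.

def learn_monotone_conjunction(positive_examples):
    """Return the set of variable indices that are 1 in every positive
    example; the hypothesis is the conjunction of those variables."""
    if not positive_examples:
        raise ValueError("need at least one positive example")
    n = len(positive_examples[0])
    relevant = set(range(n))
    for x in positive_examples:
        relevant &= {i for i in range(n) if x[i] == 1}
    return relevant

def hypothesis(relevant, x):
    """Evaluate the learned conjunction on an assignment x."""
    return all(x[i] == 1 for i in relevant)

if __name__ == "__main__":
    # Target: x0 AND x2 over 4 variables; all positives satisfy it.
    positives = [(1, 0, 1, 0), (1, 1, 1, 0), (1, 0, 1, 1)]
    h = learn_monotone_conjunction(positives)
    print(sorted(h))                     # [0, 2]
    print(hypothesis(h, (1, 1, 1, 1)))   # True
    print(hypothesis(h, (0, 1, 1, 1)))   # False
```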
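For topic 4, the report does not specify how its feature vectors are defined, so the following sketch uses a standard "zoning" feature (mean ink density per grid cell), a common way to compress a character image into a short vector for template matching. The function names, grid size, and nearest-template rule are illustrative assumptions, not the project's method.

```python
import numpy as np

def zoning_features(image, grid=(4, 4)):
    """Compress a character image (2-D array, height and width at least
    the grid dimensions) into a vector of per-cell mean densities."""
    h, w = image.shape
    gh, gw = grid
    feats = np.empty(gh * gw)
    for r in range(gh):
        for c in range(gw):
            cell = image[r * h // gh:(r + 1) * h // gh,
                         c * w // gw:(c + 1) * w // gw]
            feats[r * gw + c] = cell.mean()
    return feats

def classify(image, templates):
    """Nearest-template matching in feature space: return the label
    whose stored feature vector is closest in Euclidean distance."""
    f = zoning_features(image)
    return min(templates, key=lambda label: np.linalg.norm(f - templates[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_a = (rng.random((16, 16)) > 0.5).astype(float)
    img_b = (rng.random((16, 16)) > 0.5).astype(float)
    templates = {"A": zoning_features(img_a), "B": zoning_features(img_b)}
    print(classify(img_a, templates))    # "A": an exact feature match wins
```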