Co-Investigator (Kenkyū-buntansha) |
MISHIMA Taketoshi, Saitama University, Faculty of Engineering, Professor (30245310)
MIZOGUCHI Hiroshi, Tokyo Univ. of Science, Faculty of Science and Technology, Department of Mechanical Engineering, Professor (00262113)
|
Budget Amount |
¥2,800,000 (Direct Cost: ¥2,800,000)
Fiscal Year 2004: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2003: ¥900,000 (Direct Cost: ¥900,000)
Fiscal Year 2002: ¥900,000 (Direct Cost: ¥900,000)
|
Research Abstract |
Dimensionality Reduction: The present study considers dimensionality reduction methods as a preprocessing step for pattern recognition. Although PCA (principal component analysis) is the most popular conventional method of dimensionality reduction, it has a drawback for our purpose: it does not use the class information attached to training samples. Another popular method, LDA (linear discriminant analysis), does use class information; however, it restricts the number of reduced dimensions according to the number of classes, and this restriction can cause excessive reduction. To overcome these problems, we have considered dimensionality reduction methods that use the difference between classes in the reduced data as a criterion for the goodness of a reduction. First, we examined a method that measures the difference between class distributions by Kullback-Leibler information. This method uses class information and places no restriction on the number of reduced dimensions. Through experiments on fundamental tasks, we observed that this method lowers the misclassification rate compared with PCA and LDA on linearly non-separable tasks, among others. Two problems with this method are that (1) it relies on fitting multidimensional normal distributions, and (2) it requires iterative calculation during optimization. Regarding (2), we proposed a method in which a linear transformation that whitens only one class is applied to the whole data set, after which PCA is performed. This requires no iterative calculation, yet it behaves similarly to the previous method. Regarding (1), we improved our method with a virtual potential between different classes. This last method achieves a lower misclassification rate than the previous methods when reducing to particularly low dimensions. Information Representation: For multi-label learning tasks, we considered two ways of representing information, a neural network method and a conditional distribution method, and investigated their behaviors.
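The Kullback-Leibler criterion described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it uses the closed-form KL divergence between two fitted Gaussians to score how well a linear projection separates two classes in the reduced space. The function names (`gaussian_kl`, `projection_score`) are ours, introduced only for the sketch.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate
    normal distributions with means mu0, mu1 and covariances cov0, cov1."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0)        # trace term
                  + diff @ cov1_inv @ diff          # Mahalanobis term
                  - d                               # dimension offset
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def projection_score(A, X0, X1):
    """Score a linear projection A (k x d) by the KL divergence between
    Gaussians fitted to the two projected classes; larger means more
    separated, so A can be chosen (iteratively) to maximize this score."""
    Z0, Z1 = X0 @ A.T, X1 @ A.T
    return gaussian_kl(Z0.mean(axis=0), np.cov(Z0, rowvar=False),
                       Z1.mean(axis=0), np.cov(Z1, rowvar=False))
```

Note that, unlike LDA, the target dimension k here is a free choice, which is the property the abstract emphasizes; the price is the Gaussian-fitting assumption and the iterative optimization of A, the two problems the abstract goes on to address.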
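The non-iterative variant (whiten one class, then apply PCA to the whole data set) can likewise be sketched. This is our reading of the two-step recipe in the abstract, under the assumption that "whitening" means the usual inverse-square-root covariance transform; the function name and details are ours.

```python
import numpy as np

def whiten_then_pca(X, y, target_class, n_components):
    """Non-iterative reduction sketch: whiten one class, then run PCA.

    1. Fit the mean and covariance of the chosen class and build its
       whitening transform W = cov^(-1/2).
    2. Apply W to the whole data set, so the chosen class has roughly
       identity covariance.
    3. Run ordinary PCA; high-variance directions in the whitened data
       are now dominated by between-class differences.
    """
    Xc = X[y == target_class]
    mu = Xc.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    # Whitening matrix via eigendecomposition of the class covariance
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    Z = (X - mu) @ W.T
    # Plain PCA on the whitened data via SVD of the centered matrix
    Zc = Z - Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:n_components].T
```

Because both steps are closed-form linear algebra, no iterative optimization is needed, which matches the motivation stated in the abstract for proposing this variant.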
|