Knowledge Integration from Uncertain Information
Project/Area Number | 17300043 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | Tokyo Institute of Technology |
Principal Investigator | SATO Taisuke, Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Professor (90272690) |
Co-Investigator (Kenkyū-buntansha) | KAMEYA Yoshitaka, Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Assistant Professor (60361789) |
Project Period (FY) | 2005 – 2007 |
Project Status | Completed (Fiscal Year 2007) |
Budget Amount | ¥11,020,000 (Direct Cost: ¥10,000,000, Indirect Cost: ¥1,020,000) |
Fiscal Year 2007: ¥4,420,000 (Direct Cost: ¥3,400,000, Indirect Cost: ¥1,020,000)
Fiscal Year 2006: ¥3,400,000 (Direct Cost: ¥3,400,000)
Fiscal Year 2005: ¥3,200,000 (Direct Cost: ¥3,200,000)
Keywords | Probabilistic modeling language / PRISM / Variational Bayes / Machine learning / Information fundamentals / Statistical mathematics / Information systems / Artificial intelligence |
Research Abstract |
We have developed PRISM, a symbolic-statistical modeling language designed for complex phenomena governed by logic and probability. A PRISM program defines a distribution over structured data such as strings, trees and graphs using definite clauses with probabilistic built-ins, and the parameters it contains are statistically estimated from data by the EM algorithm. PRISM 1.11, the latest version released in 2007, has the following features. (1) Efficient memory management, owing to B-Prolog 7.0, on which PRISM is built. (2) Parallel EM learning; empirically, we have observed linear acceleration of learning speed with respect to the number of CPUs. (3) Deterministic annealing for EM learning. (4) Generalization of the Viterbi algorithm to an N-Viterbi algorithm that returns the N topmost answers. (5) Variational Bayes (VB) learning, applicable to models that go beyond well-known models such as hidden Markov models. VB is particularly important because it enables us to place priors over parameter distributions to combat the data sparseness problem. We allow Dirichlet priors, and VB estimates their hyperparameters from data. In addition, VB provides an approximation of the marginal log-likelihood, which can be applied to model selection.
|
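To illustrate the modeling style described in the abstract, the following is a minimal sketch of a two-state hidden Markov model over three-symbol strings in PRISM syntax, using the standard `values/2` outcome declarations and the `msw/2` probabilistic built-in; the predicate names (`hmm/1`, `hmm/3`) and the state/symbol names are illustrative only:

```prolog
% Outcome spaces of the probabilistic switches.
values(init, [s0, s1]).        % initial-state distribution
values(out(_), [a, b]).        % per-state emission distribution
values(tr(_), [s0, s1]).       % per-state transition distribution

% hmm(L): L is a string of length 3 generated by the HMM.
hmm(L) :- msw(init, S), hmm(1, S, L).

hmm(T, _, []) :- T > 3.
hmm(T, S, [C|Cs]) :-
    T =< 3,
    msw(out(S), C),            % probabilistically emit a symbol
    msw(tr(S), Next),          % probabilistically choose the next state
    T1 is T + 1,
    hmm(T1, Next, Cs).
```

Given observed strings, the switch parameters of such a program can then be estimated with PRISM's built-in learning command, e.g. `learn([hmm([a,b,a]), hmm([b,b,a])])`; exact goal syntax may vary across PRISM versions.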
Report (4 results)
Research Products (43 results)