Establishing Equivalent Transformation from Syntactic Parsing Models to Hierarchical Probabilistic Automaton
Project/Area Number | 24800004
Research Category | Grant-in-Aid for Research Activity Start-up
Allocation Type | Single-year Grants |
Research Field | Intelligent informatics
Research Institution | University of Tsukuba |
Principal Investigator | WAKABAYASHI Kei, University of Tsukuba, Faculty of Library, Information and Media Science, Assistant Professor (40631908)
Project Period (FY) | 2012-08-31 – 2014-03-31
Project Status | Completed (Fiscal Year 2013)
Budget Amount | ¥2,990,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥690,000)
Fiscal Year 2013: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2012: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords | unsupervised syntactic parsing / hierarchical probabilistic automaton / hierarchical hidden Markov model / probabilistic context-free grammar / dependency parsing / chunking / phrase extraction / syntactic parsing / probabilistic model inference
Research Abstract | Syntactic parsing is a data analysis technique for estimating the structure of sequence data. Existing syntactic parsing models suffer from computation time that grows rapidly with sequence length, so they can hardly be applied to big data analysis. In this project, we established an equivalent transformation from existing parsing models to a hierarchical probabilistic automaton that can parse in linear time with respect to the sequence length. We demonstrated that the proposed transformation enables much faster approximate parsing of long sequences, and built fast and effective sequence data analysis applications for noun phrase extraction and topical phrase extraction.
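To make the linear-time claim above concrete, the following is a minimal sketch, assuming a hierarchical probabilistic automaton whose depth is bounded so that it can be flattened into an ordinary HMM over product states; the forward algorithm on the flattened model then costs O(T·S^2), i.e. linear in the sequence length T. The flattening, parameter names, and toy data below are illustrative assumptions, not the project's published transformation.

import numpy as np

# Minimal sketch (illustrative assumption, not the project's algorithm):
# a depth-bounded hierarchical probabilistic automaton flattened into an
# ordinary HMM over product states, scored with the forward algorithm.
def forward_loglik(log_pi, log_A, log_B, obs):
    # log_pi: (S,)   log initial probabilities over flattened states
    # log_A:  (S, S) log transition probabilities between flattened states
    # log_B:  (S, V) log emission probabilities
    # obs:    sequence of observation indices, length T
    alpha = log_pi + log_B[:, obs[0]]
    for t in range(1, len(obs)):          # one pass over the sequence: O(T)
        m = alpha.max()                   # log-sum-exp for numerical stability
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_A)) + log_B[:, obs[t]]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Toy usage: 2 phrase-level states x 3 word-level states = 6 flattened states.
rng = np.random.default_rng(0)
S, V, T = 6, 10, 50
pi = rng.dirichlet(np.ones(S))
A = rng.dirichlet(np.ones(S), size=S)
B = rng.dirichlet(np.ones(V), size=S)
obs = rng.integers(V, size=T).tolist()
print(forward_loglik(np.log(pi), np.log(A), np.log(B), obs))

The contrast with chart-based PCFG parsing, which is cubic in T for CKY-style algorithms, is the point of the sketch: each position is visited once, so doubling the sequence length roughly doubles the cost.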
Report (3 results)
Research Products (8 results)