
2014 Fiscal Year Annual Research Status Report

Improving High-Performance Neural Network-based Sequence Classifiers

Research Project

Project/Area Number: 26730093
Research Institution: Kyushu University

Principal Investigator

FRINKEN Volkmar, Kyushu University, Graduate School of Information Science and Electrical Engineering, Research Fellow (70724417)

Project Period (FY): 2014-04-01 – 2016-03-31
Keywords: Machine Learning / Sequence Classification / Deep Learning / Pattern Recognition / LSTM Neural Networks
Outline of Annual Research Achievements

The work on high-performance neural network-based sequence classifiers resulted in valuable insights and one publication. The biggest problems in analyzing sequences are long-term dependencies and contextual information. A study of many different network architectures revealed the underlying mechanisms of successful sequence processing. When using deep recurrent neural networks, the classification of a sequence element depends on two processing dimensions: the layers of the neural network and the previous sequence elements. This adds another dimension to the training, and new algorithms are needed. Currently, efficient training algorithms can deal either with sequence dependencies or with multiple layers, but not with both at the same time.
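The two processing dimensions mentioned above can be made concrete with a minimal NumPy sketch (not from the report; all names and weight shapes are illustrative) of a forward pass through a stacked recurrent network, where each hidden state depends both on the layer below and on the previous time step:

```python
import numpy as np

def stacked_rnn_forward(x, weights):
    """Forward pass of a deep (stacked) vanilla recurrent network.

    x       : array of shape (T, D) -- one input vector per time step.
    weights : list of (W_in, W_rec) pairs, one pair per layer.

    Each hidden state h[l][t] depends on two dimensions:
      - depth: the output of layer l-1 at the same time step t, and
      - time:  the state of layer l at the previous step t-1.
    """
    T = x.shape[0]
    states = []
    inputs = x
    for W_in, W_rec in weights:
        H = W_rec.shape[0]
        h = np.zeros((T, H))
        prev = np.zeros(H)
        for t in range(T):
            # Combine the lower layer's output (depth dimension)
            # with this layer's previous state (time dimension).
            prev = np.tanh(inputs[t] @ W_in + prev @ W_rec)
            h[t] = prev
        states.append(h)
        inputs = h  # feed this layer's states upward as the next layer's input
    return states
```

Training such a network means propagating error signals along both dimensions at once (across layers and back through time), which is the combined problem the report identifies as unsolved by existing algorithms that handle only one dimension efficiently.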

Current Status of Research Progress

3: Progress in research has been slightly delayed.

Reason

Gaining insight into the behavior of the recurrent neural network sequence classifier turned out to be extremely challenging. Initial attempts to model the internal states of a neural network with a hidden Markov model, as mentioned in the "Purpose of the Research" section of the initial submission, were unsuccessful. To still reach the goal of improving the training and performance of these recognizers, a different approach was chosen. A detailed study of different deep architectures nevertheless revealed interesting insights that can be used in the future.

Strategy for Future Research Activity

Currently, the internal behavior and processes of large neural network-based recognition systems for sequence processing are still not fully understood, and more research has to be done. In addition, a need for new training algorithms became obvious during my research. Deep neural networks for sequences have to deal with sequence dependencies as well as with multiple layers; these are two related, yet different, dimensions of dependency that need to be taken into account. Existing training algorithms can currently handle either very deep networks for static data or sequential data with shallow networks. A combined approach, however, is a very promising direction for future research.

Causes of Carryover

Travel expenses cannot always be estimated with great certainty. In the end, the flight was cheaper than anticipated, so slightly less money was needed than expected. Since there was no use for the remaining amount in the last fiscal year, it was transferred to the next year.

Expenditure Plan for Carryover Budget

There is no specific plan for the remaining ¥10,186. It will be kept as a buffer in case some items turn out to be more expensive.

  • Research Products (1 result)

    Presentations (1 result)

  • [Presentation] Deep BLSTM Neural Networks for Unconstrained Continuous Handwritten Text Recognition (2015)

    • Author(s)/Presenter(s)
      Volkmar Frinken
    • Conference
      Int'l Conf. on Document Analysis and Recognition (ICDAR)
    • Place of Presentation
      Gammarth, Tunisia
    • Date
      2015-08-23 – 2015-08-26


Published: 2016-06-01
