Multi-modal person authentication based on writing behavior
Project/Area Number | 23760345
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Communication/Network engineering
Research Institution | Osaka University
Principal Investigator | MURAMATSU Daigo, Osaka University, The Institute of Scientific and Industrial Research, Specially Appointed Lecturer (00386624)
Research Collaborators |
HASHIMOTO Yuki, Seikei University, Faculty of Science and Technology, Student
SEKIGUCHI Satoru, Waseda University, School of Advanced Science and Engineering, Student
Project Period (FY) | 2011 – 2013
Project Status | Completed (Fiscal Year 2013)
Budget Amount | ¥4,420,000 (Direct Cost: ¥3,400,000, Indirect Cost: ¥1,020,000)
Fiscal Year 2013: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2012: ¥1,430,000 (Direct Cost: ¥1,100,000, Indirect Cost: ¥330,000)
Fiscal Year 2011: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Keywords | Biometrics / Multimodal / Writing-behavior authentication / Online signature verification / Pen-holding style authentication / Score-level fusion / Machine learning / Recognition of images, text, speech, etc. / Cryptography and authentication
Research Abstract |
A multi-modal person authentication algorithm based on signature writing behavior was developed in this study. The developed algorithm uses not only the pen-tip trajectory (online signature) but also dynamic pen-holding style information: how the pen is held and how the hand moves it. This information can be extracted from image sequences of signature writing captured by a single camera, so recognition accuracy can be improved without adding data-capturing sensors. We analyzed the image sequences and extracted features useful for person authentication; combining these extracted features improves authentication accuracy.
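The keywords indicate that the two modalities (online signature and pen-holding style) are combined by score-level fusion. As a rough illustration only, not the authors' actual algorithm, a weighted-sum fusion of normalized matcher scores could look like the sketch below; the normalization ranges, weights, and decision threshold are hypothetical placeholders.

```python
import numpy as np

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] with min-max normalization.
    lo and hi are assumed calibration bounds for that matcher."""
    return float(np.clip((score - lo) / (hi - lo), 0.0, 1.0))

def fuse_scores(signature_score, pen_hold_score, w_sig=0.6, w_hold=0.4):
    """Weighted-sum score-level fusion of two normalized modality scores.
    The weights here are illustrative, not values reported by the project."""
    return w_sig * signature_score + w_hold * pen_hold_score

def authenticate(raw_sig, raw_hold, threshold=0.5):
    """Accept the claimed identity if the fused score reaches the threshold.
    Normalization bounds (0-100, 0-10) and the threshold are assumptions."""
    sig = min_max_normalize(raw_sig, lo=0.0, hi=100.0)
    hold = min_max_normalize(raw_hold, lo=0.0, hi=10.0)
    return fuse_scores(sig, hold) >= threshold

# Example: fuse one signature-matcher score and one pen-holding-style score.
print(authenticate(raw_sig=72.0, raw_hold=6.5))
```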