Project/Area Number | 12680215 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Educational technology |
Research Institution | Kyushu University |
Principal Investigator |
ARITA Daisaku (2001-2002) Kyushu University, Graduate School of Information Science and Electrical Engineering, Department of Information Systems, Research Associate (70304756)
SUGANUMA Akira (2000) Kyushu University, Graduate School of Information Science and Electrical Engineering, Associate Professor (70235852)
|
Co-Investigator (Kenkyū-buntansha) |
SUGANUMA Akira Graduate School of Information Science and Electrical Engineering, Department of Information Systems, Associate Professor (70235852)
TANIGUCHI Rin-ichiro Graduate School of Information Science and Electrical Engineering, Department of Information Systems, Professor (20136550)
ARITA Daisaku Kyushu University, Graduate School of Information Science and Electrical Engineering, Research Associate (70304756)
|
Project Period (FY) | 2000 – 2002 |
Project Status | Completed (Fiscal Year 2002) |
Budget Amount |
¥3,400,000 (Direct Cost: ¥3,400,000)
Fiscal Year 2002: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 2001: ¥900,000 (Direct Cost: ¥900,000)
Fiscal Year 2000: ¥1,700,000 (Direct Cost: ¥1,700,000)
|
Keywords | Image processing / Supporting system / Distant lecture / Automatic recording / Estimation of lecture / Active camera |
Research Abstract |
We have developed a method for recording the most recently written object on a blackboard. The method analyzes a lecture scene captured by a fixed camera and extracts the latest written object by combining frame differencing between two successive frames with background subtraction (a rough sketch of this step is given below). In a distant lecture realized by transmitting the video and audio of a live lecture, it is important for our system (ACE) to extract and focus on the object being explained so that the lecture scene is recorded appropriately. Detecting the explained object directly is difficult, so we assumed that the teacher explains an object as soon as he writes it; ACE therefore zooms in on the latest object written on the blackboard as the explained object. We developed a prototype of ACE and applied it to a real 25-minute mathematics lecture for 85 undergraduates, recording the lecture scene both with ACE and with a fixed camera for comparison. The experiment confirmed that the scene captured by ACE is clear enough for students to read the contents on the blackboard, and the questionnaire results showed that ACE is superior to the fixed camera at the 1% level of significance. ACE was rated lower on the question "Could you watch what you wanted to see?", because students do not always want to look at the explained object immediately and ACE does not always extract the latest object accurately. We have therefore designed another method that enables students to refer to the objects they want to see.
|
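The extraction step described in the abstract, combining frame differencing between two successive frames with background subtraction to locate the most recently written region, could be sketched roughly as follows. This is a minimal illustration assuming OpenCV 4.x in Python; the function name, thresholds, and the choice of the largest connected region are illustrative assumptions, not the authors' actual ACE implementation.

import cv2

DIFF_THRESHOLD = 25      # assumed intensity threshold (illustrative)
MIN_REGION_AREA = 200    # assumed minimum pixel area to count as new writing

def latest_written_region(prev_frame, curr_frame, background):
    """Return the bounding box (x, y, w, h) of the most recently written region, or None."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

    # Frame differencing: pixels that changed between two successive frames.
    frame_diff = cv2.absdiff(curr_gray, prev_gray)
    # Background subtraction: pixels that differ from the empty-blackboard image.
    bg_diff = cv2.absdiff(curr_gray, bg_gray)

    _, frame_mask = cv2.threshold(frame_diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    _, bg_mask = cv2.threshold(bg_diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)

    # Keep pixels flagged by both cues: they correspond to newly written marks.
    new_writing = cv2.bitwise_and(frame_mask, bg_mask)

    # Treat the largest connected region as the latest written object.
    contours, _ = cv2.findContours(new_writing, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= MIN_REGION_AREA]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # region an active camera could zoom in on

In practice the teacher's movement would also produce frame-to-frame differences, so a real system would additionally need to suppress those regions before deciding where the active camera zooms; the abstract itself notes that ACE does not always extract the latest object accurately.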