Budget Amount
¥2,300,000 (Direct Cost: ¥2,300,000)
Fiscal Year 1996: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 1995: ¥1,500,000 (Direct Cost: ¥1,500,000)
Research Abstract
1. Recognition of Facial Expressions Based upon a Potential Field. The field of induction on the retina, called a potential field, whose mechanism has been studied as a model of human vision, plays an important role in perceiving human faces. This research explores a new idea for recognizing human facial expressions from the overall pattern of the face, represented in a potential field activated by edges in a single input image, rather than from changes in the shapes of the facial organs or their geometrical relationships. A two-dimensional grid called the Potential Net, whose nodes are moved by forces from the image edges and by springs connecting them to their four neighbors, is used as a model of the potential field. A net state vector representing the displacements of the nodes thus also represents the overall input pattern. Each facial expression is modeled as a net whose state is the average of the net states obtained from a variety of subjects showing the same expression. Since the dimension of the net state vector is too high, it is mapped into a lower-dimensional space, called the Emotion Space, determined by the K-L expansion. The facial expression in an image is then estimated from its mapping into the Emotion Space. The performance of the method under various image conditions is studied by experiments.

2. Locating Human Faces in Images. Before applying the above method, one needs to locate the face, which often appears against a complex background, in the image. Since the net state carries faceness information, the average state over a variety of subjects is encoded into a mosaic as a generic face appearance model. The mosaic is then scanned over the net state of the input image as a variable-size template to be matched. Areas in the input net with high matching scores are selected as candidates for the face area. By projecting the candidates into the Emotion Space and back-projecting them from it, a finalist is selected. Further analysis around the finalist using edge projections determines the precise location of the face. Experiments with the integrated system for locating faces and recognizing their expressions indicate that the proposed method works reliably on real imagery.
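The Potential Net described above (nodes pulled by image-edge forces, restrained by springs to four neighbors, with node displacements forming the net state vector) can be sketched as a simple iterative relaxation. This is a minimal illustration, not the original system: the grid size, force constants, iteration count, and the use of the edge-map gradient as the attracting force are all assumptions made for the example.

```python
import numpy as np

def relax_potential_net(edge_map, rows=16, cols=16, k_spring=0.2,
                        k_edge=0.5, iterations=50):
    """Relax a spring-connected grid (a "Potential Net") over an edge image.

    Each node is pulled toward nearby image edges and restrained by springs
    to its four neighbors; the final node displacements, flattened, form the
    net state vector. All parameter names and values are illustrative.
    """
    h, w = edge_map.shape
    # Rest positions: a regular grid spanning the image, (rows, cols, 2).
    ys = np.linspace(0, h - 1, rows)
    xs = np.linspace(0, w - 1, cols)
    rest = np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1)
    pos = rest.copy()

    # Edge force (an assumption): the gradient of the edge map pulls nodes
    # toward strong edges.
    gy, gx = np.gradient(edge_map.astype(float))

    for _ in range(iterations):
        disp = pos - rest
        # Spring (Laplacian) force: each node is pulled toward the mean
        # displacement of its four neighbors, keeping the net smooth.
        spring = np.zeros_like(disp)
        spring[1:, :] += disp[:-1, :] - disp[1:, :]
        spring[:-1, :] += disp[1:, :] - disp[:-1, :]
        spring[:, 1:] += disp[:, :-1] - disp[:, 1:]
        spring[:, :-1] += disp[:, 1:] - disp[:, :-1]

        # Sample the edge gradient at the (rounded) current node positions.
        iy = np.clip(pos[..., 0].round().astype(int), 0, h - 1)
        ix = np.clip(pos[..., 1].round().astype(int), 0, w - 1)
        edge_force = np.stack([gy[iy, ix], gx[iy, ix]], axis=-1)

        pos += k_spring * spring + k_edge * edge_force

    # Net state vector: node displacements from rest, flattened.
    return (pos - rest).ravel()
```

With a blank edge image the net stays at rest (zero state vector); edges in the image deform the net, and the deformation pattern is what the recognition stage consumes.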
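The K-L expansion used to build the Emotion Space is what is now usually called PCA: net state vectors from many subjects are centered and decomposed, and each expression class is represented by its mean in the low-dimensional space. A minimal sketch, with function names, the nearest-class-mean classifier, and the number of components all chosen for illustration:

```python
import numpy as np

def build_emotion_space(states, n_components=3):
    """Karhunen-Loeve (PCA) basis for net state vectors.

    `states` is an (n_samples, d) array; returns the sample mean and the
    top principal directions as a (n_components, d) basis.
    """
    mean = states.mean(axis=0)
    # SVD of the centered data yields the K-L basis directly.
    _, _, vt = np.linalg.svd(states - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(state, mean, basis):
    """Map a net state vector into the low-dimensional Emotion Space."""
    return basis @ (state - mean)

def classify_expression(state, mean, basis, class_means):
    """Nearest class mean in Emotion Space (an illustrative classifier;
    `class_means` maps expression label -> projected class-average state)."""
    z = project(state, mean, basis)
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))
```

An input image's net state is projected into the Emotion Space and labeled with the expression whose class-average point lies nearest.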
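The projection/backprojection step used to pick the face-area finalist can be read as a reconstruction-error test: a candidate whose net state lies close to the Emotion (face) subspace reconstructs well after projecting in and back out, while background clutter does not. A hedged sketch of that idea, assuming the mean and basis come from a PCA-style decomposition as above:

```python
import numpy as np

def backprojection_error(state, mean, basis):
    """Reconstruction error after mapping a net state into the
    low-dimensional space and back. A low error suggests a face-like
    candidate; the formulation is illustrative, not the original one.
    `basis` is (k, d) with orthonormal rows.
    """
    z = basis @ (state - mean)          # project into the subspace
    recon = mean + basis.T @ z          # backproject to full dimension
    return float(np.linalg.norm(state - recon))
```

Among the high-scoring candidate areas, the one with the smallest backprojection error would be kept as the finalist before the final edge-projection refinement.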