2019 Fiscal Year Final Research Report
Modeling the Perceptual Underpinnings for Quality Assessment of Restored Textures
Project/Area Number | 17K00232
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General
Research Field | Perceptual information processing
Research Institution | Shizuoka University
Principal Investigator |
Co-Investigator (Kenkyū-buntansha) | Ohashi Gosuke, Shizuoka University, Faculty of Engineering, Professor (80293603)
Project Period (FY) | 2017-04-01 – 2020-03-31
Keywords | quality assessment / image restoration / image enhancement / visual detection
Outline of Final Research Achievements |
Images and video can suffer a loss of visual quality due to processing, transmission, and archiving. In this project, we aimed to research and develop computer algorithms for judging and restoring the lost visual details in such images. We found that textures can be synthesized from the statistics of the original images and then added back to the degraded images to perform the restoration. However, the textures must be properly contrast-adjusted to have a positive effect on quality. Through a series of visual experiments, we found that the optimal contrast-adjustment factors are related to the visibility of each texture and to how well the texture matches the image. We further found that textures from different images of the same image category can serve as suitable source statistics for texture synthesis. In addition, based in part on these findings, we developed two computer algorithms for assessing the quality of distorted images.
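The report describes the restoration idea only at a high level. The following Python sketch illustrates one plausible reading of it, not the project's published algorithm: white noise is shaped to match the power spectrum of a reference image (standing in for the "statistics of the original images"; per the findings above, a different image from the same category could serve as the reference), scaled by a contrast-adjustment factor, and added to the degraded image. The function names and the gain parameter c are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the project's published method.
# Assumes grayscale images as float arrays in [0, 1].
import numpy as np

def matched_texture(reference, rng=None):
    """Synthesize a random texture matched to the reference's power spectrum."""
    rng = np.random.default_rng(rng)
    # Magnitude spectrum of the zero-mean reference carries its texture statistics.
    magnitude = np.abs(np.fft.fft2(reference - reference.mean()))
    # Randomize the phase so the synthesized texture is new but spectrally matched.
    phase = np.angle(np.fft.fft2(rng.standard_normal(reference.shape)))
    texture = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
    return texture / (texture.std() + 1e-12)  # normalize to unit contrast

def restore(degraded, reference, c=0.05):
    """Add a contrast-scaled matched texture to the degraded image.

    c is the contrast-adjustment factor; per the findings above, its optimum
    depends on texture visibility and on how well the texture fits the image.
    """
    texture = matched_texture(reference)
    return np.clip(degraded + c * texture, 0.0, 1.0)

# Example on random data (real use: a degraded photo plus a same-category reference).
deg = np.random.default_rng(0).uniform(size=(128, 128))
out = restore(deg, deg, c=0.05)
```

In this reading, the contrast factor c is the quantity the visual experiments would tune: too low and the added texture is invisible, too high and it reads as noise rather than restored detail.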
Free Research Field | Perceptual image processing
Academic Significance and Societal Importance of the Research Achievements |
Image restoration and enhancement have largely focused on removing artifacts and/or enhancing sharpness, contrast, or colorfulness. We took a radically new approach by adding more noise: we demonstrated that adding shaped noise (matched random textures) can increase sharpness while hiding artifacts.