2014 Fiscal Year Annual Research Report
Research on Image Processing Algorithms Based on Sparse Representation and Dictionary Learning
Project/Area Number | 14J10950 |
Research Institution | The University of Aizu |
Principal Investigator | LI Zhenni, The University of Aizu, Computer and Information Systems, JSPS Research Fellow (DC2) |
Project Period (FY) | 2014-04-25 – 2016-03-31 |
Keywords | Sparse representation / Dictionary learning |
Outline of Annual Research Achievements |
In the past year, we proposed a novel and efficient algorithm for dictionary learning. The dictionary learning problem was cast as the minimization of an approximation-error function with a coherence penalty on the dictionary atoms and a sparsity regularizer on the coefficient matrix. To solve the problem efficiently, we turned it into a set of piecewise-quadratic univariate subproblems, each involving a single vector variable, either one dictionary atom or one coefficient vector. Although the subproblems were still nonsmooth, their optimal solutions could be found in closed form using proximal operators. This led to the so-called fastPDL algorithm. The algorithm updates the dictionary and the coefficient matrix in the same manner, forcing the coefficients to be as sparse as possible while simultaneously forcing the dictionary atoms to be as incoherent as possible. We verified that fastPDL is efficient in both computational complexity and convergence rate. In the experiments, we used a camera and a smart tablet for data collection and server computers for programming. The numerical experiments showed that the proposed algorithm is more efficient than state-of-the-art algorithms. In addition, we described two applications of the proposed algorithm, one for nonnegative signals and the other for general signals, and showed that our algorithm has a significant advantage in computation time for both.
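The alternating closed-form updates described above can be illustrated by a minimal block-coordinate sketch. This is not the published fastPDL implementation: it assumes a plain l1 penalty with unit-norm atoms and omits the coherence penalty and log-regularizer; all function names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_dl_sketch(Y, n_atoms, lam=0.2, n_iter=10, seed=0):
    """Minimal block-coordinate sketch: alternately update one coefficient
    row (closed-form soft-thresholding) and one dictionary atom
    (least-squares direction, renormalized to unit norm)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
    X = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        for k in range(n_atoms):
            # Residual with atom k taken out of the approximation.
            R = Y - D @ X + np.outer(D[:, k], X[k])
            # Coefficient update: exact minimizer of the nonsmooth
            # univariate subproblem, via the proximal operator.
            X[k] = soft_threshold(D[:, k] @ R, lam)
            # Atom update: best unit-norm direction for the residual.
            d = R @ X[k]
            nrm = np.linalg.norm(d)
            if nrm > 1e-12:
                D[:, k] = d / nrm
    return D, X
```

Because every block update is an exact closed-form minimization, the objective is nonincreasing, which is the property that makes this family of algorithms fast in practice.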
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The main contribution is the development of dictionary learning algorithms for sparse representation of signals. In the past year, we have written two journal papers and two conference papers, as follows:
[1] Zhenni Li, Shuxue Ding and Yujie Li, A Fast Algorithm for Learning Overcomplete Dictionary for Sparse Representation Based on Proximal Operators, Neural Computation, accepted, May 2015.
[2] Zhenni Li, Shuxue Ding, Yujie Li and Wuhui Chen, Incoherent Dictionary Learning with Log-Regularized Constraint Based on Proximal Operators, IEEE Transactions on Signal Processing, under review.
[3] Zhenni Li, Shuxue Ding, Yujie Li, Dictionary Learning with Log-Regularizer for Sparse Representation, 2015 IEEE International Conference on Digital Signal Processing (DSP2015), accepted.
[4] Zhenni Li, Shuxue Ding, Yujie Li, Zunyi Tang, Wuhui Chen, Improving Dictionary Learning Using the Itakura-Saito Divergence, IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP 2014), pp. 733-737, July 9-13, 2014.
|
Strategy for Future Research Activity |
1. We have already presented incoherent dictionary learning algorithms. As is known, coherence is related to sparsity. We will analyze the coherence in order to learn the most efficient dictionary.
2. We have applied the fastPDL algorithm to practical applications in denoising of image and audio data. Improving the denoising application is still future work, as are further applications such as classification, deblurring, and inpainting.
3. The mixed-norm regularizer is designed to promote sparsity among groups of features or sensors and to avoid overfitting. Developing the mixed regularizer into the form of the det-norm and applying it to the dictionary learning problem are also our future work. The determinant-based sparseness measure can easily be handled by a typical QP method.
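The mixed-norm (e.g. l2,1) regularizer mentioned in item 3 has a simple closed-form proximal operator, group soft-thresholding, which is standard in the sparse-coding literature rather than specific to this project; the row-wise grouping below is an illustrative assumption.

```python
import numpy as np

def prox_l21(X, t):
    """Proximal operator of t * sum_k ||row_k(X)||_2 (group soft-thresholding).
    Each row is shrunk toward zero as a single unit, so whole groups of
    coefficients switch off together, which promotes group sparsity and
    helps avoid overfitting."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    # Rows with norm <= t are zeroed; larger rows are scaled down uniformly.
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

For example, a row with norm 5 is scaled by 1 - 1/5 = 0.8 under threshold t = 1, while a row with norm below 1 is set entirely to zero.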
|