Summary of Research Achievements |
The purpose of this research is to obtain proper feature spaces for rejection alongside classification. Rejection is an option to refuse to assign a label to an unknown sample whose features are so ambiguous that they could point to any class. In direct contrast to ambiguity stands "absoluteness": knowing what is "absolutely correct" is another route to correct rejection. This year, I concentrated on absoluteness and designed a novel method that employs a ranking algorithm to obtain the absolute samples together with their features. Specifically, the proposed method consists of two parts: (1) a Siamese network that measures the difference between two inputs fed into the model simultaneously, and (2) a Top-Rank neural network model that ranks the concatenated features of the two inputs in positive-negative order. What is special in this proposal is that the ranking model is designed to push all "absolute positive signatures" ahead of every other signature, so that only the "absolute" positive samples are ranked at the front while the ambiguous ones are left behind. This not only hints at the proper rejection features I need but also increases the safety of signature verification models. This work has been accepted by the 15th IAPR International Workshop on Document Analysis Systems.
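As an illustration only, the two components above can be sketched in a few lines. This is a minimal sketch under my own assumptions (element-wise absolute difference as the Siamese comparison, and "absolute positive" defined as a positive scored above every negative); it is not the published implementation.

```python
import numpy as np

def pairwise_diff_features(a, b):
    # Siamese-style comparison of a pair of feature vectors.
    # Assumption: the comparison is an element-wise absolute difference.
    return np.abs(a - b)

def top_rank_positives(scores_pos, scores_neg):
    # Top-rank criterion: a positive is "absolute" only if its score
    # exceeds the highest-scoring negative, i.e. it is ranked in front
    # of ALL negatives. Ambiguous positives fail this test.
    top_neg = scores_neg.max()
    return scores_pos > top_neg

# Hypothetical ranking scores for two positives and two negatives.
pos = np.array([0.9, 0.4])
neg = np.array([0.5, 0.1])
flags = top_rank_positives(pos, neg)  # only the 0.9 positive is "absolute"
```

Samples that fail the criterion (like the 0.4 positive above) are exactly the ambiguous ones left behind, which is the candidate region for rejection.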
|
Current Status of Progress (Category) |
2: Progressing generally smoothly
Reason
My research is progressing well overall. Specifically, by employing the top-rank learning method, it became possible to obtain the "absolute" samples, both "absolute" positive and "absolute" negative, which are not the targets to be rejected (the rejection targets should be the ambiguous ones). From these results, I not only obtained better features for rejection (just the opposite of the "absolute" features) but also broadened my horizons to studies beyond classification problems. However, since this year's research used signature verification datasets, which must be handled in a pair-wise manner, it was difficult to directly employ the Learning with Rejection model from my initial plan. Even so, I found this year's research all the more valuable for making me learn and think more deeply about the subject itself, rejection in my case, instead of sticking to a specific model.
|
Future Research Plans |
Having pursued a method for obtaining proper rejection features in last year's research, I am now planning to return to my main subject, rejection. Instead of manipulating the structure of Learning with Rejection directly, I will turn my attention to studying and improving another model named SelectiveNet, a deep-learning-based model that performs classification and rejection within the same optimization process. Building on the findings of the previous research, I look forward to constructing a rejection function with a reasonably structured rejection feature space that takes the previously proposed method into consideration.
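To make the "same optimization process" concrete, the following is a minimal sketch of a SelectiveNet-style objective: the selective risk (loss averaged over accepted samples) plus a quadratic penalty that keeps the acceptance rate near a target coverage. The function signature and the penalty weight `lam` are illustrative assumptions, not SelectiveNet's official code.

```python
import numpy as np

def selective_loss(per_sample_loss, selection, target_coverage, lam=32.0):
    # per_sample_loss: classification loss for each sample.
    # selection: output of the selection (rejection) head in [0, 1];
    #            values near 0 mean the sample is rejected.
    # Selective risk: loss averaged only over the accepted mass.
    coverage = selection.mean()
    risk = (per_sample_loss * selection).sum() / (selection.sum() + 1e-8)
    # Coverage penalty: discourages rejecting more than (1 - target_coverage).
    penalty = lam * max(0.0, target_coverage - coverage) ** 2
    return risk + penalty

# Hypothetical example: two samples, both accepted, target coverage 1.0.
losses = np.array([1.0, 0.0])
accept_all = np.array([1.0, 1.0])
val = selective_loss(losses, accept_all, target_coverage=1.0)  # ~0.5, no penalty
```

Because both heads are trained under this single objective, the rejection function and the classifier share one feature space, which is where the "absolute vs. ambiguous" features from the previous work could be plugged in.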
|