Outline of Research Achievements
My research focuses on improving the reliability of machine learning models, particularly in scenarios where mistakes are unacceptable, such as medical image recognition and signature verification. Since no model can guarantee perfection, it is essential to improve reliability under the assumption that mistakes may occur. In this report, I discuss my research from two perspectives: the rejection operation and top-rank learning. The rejection operation removes samples that significantly degrade recognition performance, such as those with ambiguous confidence scores. Such samples may arise from under-training or may be intrinsically impossible to assign to a single class. In contrast to the rejection operation, top-rank learning aims to improve the model's reliability from an "absolute" perspective. This methodology is more suitable for applications that require high reliability rather than high overall performance, such as identifying patients who "absolutely" do not have cancer among patients with a slight chance of having it. I proposed a novel machine-learning framework to achieve this goal, and this work is under review. As my most recent research achievement, I applied the rejection operation and top-rank learning to a writer-independent signature verification task. The proposed framework improved the model's reliability, and this is the first application of these two machine-learning frameworks to highly reliable signature verification. The results were quantitatively and qualitatively robust.
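To make the rejection operation concrete, the following is a minimal sketch of confidence-threshold rejection: samples whose top-class probability is ambiguous (below a threshold) are withheld rather than classified. The function name, the threshold value, and the use of the maximum class probability as the confidence measure are illustrative assumptions, not the specific framework described above.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.8):
    """Classify samples but reject those with ambiguous confidence.

    probs: (n_samples, n_classes) array of class probabilities.
    Returns (labels, accepted): rejected samples get label -1 and
    accepted[i] is False.
    """
    confidences = probs.max(axis=1)          # top-class probability per sample
    labels = probs.argmax(axis=1)
    accepted = confidences >= threshold      # keep only confident predictions
    labels = np.where(accepted, labels, -1)  # -1 marks a rejected sample
    return labels, accepted

# The middle sample's scores (0.55 vs. 0.45) are ambiguous, so it is rejected.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labels, accepted = predict_with_rejection(probs, threshold=0.8)
```

In practice, the threshold trades off coverage against error rate: raising it rejects more samples but makes the accepted predictions more reliable.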
Current Status of Research Progress (Category)
2: Research is progressing rather smoothly
Reason
The progress has stayed on schedule because I made a plan before starting this research, especially for constructing the models and preparing the dataset. The most time-consuming process was model training. Because the research aims to improve model reliability, ensuring the fairness of the performance evaluation is of the greatest importance; therefore, 10-fold cross-validation was conducted. In addition, since the dataset does not provide a predefined training/test split, an inner 5-fold cross-validation (i.e., nested cross-validation) was applied. Progress could be made faster with more GPUs.
Strategy for Future Research Activity
As future work, I will develop a top-rank learning method that is robust to outliers. This work aims to prevent outliers from affecting the ranking direction: since top-rank learning focuses on obtaining absolute positives, negative outliers can significantly distort the learned ranking. The goal is to enable the top-rank learning model to learn a proper ranking direction even in the presence of outliers. This work is underway, and its effectiveness has been confirmed to some extent. I plan to publish it as another journal paper this summer. Model training will be parallelized across more GPUs.
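The outlier sensitivity described above can be illustrated with a hinge-style surrogate in the spirit of top-rank objectives, where each positive is pushed above the highest-scoring negative. Because the loss depends on the maximum over negatives, a single negative outlier dominates the objective. This is a schematic assumption-laden sketch, not the method under development.

```python
import numpy as np

def top_rank_hinge_loss(pos_scores, neg_scores, margin=1.0):
    """Hinge surrogate for ranking positives above ALL negatives.

    A positive incurs zero loss only if it exceeds the highest-scoring
    negative by `margin`. The max over negatives is exactly why one
    negative outlier can flip the ranking direction during training.
    """
    top_neg = np.max(neg_scores)
    return np.mean(np.maximum(0.0, margin - (pos_scores - top_neg)))

pos = np.array([3.0, 2.5, 2.0])
neg = np.array([0.5, 0.2, 0.1])
clean = top_rank_hinge_loss(pos, neg)                          # all positives clear the top negative
with_outlier = top_rank_hinge_loss(pos, np.append(neg, 4.0))   # one outlier penalizes every positive
```

A robust variant would temper this maximum, e.g. by down-weighting or trimming extreme negative scores, so that the ranking direction is driven by the bulk of the negatives rather than a single anomaly.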