2016 Fiscal Year Research-status Report
Evolutionary Approaches to Learning Self-awareness for a Decentralized System
Project/Area Number | 15K00343 |
Research Institution | The University of Aizu |
Principal Investigator | LIU Yong (劉 勇), The University of Aizu, School of Computer Science and Engineering, Senior Associate Professor (60325967) |
Project Period (FY) | 2015-04-01 – 2018-03-31 |
Keywords | neural networks / machine learning / awareness computing |
Outline of Annual Research Achievements |
Ever-growing volumes of data are waiting for powerful learning systems to analyze. As the amount of data keeps growing, a learning system should be able to grow itself through online learning. A decentralized learning system with a set of self-aware neural network subsystems has been developed; it solves complex tasks more effectively by learning to subdivide them.
This project has improved negative correlation learning algorithms for designing a decentralized learning system with enforced self-awareness. Self-awareness is the ability to recognize oneself as an individual distinct from the environment and from other individuals. Two negative correlation selection schemes have been introduced into negative correlation learning so that the individual neural networks can adapt their learning error functions throughout the learning process. The first selection scheme is based on over-negative-correlation learning, while the second works through difference learning.
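The base update that both selection schemes build on is the standard negative correlation learning gradient, (F_i - y) - λ(F_i - F̄), where F̄ is the ensemble mean. Below is a minimal sketch in Python/NumPy; the toy task, network sizes, λ value, and learning rate are illustrative assumptions of this sketch, not the project's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

M, H = 4, 10        # ensemble size, hidden units per network (assumed)
lam = 0.5           # negative correlation penalty strength (lambda)
lr = 0.1            # learning rate

# One-hidden-layer tanh networks, one per ensemble member.
nets = [{"W1": rng.normal(0, 0.5, (1, H)), "b1": np.zeros(H),
         "W2": rng.normal(0, 0.5, (H, 1)), "b2": np.zeros(1)}
        for _ in range(M)]

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return h, h @ net["W2"] + net["b2"]

for epoch in range(2000):
    outs = [forward(net, X) for net in nets]
    F_bar = np.mean([F for _, F in outs], axis=0)   # ensemble output
    for net, (h, F_i) in zip(nets, outs):
        # Negative correlation learning signal:
        #   dE_i/dF_i = (F_i - y) - lam * (F_i - F_bar)
        delta = ((F_i - y) - lam * (F_i - F_bar)) / len(X)
        # Backpropagate through the hidden layer before updating W2.
        grad_h = (delta @ net["W2"].T) * (1.0 - h ** 2)
        net["W2"] -= lr * h.T @ delta
        net["b2"] -= lr * delta.sum(axis=0)
        net["W1"] -= lr * X.T @ grad_h
        net["b1"] -= lr * grad_h.sum(axis=0)

F_bar = np.mean([forward(net, X)[1] for net in nets], axis=0)
mse = float(np.mean((F_bar - y) ** 2))
```

Setting λ = 0 recovers independent training of each network; larger λ pushes each network's output away from the ensemble mean, which is the lever the two selection schemes adjust.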
Experimental results have shown that such neural networks, being aware of their own behavior and performance, can manage trade-offs among different goals at run time and better meet their requirements for predictions on unknown data in applications.
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Two negative correlation selection schemes have been tested in negative correlation learning; they can either weaken or strengthen the learning signal of a data point for the different individual neural networks in a decentralized learning system.
When a data signal tends to make one neural network the same as the other neural networks in the learning system, the learning weight on that signal is scaled down. When a learning signal tends to push one neural network away from the other neural networks, the learning weight on that signal is reinforced. With such selective learning, the individual neural networks learn the given data more cooperatively and efficiently. In particular, when a data point has already been learned by too many neural networks in the same learning system, some neural networks should learn less and less from that point as learning proceeds; when a data point has been learned by too few neural networks, the other neural networks are encouraged to learn more from it.
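The coverage-based scaling described above could be sketched as follows. The function name, the fit threshold `eps`, and the weight bounds are hypothetical choices for illustration, since the report does not give the exact scheme.

```python
import numpy as np

def selective_weights(preds, y, eps=0.1, w_min=0.2, w_max=2.0):
    """Hypothetical per-point learning weights (names and thresholds assumed).

    preds: (M, N) outputs of the M networks on N data points; y: (N,) targets.
    A point already fit (|error| < eps) by many networks gets its learning
    weight scaled down; a point fit by few networks gets it scaled up, so
    the networks divide the data among themselves."""
    fit = np.abs(preds - y) < eps        # which networks already fit which points
    coverage = fit.mean(axis=0)          # fraction of networks fitting each point
    # Interpolate: full coverage -> w_min (weaken), no coverage -> w_max (strengthen).
    scale = w_max - (w_max - w_min) * coverage
    return np.broadcast_to(scale, preds.shape)
```

In training, each network's per-point gradient would be multiplied by its row of these weights before the update.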
The research results have been published in one journal paper and presented at seven international conferences.
|
Strategy for Future Research Activity |
In the previous implementation of negative correlation learning with negative correlation selections, the weights for strengthening or weakening the learning signals were kept fixed throughout the learning process. Ideally, both the weakening and the strengthening weights should be adaptable during learning. These weights depend not only on the application task but also on the architecture of the learning system. In the next implementation, each neural network will have its own weakening and strengthening weights, and during learning each network will continually adapt them in order to cooperate with the other neural networks in the same learning system.
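One way the planned per-network adaptation might look is sketched below; the adaptation rule, rate, and bounds are assumptions of this sketch, not taken from the project.

```python
import numpy as np

def adapt_lambda(lam, F_i, F_bar, y, rate=0.01, lo=0.0, hi=1.0):
    """Hypothetical per-network adaptation of the correlation weight.

    If network i's deviation from the ensemble mean is small relative to
    its residual error, the network is too similar to the others: raise
    lam to push it away. Otherwise lower lam so the network focuses on
    the data-fitting term."""
    diversity = np.mean((F_i - F_bar) ** 2)   # how different from the ensemble
    error = np.mean((F_i - y) ** 2)           # how far from the targets
    step = rate if diversity < error else -rate
    return float(np.clip(lam + step, lo, hi))
```

Each network would call this once per epoch on its own outputs, so the ensemble's strengthening and weakening weights drift apart as the networks specialize.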
Instead of letting each neural network in the learning system develop self-awareness by itself during learning, a predefined self-awareness could be built into each neural network of a decentralized learning system from the start. One way to implement such predefined self-awareness is through the random assignment of subsets of the whole learning data set. It is expected that predefined self-awareness for each individual neural network will make the whole learning system more robust in online learning.
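The random-subset assignment could be sketched as follows; the subset fraction and the function name are illustrative assumptions.

```python
import numpy as np

def assign_subsets(n_points, n_nets, frac=0.6, seed=0):
    """Give each network its own random (overlapping) subset of the data,
    a predefined 'view' that distinguishes it from the other networks."""
    rng = np.random.default_rng(seed)
    size = int(frac * n_points)
    # Sampling without replacement within each subset; subsets of
    # different networks may still overlap, as in bagging-style ensembles.
    return [rng.choice(n_points, size=size, replace=False)
            for _ in range(n_nets)]
```

Each network would then train only on its assigned indices, so its identity within the ensemble is fixed before learning begins.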
|