Budget Amount
¥6,100,000 (Direct Cost: ¥6,100,000)
Fiscal Year 1993: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 1992: ¥5,300,000 (Direct Cost: ¥5,300,000)
The objective of this research was to develop the metaphor of ecological societies for distributed autonomous information systems. Several methodologies inspired by this metaphor were introduced, including neural networks, genetic algorithms, and reinforcement learning. These methodologies are applied as the fundamental strategies for achieving adaptation and evolution in such systems. In particular, we concentrated on methodologies for task processing by multiple agents in dynamical environments. To this end, we proposed extensions of the fundamental adaptive strategies and developed communication methods between the agents.
The following summarizes part of our results:
1) We introduced the concept of "information fields". Information fields mediate communication among the anonymous agents in a distributed system. On this basis we developed the theory of the vibrating potential method. This method makes it possible to construct flexible, large-scale distributed autonomous systems. Applications of the derived methods demonstrated their efficiency.
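The vibrating potential method itself is specified in the project's publications. As a rough, self-contained illustration of the underlying idea, namely anonymous agents that communicate only through a shared potential field, the following sketch guides an agent to a task source without any direct messages (the grid size, diffusion rule, and all parameters are illustrative assumptions, not the authors' formulation):

```python
# Minimal sketch of field-mediated communication among anonymous agents.
# NOTE: this is an illustrative stigmergy-style model, not the vibrating
# potential method itself; the diffusion rule and parameters are assumptions.

GRID = 20

def diffuse(field, decay=0.9):
    """One relaxation step: each cell takes a decayed max of its own value
    and the average of its neighbours, so potential spreads from sources."""
    new = [[0.0] * GRID for _ in range(GRID)]
    for x in range(GRID):
        for y in range(GRID):
            nbrs = [field[i][j]
                    for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                    if 0 <= i < GRID and 0 <= j < GRID]
            new[x][y] = decay * max(field[x][y], sum(nbrs) / len(nbrs))
    return new

def step_agent(pos, field):
    """The agent reads only the local field and climbs its gradient."""
    x, y = pos
    cand = [(x, y)] + [(i, j)
                       for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                       if 0 <= i < GRID and 0 <= j < GRID]
    return max(cand, key=lambda p: field[p[0]][p[1]])

# A task source at (15, 15) injects potential into the shared field.
field = [[0.0] * GRID for _ in range(GRID)]
agent = (2, 2)
for _ in range(60):
    field[15][15] = 1.0          # the source keeps the field "excited"
    field = diffuse(field)
    agent = step_agent(agent, field)

print(agent)  # the agent is drawn to the source without direct messages
```

The agents here never address one another; the field itself is the communication medium, which is why such schemes scale to large numbers of anonymous agents.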
Dynamical environments are intrinsic to ecological societies. We therefore developed evolution and learning methods for realizing adaptation in dynamical environments.
2) We developed an extended genetic algorithm, called the filtering GA, which can continue the search without getting stuck in and converging to local minima. The filtering GA uses coevolution between a genetic population influenced by a filter and a meta-population that creates the filter. The filtering GA can overcome problems that are hard for standard GAs.
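The precise coevolutionary mechanism of the filtering GA is defined in the project's publications. The sketch below only illustrates the general idea: a meta level builds a "filter" that penalizes regions where the genetic population has converged, pushing the search out of local optima. The landscape, the penalty-style filter, and all parameters are assumptions for illustration, not the authors' formulation.

```python
import random
random.seed(0)

# Deceptive landscape: a broad local peak at x = 0.2 and a narrow
# global peak at x = 0.8 (both chosen purely for illustration).
def fitness(x):
    local = 0.6 * max(0.0, 1 - abs(x - 0.2) / 0.3)
    glob = 1.0 * max(0.0, 1 - abs(x - 0.8) / 0.05)
    return max(local, glob)

def evolve(filtering, gens=400, size=30):
    pop = [random.uniform(0.0, 0.4) for _ in range(size)]  # start near local peak
    centers = []                 # the evolving "filter" built by the meta level
    best = max(pop, key=fitness)
    for _ in range(gens):
        def penalized(x):
            pen = sum(2.0 * max(0.0, 1 - abs(x - c) / 0.2) for c in centers)
            return fitness(x) - pen if filtering else fitness(x)
        pop.sort(key=penalized, reverse=True)
        parents = pop[:size // 2]        # truncation selection on penalized fitness
        pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
               for _ in range(size)]
        best = max(pop + [best], key=fitness)
        if filtering:  # meta level: filter out the region the population occupies
            centers = (centers + [sum(pop) / size])[-20:]
    return best

print(evolve(filtering=False))  # a plain GA tends to stay near the local peak
print(evolve(filtering=True))   # the filter keeps pushing the search onward
```

The key design point mirrors the summary above: selection acts on the filtered fitness, while the record of the best individual is kept under the raw fitness, so the filter can only redirect search, never destroy what has been found.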
3) To realize adaptive task processing by distributed agents, we developed a scheme for cooperative task processing by non-uniform agents based on reinforcement learning. The scheme was applied to a game environment in which two teams of agents compete with each other to collect food, as in real ant colonies. Through evaluation in this game environment, we discussed the characteristics and properties of learning in competing agent teams and how it increases adaptability for complex tasks.
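The game environment in the study, with two competing teams of non-uniform agents, is far richer than anything shown here. As a minimal sketch of reinforcement-learning-based task processing, the following assumes a single tabular Q-learning agent foraging in a one-dimensional corridor; the states, rewards, and parameters are illustrative assumptions:

```python
import random
random.seed(1)

# Minimal sketch: one tabular Q-learner "foraging" in a 1-D corridor.
# The study's environment (two competing teams of non-uniform agents)
# is far richer; everything below is an illustrative assumption.
N = 6                # corridor cells 0..5; the food sits at cell 5
ACTIONS = (1, -1)    # step right / step left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    """Environment dynamics: move, reward 1 on reaching the food."""
    s2 = min(N - 1, max(0, s + a))
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

for _ in range(300):                       # training episodes
    s = 0
    for _ in range(50):
        if random.random() < eps:          # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

# Greedy policy after learning: the agent heads straight for the food.
s, path = 0, [0]
for _ in range(20):                        # safety cap on the rollout length
    if s == N - 1:
        break
    s, _, _ = step(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
    path.append(s)
print(path)  # → [0, 1, 2, 3, 4, 5]
```

In the multi-agent setting each agent runs such an update on its own experience, and the competing team makes the reward landscape non-stationary, which is exactly what makes the learning characteristics discussed above non-trivial.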