Summary of Research Achievements
In the past year, we have advanced beyond the project's main goal by exploring a crucial aspect of Artificial Intelligence: how to effectively integrate new information into formal systems, such as databases or multi-agent systems, within the context of systems resilience. This is a significant challenge when the incoming data originates from diverse sources and may conflict with the system's existing knowledge. It is nevertheless vital to integrate such information while adhering to rational principles, which scholars have scrutinized for decades. Our research has focused on two main areas.
First, we identified limitations in the current framework of iterated change, specifically iterated belief revision, which governs how a system's epistemic state evolves as new information arrives. To address them, we introduced new rationality principles constraining how multiple iterations of change affect the system, and proposed concrete strategies for iterated revision that satisfy these principles. Second, we investigated belief update, a form of system adaptation distinct from belief revision, and presented two credible models for it.
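As a minimal illustration of the distinction between the two forms of change studied here (this is the textbook semantics, not the new constructions proposed in the project), revision and update can be sketched over propositional valuations ordered by Hamming distance: revision keeps the models of the new information globally closest to the current belief set, while update changes each current world pointwise.

```python
from itertools import product

def hamming(u, v):
    """Number of atoms on which two valuations disagree."""
    return sum(a != b for a, b in zip(u, v))

def revise(belief, new):
    """Distance-based revision: keep the models of the new
    information that are globally closest to the belief set."""
    d = min(hamming(u, v) for u in belief for v in new)
    return {v for v in new
            if any(hamming(u, v) == d for u in belief)}

def update(belief, new):
    """Pointwise update: for each current world, keep its closest
    models of the new information, then take the union."""
    result = set()
    for u in belief:
        d = min(hamming(u, v) for v in new)
        result |= {v for v in new if hamming(u, v) == d}
    return result

# Worlds are tuples of truth values over atoms (p, q).
# Belief: p and q agree; new information: p is false.
belief = {(True, True), (False, False)}
new = {(False, True), (False, False)}
print(revise(belief, new))  # {(False, False)}
print(update(belief, new))  # {(False, True), (False, False)}
```

The example shows why the two operations differ: revision treats the belief set as a single candidate description of one static world, whereas update adapts each possible world separately, so it retains (False, True) as a possible outcome for the world (True, True).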
Our findings have been disseminated through publications and presentations at the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR'23), a prestigious forum dedicated to the theoretical aspects of Artificial Intelligence and Knowledge Representation.