2023 Fiscal Year Research-status Report
The Problem of Other Minds in an Age of Social Robots
Project/Area Number |
23K00005
|
Research Institution | The University of Tokyo |
Principal Investigator |
O'DEA John, The University of Tokyo, Graduate School of Arts and Sciences, Professor (50534377)
|
Project Period (FY) |
2023-04-01 – 2026-03-31
|
Keywords | Philosophy of Mind / Other Minds / Consciousness / Artificial social agents |
Outline of Annual Research Achievements |
The problem of Other Minds faced unexpected new data in early 2023 with the release of ChatGPT, which prompted widespread speculation, in popular discourse and in some academic discussions, that artificial intelligence may be nearing the level of Artificial General Intelligence (AGI). An agent with general intelligence, that is to say intelligence with at least roughly the range and flexibility of human intelligence, would (almost by definition) pass the Turing Test and to that extent have a claim to be regarded as possessing a mind. On the other hand, the underlying technology (Large Language Models) seems incompatible with the presumed basic requirements for possessing a mind. However, to the extent that LLMs seem capable of “fooling” interlocutors into interacting with them as if they were conscious agents, their effect is extremely relevant to this project. On this topic, I gave a talk, “Can AI help us become better people”, at the Alliance of Asian Liberal Arts Universities’ 6th Annual President’s Forum on November 24, 2023.
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
Progress is as planned.
|
Strategy for Future Research Activity |
The next stage of the project is to evaluate the philosophical research on Direct Social Perception. The aim of this stage is to reach some tentative conclusions as to whether consciousness in some form is the sort of feature that is directly perceivable, for example through emotional expression, intention in action, or more extended interaction. If it makes sense to say that aspects of mindedness are directly perceivable, then those properties should be able to serve as criteria enabling us to detect the emergence of mindedness in artificial agents.
|