A great deal of work has been devoted to representing human knowledge in artificial systems. Since the early days of AI there has always been strong interest in drawing new tools from theories of how humans handle concepts. Formalizing these tools so that they can be turned into computer programs has come a long way, with many success stories.
Recently, we attended a very interesting conference, The Future of Teleosemantics, about one of the most discussed and promising theories of meaning developed in recent decades, namely the teleological theory of mental content, or teleosemantics. This theory can be considered one of the most fruitful research programs in contemporary philosophy of mind. Since its inception in the 1980s, teleosemantics has been continuously refined, enhanced, and extended to new domains such as biology and neuroscience. The conference, organized in Bielefeld by the research project Advancing Teleosemantics, brought together some of the most prominent figures in the field, to name a few: Ruth Millikan, David Papineau, Nicholas Shea and Marc Artiga.
What emerged is that, in the last few years, teleosemantics has seen a number of exciting new developments that seem readily exploitable in the field of AI. For instance, teleosemantics can provide interesting insights into how to build integrated AI perception and reasoning systems. Moreover, it can provide a framework for evaluating “the explanatory value” of different representational systems. It may also suggest how to design hybrid architectures in which symbolic and lexical knowledge is linked to connectionist structures, such as neural networks.
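To make the last point more concrete, here is a minimal, purely illustrative Python sketch of such a hybrid architecture, loosely inspired by the teleosemantic distinction between the producer and the consumer of a representation. Everything in it (the `Concept` class, the toy scoring weights, the function names) is our own assumption for illustration, not an established implementation.

```python
# A toy sketch of a hybrid architecture in the spirit of teleosemantics:
# a sub-symbolic "producer" emits activations, and a symbolic "consumer"
# interprets them. All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Concept:
    name: str      # symbolic label, e.g. drawn from a lexical resource
    function: str  # the role the representation is supposed to serve

def producer(features):
    """Stand-in for a connectionist component: maps raw features to
    activation scores over candidate concepts. A real system would use
    a trained neural network instead of this toy linear scoring."""
    weights = {"predator": [0.9, -0.2], "prey": [-0.3, 0.8]}
    return {label: sum(w * f for w, f in zip(ws, features))
            for label, ws in weights.items()}

def consumer(activations, concepts, threshold=0.5):
    """Symbolic side: selects the concept whose activation exceeds the
    threshold, yielding a discrete representation that downstream
    symbolic reasoning can operate on."""
    best = max(activations, key=activations.get)
    if activations[best] >= threshold:
        return next(c for c in concepts if c.name == best)
    return None

concepts = [Concept("predator", "trigger avoidance behaviour"),
            Concept("prey", "trigger pursuit behaviour")]

detected = consumer(producer([1.0, 0.1]), concepts)
if detected:
    print(f"Tokened concept: {detected.name} -> {detected.function}")
```

The point of the sketch is only this: the content of the sub-symbolic representation is fixed by the function it serves for its consumer, which is the core teleosemantic idea that such architectures could build on.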
Discussing with the speakers, we realised that, besides improving the state of the art in AI and KR, a computational implementation of teleosemantics could pave the way for large-scale case studies.
And yet, work in this direction is still lacking…