This is the second edition of the workshop IARML@IJCAI. The first edition took place at IJCAI-ECAI 2022 in Vienna, Austria, with promising impact and the participation of several colleagues from Europe, America, and Asia.
Analogical reasoning is a remarkable human capability used to solve hard reasoning tasks. It consists of transferring knowledge from a source domain to a different, but somewhat similar, target domain by relying simultaneously on similarities and differences. Analogies have preoccupied humanity at least since antiquity (cf. the works of Aristotle and Theon of Smyrna, among others) and have in more recent years been characterized as being “at the core of cognition” (Hofstadter 2001), permeating almost every aspect of cognition (Hofstadter and Sander, 2013). Analogies have been tackled from various angles.
Traditionally, analogical proportions, i.e., statements of the form “A is to B as C is to D”, are the basis of analogical inference. They have contributed to case-based reasoning and to multiple machine learning tasks such as classification, decision making, and machine translation, with competitive results. Analogical extrapolation can also support dataset augmentation (analogical extension) for model learning, especially in environments with few labeled examples. Other approaches include Dedre Gentner's Structure Mapping approach, which is based on logical descriptions (in the form of predicate-argument structures) of two domains: the more relational similarity there is between the two domains, the more analogous they can be considered. According to Hofstadter and the Fluid Analogies Research Group, analogy making is intimately related to abstraction and the search for a “common essence”, which can lead to a deep understanding of any concept or situation.
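In representation-learning settings, an analogical proportion “A is to B as C is to D” is often operationalized by the parallelogram rule over embedding vectors: the solution D is the word whose vector is closest to B − A + C. The sketch below illustrates this with toy, hand-made vectors (all names and values are hypothetical, chosen only so the example resolves cleanly):

```python
import numpy as np

# Toy word embeddings; the values are illustrative, not from a trained model.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.9, 0.0]),
    "woman": np.array([0.5, 0.1, 0.0]),
    "queen": np.array([0.9, 0.0, 0.1]),
}

def solve_analogy(a, b, c, vocab):
    """Solve 'a is to b as c is to ?' with the parallelogram rule:
    return the word whose vector is most cosine-similar to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    best, best_sim = None, -np.inf
    for word, vec in vocab.items():
        if word in (a, b, c):      # exclude the query words themselves
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(solve_analogy("man", "king", "woman", vectors))  # → queen
```

With real pretrained embeddings the same procedure recovers many lexical and relational analogies, though it is only one of several formalizations of analogical proportions discussed at the workshop.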
Recent neural techniques, such as representation learning, have enabled efficient approaches to detecting and solving analogies in domains where symbolic approaches had shown their limits. Transformer architectures trained on vast amounts of data have given us Large Language Models (LLMs), such as ChatGPT, which seem to exhibit human-like conversational and analogy-making capacities (Webb et al. 2022). However, better evaluation metrics are needed in order to measure elusive concepts such as intelligence and understanding (Mitchell 2023). More than ever, we need to understand the role that analogies, abstraction, and similarities between concepts play in language and cognition.
The purpose of this workshop is to bring together AI researchers at the crossroads of machine learning, natural language processing, and knowledge representation and reasoning who are interested in the various applications of analogical reasoning in machine learning or, conversely, of machine learning techniques to improve analogical reasoning.