Document Type

Thesis

Degree

Master of Arts

Major

Philosophy

Date of Defense

7-27-2024

Graduate Advisor

Gualtiero Piccinini

Committee

Gualtiero Piccinini

Lauren Olin

Waldemar Rohloff

Abstract

Large Language Models (LLMs) based on the Generative Pre-trained Transformer architecture have recently achieved breakthroughs in multilingual language processing tasks, displaying language processing capabilities comparable to those of humans. However, their performance on tasks involving reasoning, especially logical reasoning, shows significant shortcomings. In this paper, we argue that logical reasoning competence requires a certain type of representation acquisition capability. On this basis, and in order to assess whether LLMs have the potential to overcome the shortcomings in their logical reasoning competence as they develop further, this work compares the representation acquisition processes of humans and LLMs. The comparison reveals that although LLMs use representations similar to those of humans when processing multilingual language tasks, they lack the representation acquisition capability that humans possess, and there is a fundamental difference between the representation acquisition processes of humans and LLMs. We refer to this difference as a mapping deficit in the representation acquisition of LLMs. This deficit explains both why LLMs succeed at multilingual language tasks despite lacking human-like representation acquisition capability and why their logical reasoning competence shows significant shortcomings. This work aims to inform future efforts to enhance the logical reasoning competence of LLMs: we argue that if the mapping deficit in LLMs' representation acquisition is resolved, their logical reasoning competence will improve as well.