Week 9
Critical Approaches to Language and Meaning
Meaning without reference in large language models
- This short paper argues that LLMs can acquire human-like word meaning and understand relationships between words, even without directly interacting with the world. It’s hard to judge the merit of this paper since it is primarily a literature-based summary and opinion piece. The argument that reference is strictly necessary for language use has been weakened by the performance of current LLMs. However, the further claim that LLMs arrive at meaning in a way similar to humans may be a stretch.
Language Models as Agent Models
- This paper was interesting, as I have recently been considering a similar idea: if an LLM can internally infer the author’s intention or mental state, that inference could constrain the space of possible predictions more effectively than surface textual patterns alone. The paper explicitly adopts a pragmatic theory of language, citing thinkers like Austin.
Do Language Models’ Words Refer?
- This paper introduced me to the internalist/externalist distinction regarding how words gain meaning. Its key contribution is reframing the debate about whether Language Models (LMs) can refer. The authors argue that common skepticism—the idea that LMs cannot refer because they lack direct world experience, internal beliefs, or sensory input (the “grounding problem”)—implicitly relies on an internalist view of meaning.
- The paper challenges this by applying externalism. Using thought experiments like Putnam’s Twin Earth (‘water’ meaning H₂O vs. XYZ despite identical internal thoughts) or the elm/beech case (referring correctly despite lacking distinguishing knowledge), it argues that human reference often depends less on the speaker’s internal mental state and more on the word’s external causal history within a speech community or deference to experts.
- Therefore, the paper contends, if humans can refer to things like ‘elm trees’ or ‘RNA vaccines’ without deep personal understanding or experience, then the absence of such internal states in LMs isn’t, by itself, a decisive reason to conclude their words cannot refer. This shifts the focus away from the LM’s internal “mind” and towards its connection (via training data) to the historical, external chains of reference in human language.