Unveiling the Epistemic Foundations of AI Testimony by LLMs

Published on Fri, Apr 25, 2025

Testimony by LLMs | AI & SOCIETY

Artificial testimony generated by large language models (LLMs) serves as a valuable source of knowledge. However, the criteria for successfully acquiring knowledge from artificial testifiers differ from those for human testifiers. As a result, the epistemic foundation of artificial testimonial knowledge deviates from traditional epistemological theories of human testimony. Building on Thomas Reid's account of testimony, a new epistemological theory is proposed to assess the credibility of artificially generated statements.

With the advancement of Artificial Intelligence (AI), especially Large Language Models (LLMs), individuals increasingly rely on the credibility of information produced by LLMs such as ChatGPT. This optimistic perception of AI systems faces a challenge, however, since current LLMs fall short of passing the Turing Test. And even granting Turing's behaviorism, which overlooks the inherent differences between human beings and LLMs, it must be acknowledged that current LLMs do not replicate the way human interlocutors communicate with users.

Two key epistemic questions arise concerning the use of LLMs for knowledge acquisition:

The Question of Dissimilarity

Does artificial testimony generated by LLMs differ significantly from human testimony as a viable knowledge source?

The Question of Justification

What unique characteristics must LLMs possess to establish themselves as credible sources of knowledge?

One perspective holds that machine-generated text, including the output of LLMs, and human testimony stand on common ground as sources of knowledge governed by similar epistemic principles. Sosa, for example, advocates an instrumentally reliabilist view of testimony, on which both machines and human testifiers must meet a reliability standard, conveying accurate information, for knowledge acquisition to succeed. However, this externalist approach overlooks the intentional aspect of human testimony: the recipient places trust in the testifier, which is distinct from mere reliance on an instrument.

An alternative viewpoint emphasizes the dissimilarities between AI-generated content and human testimony, questioning whether it is even meaningful to label AI systems as potential testifiers. Yet while LLMs may lack the intentional communication characteristic of human agents, dismissing their role in testimonial practices underestimates how effectively they interact with human users. Although AI systems have no inherent intentionality, designers can embed intentions, such as deception, into them through programming, thereby shaping users' perceptions during human-AI interactions.

In practice, users often attribute epistemic agency to intelligent LLMs, treating their outputs as valuable sources of information. As researchers explore the dynamics of AI testimony and human-machine interaction, the textual output of AI systems is increasingly recognized as falling within the realm of testimony. Viewing LLMs as conversational interlocutors therefore offers a compelling framework for leveraging their capabilities in knowledge transmission.