Do LLMs Really Reason?

June 10, 2025


This piece originally appeared at Second Best.

Large language models have impressive linguistic abilities, but do they understand what they say or merely parrot? “Reasoning models” like OpenAI’s o3 are great at multi-step problem solving, but do they really reason or is it just elaborate pattern matching? Anthropic’s Claude reports having inner experiences, but is this evidence of true subjectivity or an oddity of next token prediction?

This is my second post interpreting Kant and Hegel's philosophical systems through the lens of modern concepts in AI and computer science. The previous post dealt with the theoretical dimension of reason, i.e. our relationship to the world and our knowledge of it. This post deals with the practical dimension of reason, including morality, language, and culture, although the two dimensions are interrelated.

As thinkers concerned with the nature of thought itself, Kant and Hegel offer insights that are surprisingly relevant to the questions above and more. Indeed, as we'll see, Hegel elaborated on Kant to develop a theory of meaning and autonomy that is strikingly similar to how LLMs and reasoning models work in practice—and which may even provide a recipe for training AIs with a genuine sense of self.

Continue reading at Second Best.
