Recent breakthroughs in large language models (LLMs) demonstrate impressive fluency and in-context learning abilities.
However, these purely neural models struggle with deeper logical reasoning and compositional generalization.
On the other hand, symbolic representations like knowledge graphs, business logic flows, and software programs enable robust reasoning but face challenges in scale and interface complexity.
Emerging techniques explore combining neural networks with symbolic systems in a “neurosymbolic” approach to get the best of both worlds. This article analyzes two papers showcasing different applications but similar overall philosophies for synergizing LLMs with formal representations.
What is Reasoning?
Reasoning refers to the process of applying logic to derive new conclusions from available information. Key aspects of reasoning include:
- Logical reasoning: Making inferences through valid arguments, using techniques such as deduction, induction, and abduction.
- Causal reasoning: Identifying correlation/causation between events and variables to make predictions.
- Commonsense reasoning: Using everyday knowledge about the world to interpret situations.
- Explainable reasoning: Articulating the underlying thought process behind conclusions.
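To make the first of these concrete, deductive inference can be sketched as a toy forward-chaining rule engine: given a set of facts and rules of the form "premises imply conclusion," it repeatedly applies rules until no new facts emerge. The fact and rule names below are illustrative, not drawn from either paper discussed in this article.

```python
# A toy forward-chaining engine illustrating deductive inference.
# Facts are strings; rules are (premises, conclusion) pairs.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rules: the classic "all humans are mortal" syllogism.
rules = [
    (["socrates_is_human"], "socrates_is_mortal"),
    (["socrates_is_mortal"], "socrates_will_die"),
]
derived = forward_chain({"socrates_is_human"}, rules)
print(sorted(derived))
```

Each derived fact carries an explicit rule chain behind it, which is exactly the kind of explainable reasoning trail that pure LLM generation lacks.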
Advanced reasoning skills make it possible to tackle complex real-world problems methodically, much like programming. Hallmarks of expert reasoning include strategizing solutions, modularizing problems, seeking additional data, and updating beliefs in light of new evidence.
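The last hallmark, updating beliefs on new evidence, has a standard formal core in Bayes' rule, P(H|E) = P(E|H)·P(H) / P(E). A minimal sketch, with all probabilities chosen purely for illustration:

```python
# Updating a belief with Bayes' rule.
# prior:              P(H), belief before seeing evidence E
# likelihood_if_true: P(E|H)
# likelihood_if_false: P(E|not H)

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(H|E) after observing evidence E."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Illustrative numbers: a 0.3 prior, evidence 4.5x more likely under H.
posterior = bayes_update(prior=0.3, likelihood_if_true=0.9, likelihood_if_false=0.2)
print(round(posterior, 3))  # 0.659
```

The evidence raises the belief from 0.3 to about 0.66; the same update rule applied repeatedly is the backbone of the kind of consistent belief revision that, as the next section notes, current LLMs struggle to perform over long contexts.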
Limits of Current LLMs
Large language models display impressive fluency in language and narrow task competency. However, evaluations show gaps in complex logical reasoning, long-term consistency, multi-step inference chains, and providing explainable reasoning trails. LLMs currently lack robust structured thinking.
The Neurosymbolic Solution
Neurosymbolic systems combine neural learning approaches like LLMs with structured…