The Quest for Truth: From Ramon Llull’s Logic Machine to Modern Language Models

In the 13th century, Ramon Llull, a patrician from Majorca, embarked on a quest to build a logic machine that could prove the existence of the Christian God. He envisioned a device that would converse with its readers and give truthful answers about matters of faith, something like an early chatbot. Inspired by a combinatorial device used by Muslim astrologers, Llull's machine was meant to combine divine attributes into logically true statements. Despite his meticulous worked examples, the invention never gained traction, and Llull met a tragic end, reportedly stoned on a missionary trip to Tunisia.
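The mechanical heart of Llull's device was simple combinatorics: rotating wheels paired every divine attribute with every other. A minimal sketch in Python, with an illustrative handful of the "dignities" Llull used (the exact list varied across his writings):

```python
from itertools import combinations

# A few of the divine attributes ("dignities") Llull combined;
# this particular selection is illustrative, not Llull's canonical list.
attributes = ["goodness", "greatness", "eternity", "power", "wisdom"]

# Llull's wheels paired each attribute with every other, yielding
# statements of the form "goodness is great", "power is eternal", etc.
pairs = list(combinations(attributes, 2))
print(len(pairs))  # 5 attributes taken 2 at a time: 10 pairings
```

The point of the sketch is the exhaustiveness: the wheels enumerated every pairing mechanically, which is what made the device feel, to Llull, like a machine for generating truths rather than a mere mnemonic.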

Throughout history, humans have sought automated certainty: a means of reaching truth beyond human fallibility. Gottfried Wilhelm Leibniz, the 17th-century mathematician, envisioned a mechanical reasoner that could calculate truth, once the fundamental concepts from which all others are constructed had been discovered. He dreamed that such a divine language, representing the relationships between thoughts, would perfect every field of knowledge and bring about utopia.

George Boole, an English mathematician of the 19th century, carried the pursuit further by turning logic itself into algebra. Boole showed that logical propositions could be written as mathematical equations, so that algebra could be applied to ideas in order to calculate truth. Yet Boole's logic found little practical application in his lifetime and remained largely confined to philosophy departments.

It was not until the 20th century that Claude Shannon, an American mathematician, put Boolean logic to work, showing in his 1937 master's thesis that telephone switching circuits could be designed and simplified using Boole's algebra. Shannon's groundbreaking work laid a foundation of modern computing, giving rise to the binary logic that powers the digital realm and promising to translate human thought into the organized language of logic.
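Shannon's observation was that switches wired in series behave like AND (current flows only if both are closed) and switches wired in parallel behave like OR. A sketch, with a closed switch as 1 and an open switch as 0 (the routing condition at the end is a made-up illustration, not a real exchange circuit):

```python
# Shannon's mapping: a closed switch is 1, an open switch is 0.
def series(a, b):
    # Current flows through switches in series only if both are closed: AND.
    return a and b

def parallel(a, b):
    # Current flows through parallel switches if either is closed: OR.
    return a or b

# A hypothetical routing condition: the call goes through if trunk A
# is closed, or if both trunk B and the overflow switch are closed.
def circuit(a, b, overflow):
    return parallel(a, series(b, overflow))
```

Once circuits are expressions, Boole's algebra lets an engineer simplify them on paper before touching any hardware, which is what made the thesis so consequential.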

The development of language models such as ChatGPT has carried the quest for automated certainty further. These models, trained on vast amounts of text, mimic the statistics and patterns of natural language. They can generate seemingly coherent responses, but they cannot distinguish fact from fiction: a model like ChatGPT is a distillation of human beliefs and chatter, not an external calculator of pure truth.
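The phrase "mimic the statistics of natural language" can be made concrete with a toy bigram model: count which word follows which in a corpus, then predict the most frequent successor. Real models are vastly larger and subtler, but the principle is the same, and so is the limitation: the model reflects what the text says, not what is true.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent successor seen in the training text.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Nothing in this procedure checks whether cats actually sit on mats; it only records that the corpus says so, which is the sense in which such models distill chatter rather than calculate truth.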

Despite these advances, language models still fall short of the original vision of a truth machine. They make reasoning mistakes much as humans do, and they struggle to self-correct; prompting them to reflect on their answers can even degrade performance. The pursuit of automated certainty has, in a sense, not progressed far beyond Llull's logic machine: instead of certainty, it is uncertainty that has been automated.