The idea that thought is computation or that human thinking is a natural form of computing is ubiquitous in science and philosophy today. Interestingly, over three centuries ago, the German polymath Gottfried Wilhelm Leibniz (1646–1716) embraced the idea that thought is essentially computational and had himself begun trying to ‘mechanize reason’. Leibniz even worked on developing the binary system of 0s and 1s with which our computers operate – all digital data stored and used by computers consists of strings of binary digits, or bits – and he tried to develop a calculus of reasoning, work that contains some of the most pathbreaking early insights into algorithmic problem-solving and computational methods.
Implicit in computational interpretations of thinking are ideas such as the decomposition of complex ideas into simpler ones, the ordering and condensing of ideas into symbolic format, and the processing of these symbols according to a set of rules. The resulting picture is that of systematic and repeatable step-by-step transitions, as portrayed by ‘well-behaved’ rational thought, whether in humans or machines. And the universality of computation (that it can be applied to many systems) was famously established by Alan Turing in 1936, when he showed that a single machine was able to perform any algorithmic process, which made possible the development of general-purpose computers.
Leibniz was a man of extraordinary intellectual energy, prodigiously wide-ranging interests and breathtaking work output. Apart from his official duties as court adviser and librarian, Leibniz contributed to a vast array of fields. Natural philosopher, mathematician, lawyer, theodicist, and inventor of calculators, clocks and windmills, Leibniz was a truly global thinker with a network of over a thousand correspondents. What made him in many ways unusual was his relentless optimism about progress, his tireless efforts towards creating academic institutions around Europe that would bring knowledge to the people, and his honest desire for intellectual collaboration, communication and exchange.

Leibniz’s pioneering work is seen today as anticipating symbolic logic, Boolean algebra and binary coding, although his ideas never directly influenced these later developments. It’s more the case that once his visionary insights emerged, much later, from the thousands upon thousands of archived manuscript pages, it became clear that many of his thoughts were extraordinarily prescient and seem entirely consonant with our modern understanding of machine intelligence.
Leibniz trusted in human ingenuity and creativity, but he also believed that there were tasks we could and should relegate to machines. Clocks could tell time so much better than humans who inevitably fell asleep at some point, and mechanical calculators such as the Pascaline (which Blaise Pascal had built to help his father in his work as a tax supervisor in Rouen) proved to be far more reliable in their additions and subtractions of taxes. And perhaps precisely in order to free up and strengthen the imagination, Leibniz saw the mechanization of certain areas of human thinking as entirely justified if not liberating. Why waste time on tedious and lengthy mathematical calculations when anyone could do the same work just as accurately and far more efficiently with the aid of a machine?
And if we could save time and effort by letting machines do basic mathematical calculations for us, might other forms of reasoning be computable? How about generating logical consequences to be drawn from premises, or calculating the right legal sentence for an offence according to the law and the facts, or finding the most plausible explanations for given phenomena? Was there a way to mechanize reasoning altogether – careful step-by-step thinking – and what would it take? Was rational thought mechanically possible, and, if so, how?
In what follows we will look at some of Leibniz’s early projects where he already presented us with many of the theoretical prerequisites of today’s thinking and reasoning machines: a combinatorics of symbols, the idea of a universal language for easy computing, binary notation and algorithmic rules for structuring processes, and instructions for the first calculating machines. This is how it all began …
The Laboratory of Ideas
The scene is set for the year 1666, when a twenty-year-old Leibniz submitted his doctoral thesis in philosophy on what he considered one of the most urgent projects for natural philosophy and the developing sciences of his era. His dissertation gave a detailed outline of what he called the ‘art of combinations’, where he provided several methods and a thoroughly structured approach to understanding and applying combinatorial rules in various areas of discourse. The ‘art’ was that of being skilful rather than artistic (although conceptual artists like Sol LeWitt have used combinatorics in creating visual art pieces). Understanding combinatorics, Leibniz thought, would give us the mastery to construct a comprehensive system of knowledge and a universal method of logical reasoning. As a method, the art of combinations would unveil all possible variations or possible arrangements and rearrangements of fundamental building blocks in whatever field of analysis. And to get there, the task had to be split into three phases.
‘Leibniz trusted in human ingenuity and creativity, but he also believed that there were tasks we could and should relegate to machines.’
A first step was to disassemble and simplify the complexities of our thoughts and ideas about things – in particular ideas that were considered firm knowledge – to arrive at the most elementary components of things, just as chemistry decomposes complex compounds into chemical elements to understand basic being. Leibniz thought this was in principle possible in all fields of knowledge and could be achieved by drawing appropriate distinctions and dissolving complex ideas into ideas so basic that we were left with undefinables – a true alphabet of human thought – from which everything else originates. This would give us a fundamental set of concepts on which a ‘logic of invention’ could operate via combinatorial moves.
Once these primitive concepts were identified and a conceptual network of primitives was established, we would proceed by attributing each primitive concept with a symbol which in turn would have a place within a universal system of signs. And as a final step we could then take these encoded simple concepts and recombine them into new complex ones. With analysis as a procedure of decomposition, and synthesis going in the opposite direction from the parts to the whole, Leibniz assumed that once all these steps were taken, we could not merely reconstruct but generate new possible truths via his combinatorial technique.
Leibniz imagined that this ultimate compression of a particular idea into one symbol or sign would operate on the deepest level of abstraction, and here we could calculate outcomes and discover new proofs, all the while transforming our messy physical world into something that had the structure and clarity of algebra or arithmetic. In other words, as Leibniz rather confidently proclaimed, with the help of the ‘art of combinations’ all arguments and controversies could be solved calmly and politely by having the quarrelling parties sit down with pen and paper and simply ‘calculate’ the solution.
The art of combinations gave a first glimpse of what would become a lifelong project for Leibniz, that is, to establish the foundations of a science of everything – a type of meta-science concerned with the ordering of all forms of knowledge, whereby analysis and synthesis played a technical role towards the establishment of a system of pictures and symbols that represented fundamental ideas or concepts. Although Leibniz, looking back later, admitted that this project was perhaps somewhat overambitious and naïve, it seems that right from the start his intuition was to focus on the systematization and formalization of knowledge, imagining a whole architecture for how – in modern terms – knowledge could be made computational.
Further projects connected with the combinatorics started to appear under a variety of rather ambitious names. These included a ‘universal language’, a ‘calculus of reasoning’ and an ‘encyclopaedia’, each again contributing to and in service of a grand ‘general science’. Applications of his method would ultimately appear in all areas – science, law, medicine, engineering, theology, and perhaps even music and the fine arts.
The Art of Signs and the ‘Symbolator’
As part of the combinatorics, the art of signs was a visionary attempt to create a universal, computable language of thought by assigning symbolic representations to things and ideas, and in effect creating a uniform ‘alphabet of thought’.
Leibniz thought that individual primitive concepts should be replaced by some kind of encoded markers which were free of any ambiguities so often associated with the words of a natural language. Leibniz described symbols as clearly visible – either written or drawn or sculpted – and representing a certain undeniable thought. The symbol was meant to capture synoptically in a single glance the essence of what it stood in for, and being such it would have a fixed place amongst symbols in the language of thought. In some of his writings Leibniz suggested that a symbol should perform well enough in its pictorial representation of a thing so that anyone, no matter where on earth and no matter what language they spoke, could figure out what it meant.
The ‘art’ in the art of signs consisted in carefully forming and ordering symbols so that they reflected both the structure of human thought and, through it, the rational structure of the world. By arranging symbols to match how our thoughts relate to one another, complex ideas could be built out of simpler parts. This allowed reasoning to become clearer and less ambiguous than in ordinary language, helping us to move towards the truth, even though the full complexity of reality can never be completely captured by symbols. Symbolic notation was there to help with the visualizations of relations.
Akin to mathematical calculations, the vision was to establish a calculus of reason – a symbolator that would operate with the help of a network of abstract symbols to determine systematically the truth or error of things. Leibniz thought that his symbol-methodology would grant us a ‘very powerful improvement of the human reason’, which may even be applied to investigations concerning history, the ‘examination of natural bodies’, medicine, law, and, in general, to any activity involving reason and any kind of decision-making.
There would be occasions when calculations or computations proceeded on a straightforwardly deductive basis, following the entailment relations that Aristotle had already established with his syllogisms, or on a par with those we find in geometry or in analytic propositions. Here, as in any system based on formal logic, we can be sure to draw the correct inference by paying attention exclusively to the way premises are related to one another – that is, on purely structural grounds – independently of their contents, so that B follows from ‘A and B’ regardless of what A and B mean. As long as we stick to the rules as defined, we will never infer a falsehood from true premises, even if we have no idea what either the premises or the conclusions are about. This means that we have rules that allow us to pass from the facts that ‘I am in a room, and this room is in a house’ to the fact that ‘I am in a house’.
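The content-blind character of such rules can be made concrete in a few lines of code. The sketch below is ours, not Leibniz’s: the relation name ‘in’ and the constants are illustrative placeholders, and the single rule applied is transitivity, matched purely by the shape of the facts.

```python
# A sketch of purely structural inference: the rule fires on the *shape*
# of the facts, never on what the symbols mean. Facts are triples like
# ("in", "me", "room").

def forward_chain(facts):
    """Apply the transitivity rule (X in Y, Y in Z => X in Z) to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (r1, x, y) in list(facts):
            for (r2, y2, z) in list(facts):
                if r1 == r2 == "in" and y == y2:
                    new_fact = ("in", x, z)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

derived = forward_chain({("in", "me", "room"), ("in", "room", "house")})
print(("in", "me", "house") in derived)  # True
```

The procedure never inspects what ‘me’, ‘room’ or ‘house’ mean; it fires on structure alone, which is exactly what makes such inference mechanizable.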
But there might also be applications when the solution is neither determined nor fully expressible by the data, so that our computations are subject to the rules of inductive logic and probable reasoning (whether Bayesian or stochastic). Under these rules we are looking for the most probable scenario or the hypothesis that provides the best explanation for why things turn out to be as they are.
Once these various forms of reasoning had been formalized into algorithmic symbol manipulations, perhaps the day would come when machines could be built to compute and calculate outcomes and solutions for us. We could then show, with a satisfying clarity and directness, how rational, reason-guided thought transitions are mechanically possible. This, Leibniz believed, would produce not just better ways of thinking but lead to immense progress and the unlocking of the many mysteries of the natural world. Throughout his life Leibniz left many preliminary works such as tables of definitions and sketches where he, for example, tried to use prime numbers as unique signs. And yet, despite significant advances and no lack of effort, as a project, the ‘universal system of signs’ was never realized. Nevertheless, Leibniz’s interests were not merely theoretical.
‘From his plans for the mechanization of well-behaved thought, to the building of mechanical automata that could compute for us, to binary notation, Leibniz’s work was at the very least eerily ahead of its time.’
Calculators are Go
A couple of years later, by the early 1670s Leibniz had decided against an academic appointment at the University of Altdorf, had very briefly flirted with alchemy as a secretary of a society in Nuremberg, and eventually decided to embark on a life working (where there might be funding for his many interests) as a court adviser and diplomat. He was employed in various capacities and tasked with different projects, amongst them to give legal advice on questions of ongoing political wranglings and territorial disputes in Europe.
One such diplomatic posting in 1672 took Leibniz to Paris, where he would happily remain for four more years, thoroughly enjoying its intellectual climate, learning about all the very latest ideas and methods, and interacting with many of its leading lights. With only rudimentary knowledge of mathematics, Leibniz decided to embark on more serious studies and sought out the biggest name in town, the highly respected mathematician Christiaan Huygens, who, before agreeing to mentor him, set Leibniz a test (of finding the sum of the reciprocals of triangular numbers). Leibniz returned in a matter of days with the correct answer. The subsequent four years under Huygens’s mentorship proved to be amongst the most intellectually fertile periods of Leibniz’s life, culminating in groundbreaking mathematical discoveries.
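Huygens’s test problem has a neat closed-form answer. The nth triangular number is T(n) = n(n+1)/2, so 1/T(n) = 2(1/n − 1/(n+1)) and the series telescopes to exactly 2 – the answer Leibniz came back with. A few lines suffice to watch the partial sums converge (the code and function name are ours, for illustration):

```python
# Huygens's test: sum the reciprocals of the triangular numbers
# 1, 3, 6, 10, ... (T_n = n(n+1)/2). Since 1/T_n = 2*(1/n - 1/(n+1)),
# the series telescopes: the partial sum is 2 - 2/(n+1), which tends to 2.

def partial_sum(terms):
    return sum(2.0 / (n * (n + 1)) for n in range(1, terms + 1))

for terms in (10, 1_000, 100_000):
    print(terms, partial_sum(terms))  # creeps up towards 2
```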
Nevertheless, in the early days of January 1673, Leibniz found himself in London partly on behalf of a diplomatic mission, but also with a note from Huygens, who informed Oldenburg, the secretary of the Royal Society, that ‘Mr Leibniz will show you a version of his machine for the multiplication of numbers, which is very ingenious.’ Leibniz had become preoccupied with improving on Pascal’s calculator and started to develop his own internal mechanisms for a mechanical machine that could not only do addition and subtraction, but also multiplication and division.
On 22 January 1673, Leibniz attended the meeting of the Royal Society where he presented his then still incomplete calculating machine – a delicate device made of wood, with wheels, a stepped drum as the core calculation mechanism, dials and cranks – which did not function as well as it could, leading Leibniz to promise a return with an updated and improved version as soon as he could provide the appropriate instructions for casting it in metal. (See Figure 1.)
Leibniz’s Stepped Reckoner – the first calculator that could add, subtract, multiply and divide.

Most of the esteemed members of the Society were impressed enough with the new calculating machine to elect Leibniz to the Royal Society in April 1673. Nevertheless, the completion of the calculator had to wait. In the following months and to Oldenburg’s dismay, Leibniz seemed to concentrate less on his engineering project and more on mathematical solutions to what would become his independent discovery of the infinitesimal calculus in 1674 – an achievement he would later have to share with the influential Isaac Newton.
As with many of his projects, Leibniz continued refining the design of the Stepped Reckoner over the next two decades, but work progressed only very slowly, often delayed because it was difficult to find craftsmen to build it. On several occasions over the years, Leibniz claimed that the machine was finally completed. In reality, this was a ruse to attract investors, who he thought were more likely to buy 200 units of a product they believed to be finished than to support him in what was effectively the research and development phase. Only its gear mechanism, called the Leibniz wheel, went into production, appearing in many calculating machines over the next 200 years. In the meantime, Leibniz started to become preoccupied with yet another project that would foreshadow developments in computer science.
Everything Reduces to 0 and 1
Although there are other number systems in practice today, decimal, or base-10, is by far the most common. We count with the ten digits 0 to 9 and then carry over into the next position to start again – but this has not always been the case. Decimal numbering with Hindu-Arabic numerals had been introduced to Europe in the thirteenth century by Leonardo Fibonacci in his Book of Calculation but was adopted in trade only slowly and reluctantly.
In the second half of the 1670s – Leibniz was still only in his early thirties – further projects to do with decomposition and reduction emerged and started to take shape, at first only in the sketches of number tables and simple sums scrawled in the margins of manuscripts. Leibniz’s preoccupation with reductive techniques had begun to expand into geometry and number systems. In geometry he thought everything could be expressed by a straight line and a circle. In number theory all we needed were the digits 0 and 1 to express all other numbers. Finally, in a piece called ‘On Binary Progression’ dated March 1679, we can find an outline of his thoughts on algorithms for conversion between decimal and binary notation and his attempts to mechanize binary calculation.
Manuscript evidence from those years shows Leibniz jotting down the geometric sequence of powers of 2 – 1, 2, 4, 8, 16, 32, 64 … – and he explained much later, reminiscing about his explorations, that it had struck him when working more generally on devising methods and formulae that, in comparison with other non-decimal number systems, binary was ‘the simplest and most natural base’.
In binary the value of a digit depends not only on the digit itself but also on its position in a sequence. Using only 0s and 1s, we can write the first sixteen numbers as follows:
1 = 1      5 = 101     9 = 1001    13 = 1101
2 = 10     6 = 110    10 = 1010    14 = 1110
3 = 11     7 = 111    11 = 1011    15 = 1111
4 = 100    8 = 1000   12 = 1100    16 = 10000

For Leibniz, binary notation – the use of only two signs to express all numbers – offered a clarity that proved useful in his work on problems involving prime and perfect numbers.
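In modern terms, the positional principle yields simple conversion algorithms of the kind Leibniz outlined in ‘On Binary Progression’. A minimal sketch (the function names are ours):

```python
# Binary <-> decimal conversion via the positional principle: each place
# carries a power of 2.

def to_binary(n):
    """Repeatedly divide by 2; the remainders, read backwards, are the bits."""
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

def from_binary(bits):
    """Fold left: each step shifts the value one place and adds the new bit."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(to_binary(13))         # 1101
print(from_binary("10000"))  # 16
```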
Leibniz kept quietly working on binary, which he deemed superior in matters of theory, as well as on base-16 (or ‘sedecimal’ as he called it) for matters of practice – both at the core of computer programming today, with base-16 serving as a particularly compact and efficient way to represent binary data. Representing numbers in binary with its positional notation would become absolutely fundamental to computer technology and computer programming because of its direct correlation with the on/off states of electrical circuits.
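The compactness of base-16 is easy to demonstrate: since 16 = 2⁴, each hexadecimal digit stands for exactly one group of four bits, so a hex numeral is a quarter the length of its binary equivalent. A minimal illustration:

```python
# Base-16 as shorthand for binary: since 16 = 2**4, one hex digit encodes
# exactly one group of four bits.

n = 0b1101_0110_1111    # twelve bits, written in groups of four
print(format(n, "b"))   # 110101101111
print(format(n, "x"))   # d6f -- one hex digit per 4-bit group
```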
Unable to stir up much interest from amongst his peers for the time being, Leibniz filled many pages with investigations into some interesting recurring patterns he observed in the binary representations of numbers when arranged in columns (their ‘column periodicity’), patterns that would later prove significant in mathematical and computational contexts.
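The periodicity is easy to see in code: write 0, 1, 2, … in binary one under another and read down a column, and the kth binary digit (k = 0 for the least significant) cycles with period 2^(k+1). A small sketch of this, with an illustrative function name of our own:

```python
# 'Column periodicity': the k-th bit of the numbers 0, 1, 2, ... repeats
# with period 2**(k+1).

def bit_column(k, count):
    """The k-th binary digit of each of 0, 1, ..., count-1."""
    return [(n >> k) & 1 for n in range(count)]

print(bit_column(0, 8))  # [0, 1, 0, 1, 0, 1, 0, 1]
print(bit_column(1, 8))  # [0, 0, 1, 1, 0, 0, 1, 1]
print(bit_column(2, 8))  # [0, 0, 0, 0, 1, 1, 1, 1]
```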
Over time, he started thinking of binary as having even broader metaphysical significance – and perhaps serving as a crucial nexus connecting philosophy, theology and mathematics. Around 1694–5, Leibniz ventured to suggest for the first time that the representation of all numbers by 0 and 1 was comparable to the theological doctrine of creation ex nihilo – the creation of all things out of nothing (0) by God (1).
And another exciting development came when after writing to a couple of Jesuit missionaries in China, Leibniz received a long-awaited letter from Joachim Bouvet – a long-time scholar of the Yijing or Book of Changes whose job it was to tutor the great Kangxi emperor for up to two hours a day in algebra and geometry. Upon reading Leibniz’s account of binary arithmetic, Bouvet was struck by the resemblance between Leibniz’s binary numeration and the sixty-four hexagrams of the Yijing, in which reportedly the unfolding of the universe is vividly portrayed as the mixing of the yin and yang cosmic forces. Leibniz’s binary, Bouvet suggested, and the hexagrams with six stacked lines (unbroken ones for yang, and broken ones for yin) had exactly the same underlying formal structure. (See Figure 2.)
The Yijing, or Book of Changes, represents the structure of change in the universe through broken and unbroken lines (yin and yang), which Leibniz famously compared to the binary digits 0 and 1.

Resonating like a symbolic echo from an ancient civilization, the Yijing, so it seemed, confirmed that Leibniz had not been alone in his quest for the primitives of thought and a language that would make the act of thinking, like the act of calculation, a reflection of the fundamental structure of nature itself. Binary on this reading was in fact the key to deciphering the secrets of the famous hexagrams of the Yijing and with it the secrets of creation itself. Leibniz was thrilled.
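The formal correspondence Bouvet spotted is easy to state in modern terms: a hexagram is six stacked lines, each of two kinds, so there are 2⁶ = 64 of them – exactly the 6-bit binary numbers 0 to 63. The sketch below enumerates them; which line-type maps to 0 and which to 1 is merely a convention (we pick broken = 0, unbroken = 1).

```python
# Every hexagram = six lines, each broken or unbroken, so the 64 hexagrams
# enumerate the 6-bit binary numbers. Convention here: broken = 0, unbroken = 1.
from itertools import product

hexagrams = list(product("01", repeat=6))
print(len(hexagrams))          # 64
print("".join(hexagrams[19]))  # 010011 -- the 6-bit binary form of 19
```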
As history has it, not much more came of it after that. Apart from his binary calculator (1679), Leibniz designed a binary clock for the visually impaired in 1716 (the year he died), which could be read with the help of a single hand and an ingenious arrangement of dimples and notches on the clock’s face that reflected the twelve hours. Only the basic elements of Leibniz’s work on binary notation and arithmetic were published in 1705. His more advanced ideas – for example, on binary fractions, column periodicity, and even binary calculating machines – remained hidden among his papers for many years. They were only fully appreciated much later, after George Boole’s discovery of binary logic in 1847, when its symbolic language and algebraic method revealed fundamental logical principles underlying human thought and reasoning.
Rationality is Mechanically Possible!
Leibniz did not quite have the means or the technological provisions to create a reasoning machine any more sophisticated than the mechanical calculator he designed. But he intended for the elements of his calculus of reasoning to become a universal procedure whereby symbols were encoded signs that could stand in for many things as he explained: numbers in algebra, or points in geometry, or qualities in Aristotelian syllogistic logic, or simply words of a natural language in cryptography. Symbolic representation offered a way of encoding knowledge and gave us a principled way of handling its contents according to syntactic rules, that is, rule-based algebraic procedures that could be applied universally, independent of context.
This would become a basic insight for any attempt to mechanize reason (to produce ‘well-behaved’ thought). In certain crucial domains, a symbol’s form – its shape or structural characteristics – tracks its meaning, so that operating on the form leaves the content intact. This assumption is at work in Leibniz’s calculus of reason, in classical logic as we saw, but also in the Turing Machine, and in symbolic and connectionist artificial intelligence. It allows us to posit that computational manipulations of syntactical elements yield meaningful results rather than nonsense. In such a framework, semantics is carried along by the system’s formal operations.
In offering us a symbolic system that could be subjected to whichever rules were appropriate in a domain of discourse, Leibniz gave us a sense of how rationality was mechanically possible, how reason could be mechanically explained as thought-to-thought computations on symbols. In such a system, symbols were structurally indivisible and semantically primitive vehicles of specific contents, and computations were mechanical, automatic processes in accordance with algebraic rules. Where Leibniz’s stepped drum had served as a central processing unit for arithmetical operations, Alan Turing made the idea fully concrete by demonstrating that, for all formally specifiable input-output routines, a well-programmed machine could replace a human.
What made the machine as much as minds rational in this setup was the ability to perform computations on thoughts that had a syntactical structuring, where ‘computation’ meant formal operations according to an algorithm.
Conclusion
The history of philosophy is the history of people transforming our conceptual spaces in ways that open up new avenues of thinking and make way for progress. Just as Descartes before him had merged geometry with algebra to create analytical geometry, forever changing how we thought about matter and the mind, so Leibniz took a similarly transformative step. For over two millennia, logic and mathematics had evolved as separate disciplines; Leibniz made the visionary leap of applying algebraic principles to reasoning, forging a mathematical framework for logic which would, 300 years later, transform both fields.
Even though Leibniz was not able to put his ideas into practice, all the pieces started to fall into place with Boole’s reinvention of algebraic logic in 1847 and, more importantly, Claude Shannon’s idea in 1937 of using Boole’s algebraic approach to describe electrical circuits, with switches representing the binary states of on (1) and off (0). This would lead to the widespread use of the binary system in modern computing and digital technology.
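Shannon’s correspondence is simple enough to sketch: switches wired in series behave like logical AND, switches wired in parallel like OR, so any Boolean expression can be read as a circuit and vice versa. A toy illustration (the two-way light-switch example is our own, not Shannon’s):

```python
# Shannon's correspondence: switches in series compute AND, switches in
# parallel compute OR. Any Boolean formula then describes a circuit.

def series(a, b):    # current flows only if both switches are closed
    return a and b

def parallel(a, b):  # current flows if either switch is closed
    return a or b

# A two-way light switch (flip either switch to toggle the light) is XOR,
# built from the primitives above:
def two_way_switch(a, b):
    return parallel(series(a, not b), series(not a, b))

for a in (False, True):
    for b in (False, True):
        print(a, b, two_way_switch(a, b))
```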
Computation can be self-contained (classic symbolic), and it can be ampliative and go beyond what is contained in the data (connectionist). It can confirm sound ways of reasoning, and it can lead us to new ‘variations’ that signal opportunities for progress. Computation is a transformative force in modern science, one that enables researchers to explore complex systems, process vast datasets, and push the boundaries of scientific understanding in ways Leibniz could only dream of.
From his plans for the mechanization of well-behaved thought, to the building of mechanical automata that could compute for us, to binary notation, Leibniz’s work was at the very least eerily ahead of its time. A single exascale supercomputer today can perform 2.746 exaFLOPS – that is, 2,746,000,000,000,000,000 floating-point operations per second. We can now run whole-earth models to track climate change or simulate extinction scenarios. In biology, computational methods have been crucial for understanding protein folding. In physics and chemistry, computational methods are essential for studying quantum systems, designing new materials and understanding molecular interactions. None of this would be possible with traditional experimental methods alone. Advanced AI systems now help scientists find explanations by generating new hypotheses. Most recently it took Google’s ‘Co-Scientist’ two days to arrive independently at a then still unpublished hypothesis explaining how certain superbugs acquire resistance to antibiotics, a problem to which a whole research team had dedicated the best part of a decade. Despite technology’s many risks and drawbacks, from a standpoint of pure innovation and progress, Leibniz would love to see us now.
Acknowledgement
A big thanks to Lloyd Strickland who, once again, helped improve this significantly with his immense knowledge of Leibniz and his insightful comments.