Zero began its life as an imprint on wet clay. Around 5,000 years ago in Mesopotamia, the Sumerian people devised a revolutionary method for number-writing. Instead of inventing new symbols for ever-increasing numbers, they designed a system whereby the position of a symbol inside a number corresponded to that symbol’s value. If this seems confusing, it’s probably because the idea is so familiar that explanation only obscures it. Consider the numbers 407 and 47. Both contain a ‘4’, yet in each the ‘4’ represents a different value (400 and 40, respectively). We interpret the symbol correctly from the column it sits in within its number (the hundreds or the tens, for example). While this may seem like a mere change of format, the consequences of such positional notation were vast: it allowed for the rapid recording of large numbers and for simple methods of calculation.
At some point, a problem emerged: what were the Sumerians to do when a particular column had nothing in it, as in the tens column of 407? It was here that zero was born: the Sumerians placed a diagonal wedge between two digits to signify ‘nothing in this place’.
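For readers who like to see the mechanics spelled out, here is a minimal sketch in Python (the function name and the base-10 setting are illustrative choices of mine, not anything the Sumerians would recognise) of how a positional system reads a string of digits, with zero holding open an otherwise empty column:

```python
def positional_value(digits: str, base: int = 10) -> int:
    """Interpret a digit string positionally: each column is worth base ** position."""
    total = 0
    # Walk the digits from the rightmost (ones) column leftwards.
    for position, digit in enumerate(reversed(digits)):
        total += int(digit) * base ** position
    return total

print(positional_value("47"))   # 47  -> the '4' here contributes 4 * 10 = 40
print(positional_value("407"))  # 407 -> the '4' contributes 4 * 100 = 400;
                                #        the '0' simply marks the empty tens column
```

The same ‘4’ yields 40 in one number and 400 in the other purely because of the column it occupies, which is exactly the role the placeholder wedge made explicit.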
Despite the power afforded by positional notation and a mathematical symbol for nothing, zero met with resistance and even derision as it made its way out of the Middle East. Greek civilisation left few records of zero’s use and retained a non-positional numeral system, much like Roman numerals. In fact, the Greek aristocracy – those who studied mathematical frameworks – actively shunned the use of zero. Greece was a land of geometry, and its scholars sought to describe the world using lines, points and angles. The concept of ‘nothing’ had no obvious home. Their love of logic was equally obstructive: how could nothing be something? Aristotle concluded that nothingness itself did not – could not – exist.