Our understanding of mathematics is based in applied logic, just as our understanding of reality is based in applied mathematics. Thus, logic and mathematics are tools for furthering our understanding. Mathematics and logic are closely entwined; their intersection is the field of mathematical logic, the machinery by which we establish mathematical facts (logical truths).
Via our chosen Aristotelian deductive logic system, we consider our proven theorems to be true (factual). There are two broad understandings of mathematical truth: that it is a product of the human mind, and that it is part of our external reality.
In my naive conception, I stand between these philosophical stances. Mathematics is a pure construction of mind, but its derivation via our chosen applied logic will always produce equivalent theories of mathematics. In this sense, mathematical truth exists beyond mind, in a logical space that characterizes our intuition of a rational reality.
That reality behaves rationally becomes more and more established as time goes by and our developing mathematics still accurately describes and predicts its spatio-temporal workings. This is the logical space within reality that mathematical truth occupies, the behavioral or phase space that governs the why and how of its workings.
So, would I say mathematical truth is discovered or created? I choose created (constructed) in the ordinary sense of mathematical progress via applied logic. Discovery might be more accurate at a higher plane of understanding, that of metamathematics and metalogic, where choice of foundational precepts occurs.
The memeplex that is formal mathematics consists of chains of truth-preserving statements, linking each assertion to more primitive, proven hypotheses, and ultimately to the primitive notions and axioms to which we assign intuited truth. The logic of a proof (the links in the chain) is conceptually based on the simple deductive syllogism of Aristotle (we owe a lot to the classical Greek scholars). The statement types are:
- modus ponens inference rule (syllogism): major and minor premises entail the conclusion
- generalization inference rule: a proposition p(x) with free variable x entails "for all x, p(x)"
In modern propositional logic, a hypothetical syllogism is written:
A → B, A ⊢ B
where A and B are statements, each representing a logical value True or False.
The semantics associated with this syntax reads:
- major premise A → B: A implies B (if A then B)
- minor premise A: assert A (A is True)
- conclusion ⊢ B : therefore B (then B is True)
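As a sketch, this rule can be run mechanically: given stored implications A → B and asserted facts, derive every conclusion reachable by chained applications of modus ponens. The encoding and names below are my own, chosen purely for illustration:

```python
# A minimal sketch of modus ponens over propositional atoms.
# Implications are stored as (antecedent, consequent) pairs; facts are
# asserted atoms. The representation is illustrative, not a library API.

def modus_ponens(implications, facts):
    """Repeatedly apply A -> B, A |- B until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in facts and b not in facts:
                facts.add(b)
                changed = True
    return facts

# A -> B, B -> C, and A asserted; B and C follow by chained syllogisms.
derived = modus_ponens([("A", "B"), ("B", "C")], ["A"])
print(sorted(derived))  # ['A', 'B', 'C']
```

The loop makes the chaining explicit: each conclusion becomes available as a minor premise for the next argument.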
The logic of a proof consists of a chain of such arguments (the set of intermediate premises and conclusions) and a model (the underlying set of truth conditions of the premises). Most human-generated proofs in mathematics are stated in natural language, restricted to a certain style and regimentation that makes the flow of logic as transparent as possible. (It would be artificial and obscuring to construct mathematical proofs in the syntax of a chain of hypothetical syllogisms.) But the underlying concept of all proof is a sequence of just such arguments.
A syllogism is always a valid argument, regardless of whether its premises are true. If the premises are true, the syllogism is a sound argument and the conclusion is also true. But a valid argument can reach a false conclusion when its premises are not true statements. A proof is a set of sound arguments that establish a true statement derived from other true statements.
In a chain of syllogisms, the conclusion is the only result passed on to a subsequent argument, where it factors into that argument’s premises. The ability to chain arguments together in a useful logical ordering comes with mastery of the art of the proof. In advanced mathematics, these chains can lead far away from the immediate concepts before returning with support, often highly abstract, for a pertinent conclusion.
Sequences of syllogisms of various flavors (modus ponens, modus tollens, modus tollendo ponens, etc.) provided the arguments of all demonstrative mathematical proofs from Aristotle until the late 19th century, when first-order logic was devised to formalize deductive logic. The first-order logical inference rules are modus ponens and generalization (if a statement is proved for an arbitrary variable, it holds for all values of that variable).
This first-order theory has become the formal logic system of our classical mathematics. Yet only a second-order theory has the power to fully express some logical constructs involving sets of infinite cardinality. When we restrict ourselves to first-order theory, induction must be framed by an axiom schema that yields weaker conclusions, but weaker only in the sense that the resulting theory is non-categorical (having more than one model up to isomorphism).
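For reference, the first-order induction schema supplies one axiom for each first-order formula φ(n):

```latex
\bigl(\varphi(0) \,\land\, \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr)
\;\rightarrow\; \forall n\,\varphi(n)
```

The second-order version instead quantifies over all properties φ at once, which is what pins down a unique model; the schema, with only countably many instances, cannot.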
First-order logic syntax is defined recursively as a set of statements composed of constants, variables, connectives, and quantifiers. Semantics are assigned via a choice of domain of discourse (over which the variables range) and the syntactic rules of the first-order theory (the logical meanings of the statement tokens). These formalisms facilitate the study of logic theory as well as automated proof systems. We also encounter them when they are used to make rigorous the axioms on which our mathematics is built.
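A toy rendering of this recursive syntax, with semantics fixed by choosing a finite domain of discourse. The nested-tuple encoding is hypothetical, invented here for illustration:

```python
# Sketch of first-order formulas as a recursive datatype; meaning is
# assigned only once a domain of discourse is chosen. Illustrative only.

def evaluate(formula, domain, env=None):
    """Evaluate a nested-tuple formula over a finite domain."""
    env = env or {}
    op = formula[0]
    if op == "pred":                       # ("pred", callable, var)
        return formula[1](env[formula[2]])
    if op == "not":
        return not evaluate(formula[1], domain, env)
    if op == "and":
        return evaluate(formula[1], domain, env) and evaluate(formula[2], domain, env)
    if op == "forall":                     # ("forall", var, body)
        return all(evaluate(formula[2], domain, {**env, formula[1]: d}) for d in domain)
    if op == "exists":
        return any(evaluate(formula[2], domain, {**env, formula[1]: d}) for d in domain)
    raise ValueError(op)

# "for all x, x >= 0" is true over one domain and false over another:
phi = ("forall", "x", ("pred", lambda n: n >= 0, "x"))
print(evaluate(phi, {0, 1, 2}))    # True
print(evaluate(phi, {-1, 0, 1}))   # False
```

The same formula changes truth value with the domain, which is the point: syntax alone is lifeless until an interpretation is supplied.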
Just as syllogisms are chained within a proof, proofs are themselves chained to create ever more complex layers of argument. Each proof is a transformation of established prior truth statements, together with some new premises, into a new truth statement. The memeplex that is mathematics is mechanically merely a chain of such transformations of the base axiomatic truths, associated with a model through which each link conserves truth.
Thus logic is our blueprint, not in the sense of specifying what we build, but rather specifying the methods telling us how to build it, the rules that must pertain. Once the ‘how’ is determined, the ‘what’ is inevitable (logical, as Spock would tell us). This inevitable ‘what’ is lifeless, however, mere logical symbols on a page, a meaningless exercise. The meaning is informed by our intuition, bringing all the memes to life.
Constructive and Non-Constructive Logic
There are proofs, call them direct, that illustrate why some facts lead to other facts. For instance, a proposition regarding the existence of an object could be proved by showing how to construct one, or by providing an example of one. These are the most satisfactory type of proof. (A few mathematicians, constructivists, insist that mathematics admit only this type of proof; this likely makes their job harder.)
Here’s a link to a direct (constructive) proof I find attractive:
Here, the premise is given a constructive meaning in terms of too many values (pigeons) for the available containers (pigeonholes), thus invoking the pigeonhole principle.
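For a small finite case, the pigeonhole principle can even be checked exhaustively: every way of placing n+1 pigeons into n holes doubles up somewhere. The function names below are my own, for illustration:

```python
# Exhaustive check of the pigeonhole principle for a small case:
# any assignment of 4 pigeons to 3 holes puts two pigeons together.
from itertools import product

def some_hole_doubled(assignment, holes):
    """True if any hole receives at least two pigeons."""
    counts = {h: 0 for h in holes}
    for h in assignment:
        counts[h] += 1
    return any(c >= 2 for c in counts.values())

holes = range(3)
print(all(some_hole_doubled(a, holes)
          for a in product(holes, repeat=4)))  # True
```

With only 3 pigeons the check fails (each can get its own hole), so the principle is tight.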
Construction of classical mathematics, however, has always depended heavily on indirect proofs, where we end up not much closer to understanding why a statement is true than before we proved it so. For example, new statements can be established by showing that their falsity would produce a logical contradiction.
Here is a link to a short and sweet indirect proof:
Such ‘back door’ cleverness has appeal. Yet one might hope indirect proofs are just convenient truth placeholders in our memeplex, awaiting eventual replacement by direct proofs that shed light on why something needs to be true (other than that it cannot not be true). However, a perfect world with only direct proofs is not to be. It has been shown that, in our chosen logic, there can be no direct proof of some facts in the classical mathematics universe. For instance, Courant points out that there can be no direct proof of the non-denumerability of the reals.
Sometimes the constructive distinction is not clear, as in the proof that the ratio of a side to the diagonal of a square is irrational:
Here we are presented with an essentially constructive proof (words are not really necessary). We can visualize and intuit why the premise cannot be false. Yet by adding the words to describe the argument, we end up with a proof based on contradiction.
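Spelled out, the contradiction runs as the standard argument, with s the side and d the diagonal:

```latex
\frac{d}{s} = \frac{p}{q} \ \text{in lowest terms}
\;\Rightarrow\; d^2 = 2s^2 \ \text{(Pythagoras)}
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p \ \text{even}
\;\Rightarrow\; q \ \text{even}
```

But p and q both even contradicts the assumption that p/q was in lowest terms; hence d/s is irrational.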
Indirect arguments by contradiction have a further dark side. They are based on a proposition of the type p ∨ ¬p (read p or not p) being a tautology (always true), thus assuming the law of the excluded middle. But if the domain of discourse includes certain infinite sets, it may not be valid to assume that p ∨ ¬p is always true. There are other logic theories, not based on Boolean algebra, that can account for the middle state where p ∨ ¬p is undecidable. The intuitionist school of mathematicians works in this other math universe.
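To make the failure of excluded middle concrete, here is a minimal sketch of Kleene's strong three-valued logic, where a third value U (undecided) sits between True and False. The encoding is my own, chosen for illustration:

```python
# Kleene's strong three-valued logic: a third value U ("undecided")
# breaks the law of the excluded middle. Illustrative encoding only.
U = "U"  # the middle truth value

def k_not(p):
    return U if p is U else (not p)

def k_or(p, q):
    if p is True or q is True:
        return True
    if p is U or q is U:
        return U
    return False

# p or not-p is no longer a tautology:
for p in (True, False, U):
    print(p, k_or(p, k_not(p)))
```

For p = True or p = False the disjunction evaluates to True as usual, but for p = U it evaluates to U, so p ∨ ¬p cannot be assumed.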
Meanwhile, the overall classical construction can continue.
Primitive Notions and Axioms
Just as with our real world structures, the methodology of formal, axiomatic mathematics demands that before construction can begin, our edifice needs a foundation. In the case of mathematics, this foundation will provide a context, a minimal domain of discourse from which the remainder will evolve deductively. Our intuition provides this foundation.
Foundations begin with primitive notions, concepts which admit no further definition other than our intuition can provide. The primitive notions of mathematics include numbers, points, lines, and sets. They remain undefined, known to us through our intuition and through prescribed behaviors. There are also defined notions in our foundation, specifying synthetic entities that will be of utility in future reasoning. These can be considered a derived layer, overlaying our primitives.
Axioms are statements which provide for existence and behavior of our notions. Axioms are assumptions that are not deducible from any prior propositions. Axiomatic foundations require great skill in composition to ensure they can support an indefinite future construction.
To support future deduction, the chosen system of axioms must be construed as true statements. Whether they are true depends on our interpretation of the primitive notions, our undefined entities. When an interpretation of the primitive notions is found that renders all the axiomatic statements true, one has defined a model for the system. A model supports deductive reasoning chains.
By the middle of the twentieth century, the major axiom systems of our classical mathematics were established and widely accepted. They are still in use today: the Zermelo-Fraenkel Axiomatic Set Theory with Choice (ZFC), the Dedekind-Peano Postulates of arithmetic (PA: Peano Arithmetic, derivable from a set-theoretic definition of the natural numbers in ZFC), and Euclid’s five postulates for geometry (as modernized by Hilbert and expressed in terms of ZFC).
These were not the only axiom sets available, but these have served us well for many decades. These arithmetic and geometry axiom sets are expressed in terms of the ZFC language. This exemplifies our layered axiomatic basis: a collection of primitive notions for sets and their members (ZFC), in terms of which we can further assert universally useful notions pertaining to arithmetic and geometry.
Numbers are not defined in the PA axioms. The PA axioms assert the existence of entities that behave according to our intuition about the natural numbers, via rules involving a ‘successor’ relation. Sets are not defined or even mentioned in the formal set-theoretic axiom set, ZFC. The axioms of ZFC provide rules based on a relation ‘is a member of’, another rule for selecting a member, existence statements for the null and infinite sets, and rules for the standard set operations. Certain such sets and members are recognized by our intuition as lines and points in space. Other axioms define spatial notions: between, coincident, and equal/congruent. Our intuition informs us that geometric concepts are being established via set theory.
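The successor-based behavior the PA axioms prescribe can be sketched with a toy encoding: zero is an arbitrary token, and each successor wraps one more layer around it. This is purely illustrative, not a standard representation:

```python
# Peano-style naturals from the successor relation alone: ZERO is the
# empty tuple, succ wraps one more layer, and addition recurses on the
# second argument. A toy encoding for illustration.
ZERO = ()

def succ(n):
    return (n,)

def add(m, n):
    # m + 0 = m ;  m + succ(n) = succ(m + n)
    return m if n == ZERO else succ(add(m, n[0]))

ONE = succ(ZERO)
TWO = succ(ONE)
print(add(ONE, ONE) == TWO)  # True: a formal 1 + 1 = 2
```

Nothing here says what a number "is"; the entities merely behave as the successor rules demand, which is exactly the axiomatic stance.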
Mathematical foundation pioneers did not get their axioms perfected on the first try. The axioms have been made more sophisticated over time, usually by refining them to have certain ‘nice’ properties that maximize consistency and completeness while avoiding antinomy.
To this end, mathematicians seek certain essential properties of the axioms as they relate to one another. Axioms are independent when no one of them can be deduced from the others. They are consistent when no contradiction can be derived from them. They are complete when every statement expressible in the system can be proved or refuted from these assumptions alone.
As a practical note, Halmos, in the introduction to his text on Naive Set Theory, advises students to understand the axiomatic underpinnings of their trade. But he goes further: “read it, absorb it, and forget it”. Fortunately, Halmos restricted his domain of discussion to formal set theory, so mathematicians were not being urged to forget mathematical induction! Mathematicians also make direct use of the foundational machinery in formal proofs of basic intuitive operations we take for granted, such as 1 + 1 = 2.
New challenges and refinements sustain an ongoing mathematical specialty: work on axiomatic and logical foundations. ZFC is a case in point. It sits at the core of classical mathematics, yet the formal completeness and consistency of ZFC are still open to debate. Its inadequacy is highlighted in the relationships of distinct transfinite sets, and most visibly by the Cantor Continuum Hypothesis (CH), an open mathematical question now known to be independent of the ZFC axioms (we cannot logically argue its validity at this time).
Mathematicians have put forth approaches for extending/revising ZFC, broadening its domain and enhancing its consistency. A typical goal has been to augment ZFC at least to a point where CH is decidable. Although CH seems to be accepted in spite of its current inaccessible basis, Conway states: “The prevailing opinion today is that CH should be considered false”.
The rest of mathematics is not much affected by any remaining uncertainties or weaknesses at the axiomatic level. Classical mathematics is completely supported under this system when certain types of infinite sets are excluded from the domain of reference. Doing so does not reduce the effectiveness or validity of the total of developed mathematics to date.
Looking to the future, formal necessity may cause mathematicians to go beyond the current state of the art in logic and axioms, in order to eliminate some apparent limitations at the boundaries of our mainstream mathematics. Such tinkering will need to be extremely well thought out. One hopes such revised choices will not affect our mainstream body of work, which should be derivable from the revised axioms, and which in the worst case would continue to be isolated in a ‘classical’ bubble whose esoteric limitations are well-understood.