Big Math and the One-Brain Barrier: The Tetrapod Model of Mathematical Knowledge

JACQUES CARETTE, WILLIAM M. FARMER, MICHAEL KOHLHASE, AND FLORIAN RABE

The Viewpoint column offers readers of the Mathematical Intelligencer the opportunity to write about any issue of interest to the international mathematical community. Disagreement and controversy are welcome. The views and opinions expressed here, however, are exclusively those of the author. The publisher and editors-in-chief do not endorse them or accept responsibility for them. Articles for Viewpoint should be submitted to the editors-in-chief.

In recent decades we have seen mathematics tackle problems whose solutions require increasingly large developments in proofs, computations, data sets, and document collections. This trend has led to intense discussions about the nature of mathematics, raising questions like these:
1. Is a proof that can be verified only with the help of a computer still a valid mathematical proof?
2. Can a collection of mathematical documents that exceeds what can be understood in detail by a single expert be a legitimate justification of a mathematical result?
3. Can a collection of mathematical texts, however large, adequately represent a large body of mathematical knowledge?

The first question was raised by Appel and Haken's proof of the four color theorem [2], which in four hundred pages of regular proof text reduced the problem to approximately two thousand concrete configurations that then had to be checked by a computer. It arose again with Thomas Hales's proof of the Kepler conjecture [20], which contained substantial algebraic computations carried out by computer as part of the proof. The second question is raised, for example, by the classification of finite simple groups (CFSG), which comprises the work of a large group of mathematicians over decades and which has resisted dedicated efforts even to be written down consistently; see the discussion below. The third question arises from the ongoing development of digital mathematics libraries (DMLs), such as the huge collection of papers that constitute the CFSG; these libraries fail to make explicit the abundant interconnections in mathematical knowledge that are needed to find information in them and reuse it in new contexts.

Let us call such developments Big Math, by analogy with the big data/big everything meme, but also alluding to the New Math movement of the 1960s, which aimed to extend mathematical education by changing how mathematics was taught in schools.

The emerging consensus of the mathematical community seems to be that while the methods necessary for dealing with Big Math are rather problematic, the results they yield are too important to forgo by rejecting those methods. Thus, we need to take Big Math methods on board and understand the underlying mechanisms and problems. In what follows, we analyze the problems, survey possible solutions, and propose a unified high-level model that we claim computer support must take into account for scaling mathematics. We