The text below is a translation of the Dutch manuscript (PDF)

EWD1056

How unimportant it is whether or not submarines can swim

It is already a few years ago now that the editors of a somewhat obscure magazine asked me to contribute an article on the question of whether or not computers could think. I did not feel like doing so, and I explained my refusal with the remark that I found the suggested topic just as unimportant as the equally burning question of whether or not submarines could swim. (I had reckoned without my host: the editor —a sociologist— wrote back that he found that last question very interesting as well!)

The question of whether or not computers can think is fruitless, because anyone who tries to answer it is very soon on a one-way trip to philosophy. The question is also superfluous in the sense that we can think about computers —their capabilities, their limitations and their impact on culture— without having to choose what we should, could or would want the word "think" to mean.

I do not need to itemize the mystical nonsense, ranging from electronic brains to Frankenstein monsters, with which popular belief and the tabloids surrounded computers, because it is well known. Much more important is the phenomenon of the abundance and persistence of this nonsense, because as a symptom it can be explained in only one way: it tells us that computers were, when they appeared, a radical novelty that could not be adequately understood in terms of already familiar phenomena and concepts. Comparisons with things of the past were all ill-suited; analogies to what we already knew proved time and again too superficial not to be misleading.

The radical novelty had two aspects. The better known of the two was the incredible speed: computers calculated first a thousand times faster than had previously been possible, then a million times faster, and finally a billion times faster. The problem is that such speed ratios, although easy to write down, are in a very real sense unimaginable, and it is very difficult to imagine how unimaginably unimaginable something is.

Even a single factor of ten is like night and day; once, to convince a lady friend of that, I asked her how many children she had — I knew there were six, and she understood what I meant. And a baby that learns to crawl a thousand times as fast would overtake fighter jets! For the sake of completeness I mention that the capacity for information storage —in anthropomorphic terminology also called "memory size"— has also increased unimaginably over the years. In short: in the face of such drastic quantitative differences, all analogies fail.

The other aspect of the radical novelty was initially somewhat less in the foreground, but its repercussions are no less important. To get this aspect into the spotlight I must allow myself some poetic license —or mathematical abstraction— and I hope that the reader will permit me. I want to speak of "formulae" that are constructed from "symbols" drawn from a finite "alphabet". That "alphabet" is not limited to the 26 letters: we also allow punctuation, all sorts of brackets, mathematical symbols, and the digits 0 through 9. The advantage of this poetic license is that it allows us to put an algebraic expression like (a+b)/c, a program fragment like x := x+1, and a decimal number like 729 all three under the same heading of "formula".

The imagery just outlined enables me to indicate the second novelty: with computers, a universe has been created in which nothing happens except the derivation of new formulae from existing ones according to strict rules. This universe is of deep theoretical significance because everything that is derivable can be derived in this way; the electronic version is of great practical significance because it works so fast that even the results of long derivations can become available within an acceptable time.
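
To make this imagery a little more concrete, the following is a minimal sketch in Python (my addition, not part of the essay; the rewrite rules are invented purely for illustration): formulae are plain character strings over a finite alphabet, and a derivation step replaces one string by another according to a fixed rule, without any appeal to what the formulae mean.

    rules = [
        ("(a+b)/c", "a/c+b/c"),   # rewrite a quotient of a sum as a sum of quotients
        ("x+0",     "x"),         # drop a vacuous addition of zero
    ]

    def derive(formula):
        # Apply the first rule whose left-hand side occurs in the formula,
        # replacing one occurrence; only symbols are manipulated, never meaning.
        for lhs, rhs in rules:
            if lhs in formula:
                return formula.replace(lhs, rhs, 1)
        return formula

    print(derive("(a+b)/c"))   # prints: a/c+b/c
    print(derive("x+0"))       # prints: x

The point of the sketch is only that each step is a purely textual substitution; the speed of the machine is what makes long chains of such steps practicable.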

Such a formal universe is, as a novelty, radical because it blatantly goes against all our previously acquired intuitions. Because, for example, all formulae are constructed from the symbols of a finite alphabet, there is no such thing as continuous change. We could try to talk about "small" changes —e.g. the change of just one single symbol— but here too our intuition fails us, because no metric exists according to which "small" changes also have "small" effects.

The most salient feature of the formal universe, however, is that nothing other than formula manipulation takes place in it; even the smallest fraction of a verbal argument is thereby excluded. Insofar as we identify "understanding" and "insight" with verbal reasoning, understanding and insight are therefore excluded from the formal universe. A small example may clarify this: if, in the framework of a larger discussion, we need to establish that 812 is a multiple of 7, then that is not something we try to "understand" or "see", but something we determine by just performing a simple long division — in our imagery, by doing a little formula manipulation.
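
The division itself is purely mechanical: 812 = 7 × 116 + 0, so the remainder is 0 and 812 is indeed a multiple of 7; no insight into "why" is called upon, only the manipulation of digits.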

Our traditional ways of arguing are mixed, viz. partly formal and partly verbal, and verbal reasoning is instilled so strongly at an early age that total elimination of the verbal component appears to many, at first sight, undesirable if not impossible. Further analysis has shown, however, that it is always the verbal component that is vague, unintelligible and ambiguous. Traditionally, mathematics has limited itself to reducing the verbal component by partial formalization in reasoning that would otherwise be too subtle; the formal universe, such as that embodied by computers, is a challenge and an incentive to go much further, until in the end the verbal component is eliminated completely. This challenge has not remained unanswered, and the result is that in the area of mathematical methodology a quiet revolution is taking place in which full formalization becomes increasingly attractive.

How "important" is all this? Whoever first sees science as an activity in which only a negligible fraction of the population actively participates, and then sees that science as a dispersed whole in which the mathematical subculture has maneuvered itself into a kind of intellectual ghetto, will at best mumble "well, well, very interesting", shrug, and get back to the usual daily routine.

One can, however, also consider science as one of the main pillars of our culture and then —one would not be the first!— argue that the intellectual standing and weight of each scientific discipline are determined by its mathematical content. In that vision, a radical change of course in mathematics would in the long term leave its footprints in the vast majority of our intellectual life.

I expect such a radical change of course. At this moment the Concise Oxford Dictionary still defines mathematics as "abstract science of space, number, and quantity"; when the aforementioned quiet revolution has taken place, "art and science of effective reasoning" will be a more adequate definition of mathematics. I will not be around to see it happen, but with a bit of luck, a hundred years from now the question of whether or not computers can think will be no more than a historical curiosity from the twentieth century.

Nuenen, July 15th, 1989

prof. dr. Edsger W. Dijkstra
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712-1188
USA

Translated by Henk-Jan van Tuyl
2012-10-27

