Advanced draft for the book "Philosophical Semantics: Reintegrating Theoretical Philosophy" (CSP 2018)
Chapter V
Verificationism Redeemed
There is no distinction
of meaning so fine as to consist in anything but a possible difference in practice.
—C. S. Peirce
Es ist schwer einem Kurzsichtigen
einen Weg zu beschreiben. Weil man ihm nicht sagen kann: ‘Schau auf dem Kirchturm
dort 10 Meilen von uns und geh’ in dieser Richtung.’
[It is difficult to tell a near-sighted man how to get somewhere. Because
you cannot say to him: ‘Look at the church tower ten miles away and go in that direction.’]
—Wittgenstein
Verificationism is now commonly
viewed as a relic of philosophy as practiced in the first half of the 20th
century. Although initially advocated by members of the Vienna Circle, it soon proved
unable to withstand an ever-expanding range of opposing arguments,
which came from both within and outside of the Circle. My aim in this chapter is
to show that we can achieve an understanding of verifiability that is both intuitively
acceptable and resistant to the most widespread objections. In my view, the Vienna
Circle failed to defend verificationism successfully because it took the wrong approach: it
began by formally clarifying the principle of verification initially proposed
by Wittgenstein without paying sufficiently close attention to what we really
do when we verify statements. When their arguments in favor of the principle were
shown to be faulty, most of them, along with their intellectual heirs, unwisely concluded
that the principle itself should be rejected. In my view, they were exhibiting
the same reaction as the proverbial fox in Aesop’s fable: unable to reach the grapes,
he consoled himself by imagining they were sour...
Returning to the methodology and assumptions
of the later Wittgenstein, my aim in this chapter is twofold: first to sketch a
plausible version of what I call semantic verificationism, which consists
in the proposal that the epistemic contents of declarative sentences, that is, the
e-thought-contents or propositions expressed by them are constituted by their
verifiability rules; second, to confirm and better explain semantic verificationism
by answering the main counter-arguments.
1. Origins of semantic
verificationism
The first point to be remembered
is that, contrary to a mistaken popular belief, the idea that a sentence’s meaning
is its method of verification didn’t stem from the logical positivists. The first to propose the principle
was actually Wittgenstein himself, as members of the Vienna Circle always acknowledged
(Cf. Glock: 354). Indeed, if we review
his works, we see that he formulated the principle in 1929 conversations with Waismann
and referred to it repeatedly in texts over the course of the following years. Furthermore,
there is no solid evidence that he abandoned the principle later, replacing it with
a merely performative conception of meaning as use, as some have argued. On the
contrary, there is clear evidence that from the beginning his verificationism and
his subsequent thesis that meaning is a function of use seemed mutually compatible
to him. After all, Wittgenstein did not hesitate to identify the concept of meaning
as verification with meaning as use and even with meaning as calculus. As he said:
If you want to know the meaning of a sentence, ask
for its verification. I stress the point that the meaning of a symbol is its place
in the calculus, the way it is used.[1] (2001:
29)
It is always advisable to check what the original author of an
idea really said. If we compare Wittgenstein’s verificationism with the Vienna Circle’s
verificationism, we can see that there are some striking contrasts. A first one
is that Wittgenstein’s main objective with the principle always seems to have been
to achieve a grammatical overview (grammatische Übersicht), that is, to clarify
central principles of our factual language, even if this clarification could be
put at the service of therapeutic goals. He was, by contrast, always against
the positivistic-scientistic spirit of the Vienna Circle, whose strongest motivation,
in its incipient and precocious desire to develop a purely scientific philosophy,
was to turn the verification principle into a powerful reductionist
weapon, able to vanquish once and for all the fantasies of metaphysicians. Wittgenstein,
for his part, didn’t reject metaphysics in this way. For him, the metaphysical urge
was a kind of unavoidable dialectical condition of philosophical inquiry, and the
truly metaphysical mistakes have the character
of depth (Wittgenstein 1984c sec. 111, 119). Consequently, metaphysical
errors were intrinsically necessary for the practice of philosophy as a whole. As
he wrote:
The problems arising through a misinterpretation of
our forms of language have the character of depth. They are deep disquietudes; their
roots are as deep in us as the forms of our language and their significance is as
great as the importance of our language. (1984c, sec. 111)
It was this rejection of positivistic-scientistic
reductionism that gradually estranged him from the Logical Positivists.
In these aspects, Wittgenstein was much closer
to that great American philosopher, C. S. Peirce.
According to Peirce’s pragmatic maxim, metaphysical deception can be avoided
when we have a clearer understanding of our beliefs. This clarity can be
reached by understanding how these beliefs are related to our experiences, expectations
and their consequences. Moreover, the meaning of a concept-word was for Peirce inherent
in the totality of its practical effects, the totality of its inferential relations
with other concepts and praxis. So, for instance, a diamond, as the hardest material
object, can be partially defined as something that scratches all other material
objects, but cannot be scratched by any of them.
Moreover, in contrast to the positivists, Peirce
aimed to extend science to metaphysics, instead of reducing metaphysics to science.[2] So, he was of the opinion that
verifiability – far from being a weapon against metaphysics – should be elaborated
in order to be applicable to it, since the aim of metaphysics is to say extremely
general things about our empirical world. As Peirce wrote:
But metaphysics, even bad metaphysics, really rests
on observations, whether consciously or not; and the only reason that this is not
universally recognized is that it rests upon kinds of phenomena with which every
man’s experience is so saturated that he usually pays no particular attention to
them. The data of metaphysics are not less open to observation, but immeasurably
more so than the data, say, of the very highly observational science of astronomy… (1931,
6.2)[3]
Although overall Peirce’s views
were as close to Wittgenstein’s as both were distant from the logical positivists
and their theories, there is an important difference between the two philosophers concerning
the analysis of meaning. Peirce was generally interested in the connection between
our concepts and praxis, including their practical effects, as a key to conceptual
clarification and a better understanding of their meaning. But by proceeding in
this way he risked extending the concept of meaning too far; he took a path that
can easily lead us to confuse the cognitive and practical effects of meaning with meaning itself. For as we already saw, the cognitive
meaning of a declarative sentence, seen as a combination of semantic-cognitive rules,
works as a condition for the production of inferential awareness, which consists
in the kind of systemic openness (allowing the ‘propagation of content’)
that can produce an indeterminate number of subsequent mental states and actions.[4] Meaning as a verifiability rule
is one thing; awareness of meaning and inferences that may result from this awareness,
together with the practical effects of such inferences, may be a very different
thing. Though they can be partially related, they should be distinguished. Hence,
within our narrow form of inferentialism, we first have the inferences that construct
meanings (like those of the identification rules of singular terms, the ascription
rules of predicates, and the verifiability rules of sentences); then we have something
usually beyond cognitive meaning, namely, the multiple inferences that enable us
to gain something from our knowledge of meaning, along with the multiplicity of
behavioral and practical effects that may result from them. Without this separation,
we may still have a method that helps us clarify our ideas, but we will lack a boundary
that can prevent us from extending the meanings of our expressions beyond a reasonable
limit. For instance, the fact that something cannot be scratched helps to verify
the assertion ‘This is a diamond’
(the hardest material), whereas the use of diamonds as abrasives will certainly
be of little if any relevance for the explanation of the assertion’s meaning. This
is why I think that Wittgenstein, by restricting cognitive meaning to a method of verification,
that is, to combinations of semantic rules whose satisfaction makes a proposition true, proposed
a more adequate view of cognitive meaning and its truth.
Looking for a better example, consider the
statement: (i) ‘In October 1942 Chil Rajchman was arrested, put on a train, and
deported to Treblinka.’ This promptly leads us to the inference: (ii) ‘Chil Rajchman
died in a death camp.’ However, his
probable fate would not be part of the verifiability procedure of (i), but rather
of the verification of statement (ii). Thus, although (ii) is easily considered
a consequence of (i), its thought-content isn’t
a real constituent of the cognitive meaning,
the thought-content-rule expressed by (i). Statement (ii) has its own verifiability
procedures, even if its meaning is strongly associated with that of statement (i)
since it is our main reason for being interested in this last statement. So, we
could say that considering a statement S,
there is something like a cloud of meanings surrounding its cognitive meaning, this
cloud being formed by inferentially associated cognitive meanings of other statements
with their own verifiability rules. But it is clear that this cloud of meaning does
not properly belong to the cognitive meaning of S and should not be confused with
it. In short: only by restricting ourselves to the constitutive verifiability
procedures of a chosen statement are we able to restrict ourselves to the proper
limits of its cognitive meaning.
Opposition to a reductionist replacement of
metaphysics by science was also one reason why Wittgenstein didn’t bother to make his principle formally precise, unlike positivist philosophers from
A. J. Ayer to Rudolf Carnap. In saying this, I am not rejecting formalist approaches.
I am only warning that such undertakings, if not well supported
by a sufficiently careful pragmatic consideration of how language really works,
tend to put the logical cart before the semantic horse. In this chapter, I want
to show how unwise neglect of some very natural conceptual intuitions has frustrated most attempts by positivist philosophers to defend their own principle.
Having considered these differences, I want
to start by examining some of Wittgenstein’s remarks regarding the verifiability
principle, in order to find a sufficiently adequate and reasonably justified formulation.
Afterward, I will answer the main objections against the principle, demonstrating
that they are much weaker than they seem at first glance.
2. Wittgensteinian
semantic verificationism
Here are some of Wittgenstein’s
statements presenting the verifiability principle:
Each sentence (Satz) is a signpost for its verification.
(1984e: 150)
A sentence (Satz) without any way of verification has
no sense (Sinn). (1984f: 245)
If two sentences are
true or false under the same conditions, they have the same sense (even if they
look different). (1984f: 244)
To understand the sense
of a sentence is to know how the issue of its truth or falsity is to be decided.
(1984e: 43)
Determine under what
conditions a sentence can be true or false, then determine thereby the sense of
the sentence. (This is the foundation of our truth-functions.) (1984f: 47)
To know the meaning of
a sentence, we need to find a well-defined procedure to see if the sentence is true.
(1984f: 244)
The method of verification
is not a means, a vehicle, but the sense itself. Determine under what conditions
a sentence must be true or false, thus determine the meaning of the sentence. (1984f:
226-7)
The meaning of a sentence
is its method of verification. (1980: 29)[5]
What calls attention to statements
like these is their strong intuitive appeal: they seem to be true. They satisfy our need for a methodological starting
point that accords with our common knowledge beliefs. To a great extent, they
even seem to corroborate Wittgenstein’s controversial view, according to which philosophical theses should be ultimately
trivial because they do no more than make explicit what we already know. They are
what he would call ‘grammatical sentences’ expressing the rules grounding the linguistic practices that
constitute our factual language. In the end the appeal to meaning verificationism
involves what we might call a ‘transcendental argument’: we cannot conceive a different
way to analyze the cognitive meaning of a declarative sentence, except by appealing
to verifiability; hence, if we assume that cognitive meaning is analyzable, some
form of semantic verificationism must be right.
There are some points we can add. The first
is terminological and was already extensively discussed in this book: we should not forget that the verifiability
rule must be identified with the cognitive content of a declarative
sentence. This cognitive content is what we could call, remembering our reconstruction
of Frege’s semantics, the e-thought-content-rule expressed by the declarative
sentence (being also called the descriptive, informative or factual content of the
sentence, if not its proposition or propositional content). A complementary point,
already noted, is that we should never confuse cognitive content with grammatical
meaning. If you do not know who Tito and Baby are, you cannot understand the cognitive
meaning of the sentence ‘Tito loves Baby,’ even if you are already able to understand
its grammatical meaning.
Another point to be emphasized is that the
verifiability rule correctly understood as e-thought-content or proposition
must include both the verification and the falsification of the statement,
since this rule can in itself be either true or false.[6] Wittgenstein was explicit
about that: ‘The method of verification is not a means, a vehicle, but the
sense itself’ (1984f: 226-7). The reason is easy to see: the verifiability e-thought
rule either applies to the verifier as such – the truth-maker, which in the last
chapter we identified with some cognitively independent fact in the world – and is
thereby verified, or it applies to no expected verifier or fact in the world and is
thereby falsified. Consider, for example, the statement ‘Frege was bearded.’ Here
the verifiability e-thought rule applies to the circumstantial fact it is intended
to apply to; the world makes the rule effectively applicable, which means that the
verifiability e-thought rule expressed by the statement is true. Consider, by contrast,
the statement ‘Wittgenstein was bearded’: here the verifiability e-thought rule does
not apply to the intended contextual fact in the world, since this fact does not exist.
This falsifies the statement, for the verifiability rule it expresses is false in
being inapplicable.
A final point concerns the reading of Wittgenstein’s
distinction between the verification of a sentence (Satz) and of a hypothesis
(Hypothese), which he made in the obscure
last chapter of his Philosophical Remarks. As he wrote:
A hypothesis is a
law for the building of sentences. One could say: a hypothesis is a law for the
building of expectations. A sentence is, so to speak, a cut in our hypothesis
in a certain place. (1984e XXII, sec. 228)
In my understanding, the hypothesis
is distinguished here mainly by being more distant from sensory-perceptual experience
than what he calls a sentence. As a consequence, only the verification of a sentence
(statement) is able to give us certainty. However, this does not mean that the verification
of this sentence is infallible. Hence, when Wittgenstein writes that we can verify
the truth of the sentence ‘Here is a chair’ by looking only at one side of the chair (1984e,
Ch. XXII sec. 225), it is clear that we can increase our degree of certainty by
adding new facets, aspects, modes of presentation, sub-facts. We could, e.g., look
at the chair from other angles, or make tests to show what the chair consists of,
whether it is solid enough to support a person, etc.
Thus, my take is that what he calls the certainty of a sentence is only postulated
as such after we consider it sufficiently verified in the context of some
linguistic practice. This is why things can be seen as certain and yet remain fallible,
as practical certainties. By contrast, the verification of hypotheses, like sentences
stating scientific laws, since it is realized only derivatively, gives us comparatively
lower degrees of probability, though they too can be assumed to be true.
3. Verifiability rule
as a criterial rule
A more important point emphasized
by Wittgenstein and ignored by others is that we usually have a choice of ways to
verify a statement, each way constituting some different, more or less central aspect
of its meaning. As he noted:
Consideration of how the meaning of a sentence is explained
makes clear the connection between meaning and verification. Reading that Cambridge
won the boat race, which confirms that ‘Cambridge won,’ is obviously not the meaning,
but is connected with it. ‘Cambridge won’ isn’t the disjunction ‘I saw the race
or I read the result or...’ It’s more complicated. But if we exclude any of the
means to check the sentence, we change its meaning. It would be a violation of grammatical
rules if we disregarded something that always accompanied a meaning. And if you
dropped all the means of verification, it would destroy the meaning. Of course,
not every kind of check is actually used to verify ‘Cambridge won,’ nor does any
verification give the meaning. The different checks of winning the boat race have
different places in the grammar of ‘winning the boat race.’ (2001: 29)
Moreover:
All that is necessary for our sentences to have meaning
is that in some sense our experience would
agree with them or not. That is: the immediate experience should verify only something
of them, a facet. This picture is taken
immediately from reality because we say ‘This is a chair’ when we see only a side
of it. (1984f: 282, my italics)
In other words: one can verify
through the direct observation of facts, that is, by seeing a Cambridge racing boat
winning a race or by hearing the judge’s confirmation, or both. These forms of verification
are central to the meaning of ‘Cambridge won the boat race.’ It is worth remembering
that even this direct observation of the fact is aspectual: each person at the boat
race saw the fact from a different perspective, i.e., they saw and heard different
sub-facts: different aspects (facets) of the same event. However, we also say that
they all did see the grounding fact in the sense that
they inferred its totality in the most direct way possible; this is why we can say
that the fact-event of Cambridge winning, as a grounding fact, was also directly (that is, in the most direct
possible way) experienced. In the same way, we are allowed to say that we see a
ship on the sea (the inferred grounding fact), while what we phenomenally see is
only one side of a ship (a given aspectual sub-fact).
However, often enough the way we can know the
truth-value of a thought-content like that expressed by the sentence ‘Cambridge
won the boat race’ is more indirect: someone can tell us, we can read about it on the
internet or in a magazine, or we can see a trophy in the clubhouse… These ways are
secondary, and for Wittgenstein they participate only secondarily in the sentence’s
meaning. Finally, they are causally dependent on more direct ways of knowing the
truth-value, which are primary verifying criteria. If these first ways of verification
did not exist, these dependent forms, being secondary criteria or mere symptoms,
would lose their reliability and validity.
We can say that the verifiability rule applies
when we achieve awareness of a fact, which means that we are in a position that
allows us to make the relevant inferences from our factual knowledge. This awareness
is the most direct when the criterial configuration (a configuration of p-properties
or tropes) that satisfies the verifiability rule is at least partially constitutive
of the grounding fact, for instance, when we observe a competition being won. But
more often verification is indirect, namely, by means of secondary criteria or symptoms,
often making the verifiability e-thought-content rule probably or even very probably
true.
Criteria tend to be displayed in the form of
criterial configurations, and such conditions can vary indeterminately. Thus, the
verifiability rule is said to apply when a criterial configuration demanded by the
semantic-cognitive criterial rule is objectively given as belonging to objective
facts as their constitutive tropical combinations and arrangements. Furthermore,
concerning a basal e-thought-content, a criterial rule also seems to have as a minimum
condition for its satisfaction some kind of structural isomorphism between, on the one hand,
the interrelated internal elements originating as constituents of the thought-content-criterial-rule
and, on the other hand, the interrelated objective elements (objective tropical combinations) that make up the grounding
fact in the world. This is what would constitute the isomorphism with the grounding
fact. Since experience is always aspectual and often indirect, this also means that
the dependent criterial configurations belonging to the rule must also show a structural
isomorphism with aspectual configurations of independent or external criterial arrangements
of tropes (given in the world and experienced by the epistemic subject). This generates
what we could call isomorphic relations with a sub-fact (say, a ship on the sea
seen from one side), and enables us to infer the whole grounding fact (say, a whole
ship on the sea). I expect to say more about this complicated issue in the last
chapter.[7]
As this reconstruction of Wittgenstein’s views
shows, a sentence’s meaning should be constituted by a verifiability rule that usually
ramifies, requiring the actual or possible fulfillment of a multiplicity
of criterial configurations, allowing us to infer facts in more or less direct ways.
Hence, there are definitional criterial configurations (primary criteria) such as,
in Wittgenstein’s example, those based on direct observation by a spectator at a
boat race. But there are also an indefinite number of secondary criterial configurations
depending on the first ones. They are secondary criteria or even symptoms, allowing
us to infer that Cambridge (more or less probably) won the boat race, etc. Here
too, we can say that the primary criteria have a definitional character: once we
accept them as really given and we can agree on this, our verifiability rule should
apply with practical certainty by defining the arrangement of tropes (fact) accepted
as given. Moreover, we can treat secondary criteria (like reading about an event
in a magazine) as less certain, though still very probable, while symptoms (like
having heard about the event) make the application of a verifiability rule only
more or less probable. Thus, if an unreliable witness tells us that Cambridge won,
we can conclude that it is probable that Cambridge won. However, what makes this
probability acceptable is, as we noted, that we are assuming it is backed by some
observation of the fact by competent judges and eye-witnesses, that is, by primary
criterial configurations.
Investigating the structure of verifiability
rules has some consequences for the much-discussed traditional concept of truth-conditions.
The truth-condition of a statement S can be defined as the condition sufficient for its e-thought-content-rule to actually be the
case. The truth-condition for the statement ‘Frege had a beard’ is the condition
that he actually did have a beard. This means that the truth-condition of S
is the condition that a certain fact can be given as S’s truth-maker, that is, as
satisfying the verifiability rule for S. The given truth-maker, the fact, is
an objective actualization of the truth-condition. Thus, the so-called ‘realist’
view (in Michael Dummett’s sense) is mistaken, since according to it the truth-condition
of a statement could be given without even some conception of criterial configurations
(tropical configurations that would warrant its existence) and without its related
verifiability e-thought rules being at least to some extent conceivable.
Now, considering our analysis of the identification
rules of proper names (Appendix of Chapter I) and of the ascription rules of predicative
expressions (Ch. II, sec. 6), we can consider the verifiability rule of a singular
predicative statement to be a combination of both in a more explicit way. We can
get an idea of this by examining a very simple predicative statement: ‘Aristotle
was bearded.’ For this we have first as the definitional identification rule for
Aristotle the same rule already presented at the beginning[8]:
IR-Aristotle: The name
‘Aristotle’ is applicable iff its bearer
is the human being who sufficiently and more than any other person satisfies the
condition(s) of having been born in Stagira in 384 BC, son of Philip’s court physician,
lived the main part of his life in Athens and died in Chalcis in 322 BC and/or was
the philosopher who developed the main ideas of the Aristotelian opus. (Auxiliary
descriptions may be helpful, though they do not belong properly to the definition…)
And for the predicative expression
‘…was bearded’ we may formulate the following definitional ascription rule:
AR-bearded: The predicate
‘…is bearded’ is ascribable iff its bearer
is a human being who has the tropes (properties) of facial hair growth on the chin
and/or cheeks and/or neck.
Now, as we already know, we first
apply the identification rule of the singular term in order to identify the object,
subsequently applying the ascription rule of the general term by means of which
we select the tropical cluster of the object identified by the first rule. Not only
are there many possible ways in which
the identification rule and the ascription rule can be satisfied, but there are still more ways of verification for the whole e-thought-content expressed by ‘Aristotle
was bearded.’ One of them is by examining the well-known marble bust of Aristotle
preserved in Athens, another is
by accepting the recorded testimony of his contemporaries that has come down to us, and still another is by learning that most ancient Greeks
(particularly among the peripatetics) customarily wore beards as a badge of manhood. All this makes possible the
satisfaction of AR-bearded for that human being (the criterial configurations on
the chin and cheeks are satisfied), in addition to the satisfaction of IR-Aristotle.
As we noted, we assume this criterially-based verification as practically certain,
which allows us to say we know that Aristotle was bearded, even if we are aware
that this is only indirectly established as highly probable. We can summarize the
applicability (or judgment or truth-attribution) of the basal e-thought-content
verifiability rule to the grounding fact that Aristotle was bearded by means of
the following schema:
├ [[IR-Aristotle is
applicable to its bearer] & [AR-bearded is applicable to this same bearer]].
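The two-step procedure just described can be pictured with a toy computational model. This is purely illustrative and not the author’s formalism: the rule names, the dictionary of features, and the criteria chosen are simplified stand-ins for open-ended, aspectual criterial rules. The sketch treats a statement’s verifiability rule as the conjoint application of an identification rule (IR) and an ascription rule (AR), applied in that order to the same bearer.

```python
# Toy model of a verifiability rule as the combined application of an
# identification rule (IR) and an ascription rule (AR).
# Illustrative only: real criterial rules are open-ended and aspectual,
# and the feature dictionary is a hypothetical stand-in for criteria.

def ir_aristotle(bearer):
    # IR: the bearer sufficiently satisfies the localizing and
    # characterizing conditions (birthplace/date, authorship of the opus).
    return bearer.get("born") == ("Stagira", -384) and bool(bearer.get("wrote_opus"))

def ar_bearded(bearer):
    # AR: the bearer has the relevant tropes (facial hair growth).
    return bool(bearer.get("bearded", False))

def verifies(ir, ar, bearer):
    # First identify the object, then ascribe the predicate to it:
    # |- [[IR is applicable to the bearer] & [AR is applicable to the same bearer]]
    return ir(bearer) and ar(bearer)

aristotle = {"born": ("Stagira", -384), "wrote_opus": True, "bearded": True}
print(verifies(ir_aristotle, ar_bearded, aristotle))  # prints: True
```

The ordering matters in the model as in the text: the ascription rule selects a tropical cluster only of the object the identification rule has already singled out, which is why both rules are applied to the same bearer.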
These brief comments on verificationism
à la Wittgenstein suggest the need for
more intensive pragmatic research on ways of verification. As we noted, the structure
of a verifiability rule is normally ramified, and its details should vary in accordance
with the kind of statement that expresses it. A detailed pragmatic investigation
of diversified forms of verifiability rules seems to me an important task that as
far as I know has not been attempted until now. In what follows, I will not try
to correct this limitation. I will restrict myself to answering the main objections
to the verifiability principle, showing that they are products of
misunderstanding.
4. Objection 1: The
principle is self-refuting
The first and most notorious objection
to the principle of verifiability is that it is self-defeating. The argument runs
as follows. The principle of verifiability must be either analytic or synthetic.[9] If it is analytic it must be
tautological, that is, non-informative. However,
it seems clearly informative in its task of elucidating cognitive meaning.
Furthermore, analytic statements are self-evident, and denying them is contradictory
or inconsistent, which does not seem to be the case with the principle of verifiability.
Therefore, the principle is synthetic. But if it is synthetic, it needs to be verifiable
in order to have meaning. Yet, when we try to apply the principle of verifiability
to itself we find that it is unverifiable. Hence, the principle is metaphysical,
which implies that it is devoid of meaning. The principle is meaningless by its
own standards; and one cannot evaluate meaningfulness based on something that is
itself meaningless.
Logical positivists tried to circumvent that
objection by responding that the principle of verifiability has no truth-value,
for it is nothing more than a proposal, a recommendation, or a methodological requirement.[10] A. J. Ayer advocated this view by challenging his readers to suggest
a more persuasive option (1992: 148). However, a reader with the opposite convictions
could respond that he simply doesn’t feel the need to accept or opt for anything
of the kind... Moreover, the thesis that the principle is only a proposal appears to be clearly
ad hoc. It goes against Wittgenstein’s assumption that all we are doing is
exposing the already given intuitions underlying our natural language, the general
principles embedded in it. Consequently, to impose on our language a methodological
rule that does not belong to it would be arbitrary and misleading as a means of
clarifying meaning.[11]
My suggestion is simply to keep Wittgenstein’s
original insight, according to which
a principle of verifiability is nothing but a
very general grammatical sentence
stating the way all our factual language must work to have cognitive content to
which a truth-value can be assigned. Once we understand that the principle should
make our pre-existing linguistic dispositions explicit, we are entitled to think
that it must be seen as an analytic-conceptual principle. More precisely, this principle
would consist in the affirmation of a hidden synonymy between the phrases ‘meaning as the cognitive content
(e-thought-content-rule or proposition) expressed by a declarative sentence’ and
‘the procedures (combinations of rules) by which we may establish the truth-value
of this same cognitive content.’ Thus, taking X to be any declarative sentence,
we can define the cognitive value of X
by means of the following analytic-conceptual sentence stating the verifiability
principle:
VP (Df.): Cognitive
meaning (e-thought-content) of a declarative sentence X = the verifiability
rule for X.
Against this, a critic can react
by saying that this claim to analytic identity lacks intuitive evidence. Moreover,
if the principle of verifiability were analytic, it would be non-informative, its
denial being contradictory or incoherent. However, VP appears to say something
substantive, since in principle it can be denied. It seems at least conceivable
that the cognitive meaning of a statement X, the thought-content expressed by it,
isn't a verifiability rule.
My reaction to this objection is to recall that an analytic sentence does
not need to be transparent; it does not need to be immediately seen as necessarily
true, and its negation does not need to be clearly seen as contradictory or incoherent.
Assuming that mathematics is analytic, consider the case of the following sentence:
‘3,250 + (3 . 896) = 11,876 ÷ 2.’ At first glance, this identity neither seems to
be necessarily true nor does its negation seem incoherent; but a detailed presentation
of the calculation shows that this must be the case. We can regard it as a hidden analytic truth, at first view not graspable because of its derivative character and our inability
to see its truth on the spot.
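The identity can also be checked mechanically. Here is a minimal Python sketch of my own (an illustration, not part of the argument) confirming the arithmetic discussed above:

```python
# The 'hidden' analytic identity: 3,250 + (3 . 896) = 11,876 / 2
left = 3250 + 3 * 896     # 3250 + 2688 = 5938
right = 11876 // 2        # 5938
assert left == right

# The transparently analytic cases mentioned in the text:
assert 2 + 3 == 5
assert 12 * 12 == 144
assert 144 * 144 == 20736
print(left)  # 5938
```

The machine, of course, only rehearses the calculation; the philosophical point is that a derivation like this is what turns a non-transparent analytic truth into a recognized one.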
This can be suggested by means of a thought-experiment.
We can imagine a person with a better grasp of arithmetic than ours. For a child,
2 + 3 = 5 can be analytically transparent, as it is for me. For me, 12 . 12 = 144
is also transparently analytic (or intuitively true), though not to a child who
has just started to learn arithmetic. But 144 . 144 = 20,736 isn’t transparently
analytic for me, although it may be so for a person with greater arithmetical skill.
Indeed, I would guess that some persons with great arithmetical skill (as in the
case of some savants) can recognize at a glance the truth of the identity
‘3,250 + (3 . 896) = 11,876 ÷ 2.’ This means that the boundary line between transparent
and derived or non-transparent (but deductively achievable) analytic truths is movable,
depending on our cognitive capacities and to some degree affected by training. Thus,
from an epistemically neutral point of view, the two types are on the same level
since for God (the only epistemic subject able to see all truths at a glance) analytic
truths must all be transparent.
In searching for a better-supported answer,
we can now distinguish between transparent
and non-transparent analytic-conceptual knowledge.[12] The sentences ‘A triangle has
three sides,’ ‘Red is not green’ and ‘Three is greater than two’ express transparent
analytic knowledge, since these relations are self-evident and their negation clearly
contradictory. But not all analytic sentences are so. Sentences about geometry such
as the one stating the Pythagorean Theorem express (I assume) analytic truths in
non-applied Euclidean geometry, although this isn’t transparent for me. Non-transparent
analytic knowledge is based on demonstrations whose premises are made up of transparent
analytic knowledge, namely, analytic truths we can intuitively grasp.
The arithmetical and geometrical examples
of analytic statements presented above are merely elucidative, which can mislead us into thinking that they are informative
in the proper sense of the word. This leads us to the suggestion that the principle
of verifiability is nothing but
a non-transparent,
hidden analytic statement.
Against this last suggestion, one could still
object that the principle of verifiability cannot be stated along the same lines
as a mathematical or geometrical demonstration. After all, in the case of a proved
theorem it is easy to retrace the path that leads to its demonstration; but there
is no analogous way to demonstrate the principle of verifiability.
However, the key to an answer may be found
if we compare the principle of verifiability with statements that at first glance
do not seem to be either analytic or demonstrable. Close examination reveals that
they are in fact only non-transparent analytic truths. A well-known statement of
this kind is the following:
The same surface cannot
be simultaneously red all over and green all over (under the same conditions of
observation).
This statement isn’t analytically
transparent. In fact, it has been regarded by logical positivists and even contemporary
philosophers as a serious candidate for what might be called a synthetic a priori
judgment (Cf. BonJour 1998: 100 f.). Nevertheless,
we can show that it is actually a hidden analytic statement. We begin to see this
when we consider that it seems transparently analytic that (i) visible colors can
occupy surfaces, (ii) different colors are things that cannot simultaneously occupy
the same surface all over, and (iii) red and green are different colors. From this,
it seems to follow that the statement (iv) ‘The same surface cannot be both red
and green all over’ must be true. Now, since (i), (ii) and (iii) seem to be intuitively
analytic, (iv) should be analytic too, even if not so intuitively clear.[13] Here’s how this argument can
be formulated in a standard form:
(1) Two different things cannot occupy the same
place all over at the same time.
(2) A surface constitutes a place.
(3) (1, 2) Two different things cannot occupy the
same surface all over at the same time.
(4) Colors are things that can occupy surfaces.
(5) (3, 4) Two different colors cannot occupy the
same surface all over at the same time.
(6) Red and green are different colors.
(7) (5, 6) Red and green cannot occupy the same
surface all over at the same time.
To most people, premises (1), (2),
(4) and (6) can be understood (preserving the intended
context) as definitely analytic. Therefore, conclusion (7) must also be analytic,
even if it does not appear to be so.
The suggestion that I want to make is that
the principle of verifiability is also a true, non-trivial and non-transparent analytic
sentence, and its self-evident character may be demonstrated through an elucidation
of its more transparent assumptions in a way similar to that of the above argument.
Here is how it can be made plausible by the following ‘cumulative’ argument:
(1) Semantic-cognitive rules are criterial rules
applicable to (or satisfied by) independent criteria that are tropical properties.
(2) Cognitive (descriptive, representational, factual…)
meanings (e-thought-contents) of statements are constituted by proper combinations
of (referential) semantic-cognitive rules applicable to real or only
conceivable arrangements of tropical properties and their combinations called facts.
(3) The truth-determination of cognitive meanings
or e-thought-content-rules of statements lies in the effective applicability of
the proper combinations of semantic-cognitive criterial rules constitutive of
them by means of their agreement (correspondence) with the arrangements and combinations
of those tropical properties called real facts able to satisfy their criteria.
(4) Combinations of semantic-cognitive criterial
rules expressible by statements are able to be true or false respectively by their
effective applicability or non-applicability to their corresponding real or
only conceivable facts, building in this way what we may call their e-thought-content
verifiability rules.
(5) (1-4) The cognitive meanings of statements consist
in their verifiability rules.
To my ears, at least, premises
(1), (2), (3), and (4) sound clearly analytic, though conclusion (5) does not seem
as clearly analytic. I admit that my view of these premises as analytic derives
from the whole background of assumptions gradually reached in the earlier chapters
of this book: it is analytically obvious to me that contents, meanings or senses
are constituted by the application of rules and their combinations. It is also analytically
obvious to me that the relevant rules are semantic-cognitive rules that can be applied
in combination to form cognitive meanings or thought-contents expressible by declarative
sentences. Moreover, once these combinations of rules are satisfied by the adequate
criterial configurations formed by real facts understood as tropical arrangements,
they allow us to see them as effectively applicable, that is, as having a verifying
fact as their referent and truth-maker. Such semantic-criterial combinations of
(normally implicit) cognitive rules, when judged as effectively applicable to their
verifying facts, are called true, otherwise they
are called false. And these semantic-criterial combinations of cognitive rules can
also be called e-thoughts (e-thought-content-rules), propositional contents
or simply verifiability rules.
I am aware that a few stubborn philosophers
would still vehemently disagree with my reasoning, insisting that they have different
intuitions originating from different starting points. After all I have said
up to now, I confess I am unable to help them. To make things easier, I prefer to avoid
discussion, invoking the words of an imaginary character from J. L. Borges: ‘Their
impurities forbid them to recognize the splendor of truth.’[14]
5. Objection 2: A formalist
illusion
Logic can be illuminating but also
deceptive. An example is offered by A. J. Ayer’s attempt to formulate a precise
version of the principle of verifiability in the form of a criterion of factual meaningfulness. In his first attempt to develop this kind of verifiability
principle, he suggested that:
…it is the mark of a genuine factual proposition… that
some experiential propositions can be deduced from it in conjunction with certain
other premises without being deducible from these other premises alone. (1952: 38-39)
That is, it is conceivable that
a proposition S is verifiable if together
with the auxiliary premise P1 it implies an observational result O, as follows:
1. S
2. P1
3. O
Unfortunately, it was soon noted
that Ayer’s criterion of verifiability was faulty. As Ayer himself recognized, his
formulation was ‘too liberal, allowing meaning to any statement whatsoever.’ (1952:
11) Why? Suppose that we have as S the
meaningless sentence ‘The absolute is lazy.’ Conjoining it with an auxiliary premise
P1, ‘If the absolute is lazy, then snow
is white,’ we can – considering that the observation that snow is white is true
and that the truth of ‘The absolute is lazy’ cannot be derived from the auxiliary
premise alone – verify the sentence ‘The absolute is lazy.’
Now, the core problem with Ayer’s suggestion
(which was not solved by his later attempt to remedy it[15]) is this: In order to derive
the observation that snow is white, he assumes that a declarative sentence (which
he somewhat confusingly called a ‘proposition’) whose meaningfulness is questioned
is already able to attain a truth-value. But meaningless statements cannot attain
any truth-value: if a sentence has a truth-value, then it must also have a meaning,
or, as I prefer to say, it must also express a propositional content as an e-thought
verifiability rule that is true only as effectively applicable. By assuming in advance
a truth-value for the sentence under evaluation, Ayer’s principle implicitly begs
the question, because if a statement must already have a sense in order to have
a truth-value, it cannot be proven
to be senseless. Moreover, he does not allow the empirical statement in question
to reveal its proper method of verification or even if it has one.[16]
In fact, we cannot imagine any way to give
a truth-value to the sentence ‘The absolute is lazy,’ even a false one, simply because
it is a grammatically correct but cognitively meaningless word combination. As a
consequence, the sentence ‘If the absolute is lazy, then snow is white’ cannot imply
that the conclusion ‘Snow is white’ is true in conjunction with the sentence ‘The
absolute is lazy.’ To make this
obviously clear,
suppose we replace ‘The absolute is lazy’ with the equally meaningless symbols @#$,
producing the conjunction ‘@#$ & (@#$ → Snow is white).’ We
cannot apply a truth-table to show the result of this because @#$, just as much
as ‘the absolute is lazy,’ expresses no proposition at all. Even if the statement
‘Snow is white’ is meaningful, we cannot say that this formula allows us to derive
the truth of ‘Snow is white’ from ‘The absolute is lazy,’ because @#$, as a meaningless
combination of symbols, cannot even be considered false in order to materially imply
the truth of ‘Snow is white.’
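The purely formal point can be put in a few lines of Python (an illustration of my own, one that already presupposes precisely what the text denies, namely that S can bear a truth-value at all):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Ayer's schema: any sentence S, conjoined with the auxiliary premise S -> O,
# yields O by modus ponens -- regardless of what S says.
for s, o in product([True, False], repeat=2):
    if s and implies(s, o):     # both premises true...
        assert o                # ...so O must be true

# And O is not deducible from the auxiliary premise alone: there is a
# valuation in which S -> O holds while O is false.
assert any(implies(s, o) and not o for s, o in product([True, False], repeat=2))
print("Ayer's criterion is formally satisfied by any truth-valued S")
```

This is exactly why the truth-table maneuver fails for '@#$': the table presupposes rows in which '@#$' is true or false, and a cognitively meaningless string supplies no such rows.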
C. G. Hempel committed
a similar mistake when he pointed out that a sentence of the form ‘S v N,’ in which S is meaningful,
but not N, must be verifiable, in this way making the whole disjunction meaningful
(1959: 112). Now, as we have seen, the real form of this
statement is ‘S v @#$.’ Obviously, we
cannot apply any truth-table to this. In this case, only the verifiable S has meaning
and allows verification, not the whole disjunction, because this whole cannot be
called a disjunction. The true form of this statement, if we wish to preserve this
title, is simply S.
I can develop the point further by giving a
contrasting suggestion as a criterion of cognitive meaningfulness, more akin to
Wittgenstein’s views. Consider the sentence ‘This piece of metal is magnetized.’
The question of its cognitive meaningfulness suggests verifiability procedures.
An affirmative answer results from the application of the following verification
procedure that naturally flows from the statement ‘This piece
of metal is magnetized’ conjoined with some additional information:
(1) This is a piece of metal (observational sentence).
(2) If a piece of metal is magnetized, it will attract
other objects made of iron (a criterion for the ascription rule of '…is magnetized').
(3) This piece of metal has attracted iron coins,
which remained stuck to it (observational application of the ascription rule’s criterion to the object already criterially
identified by the identification rule).
(4) (From 1 to 3): It is certainly true that this piece of metal is magnetized.
(5) If the application of the combination of semantic-cognitive
rules demanded by a statement is able to make it true, then this combination must
be its cognitive meaning (a formulation of the verifiability principle).
(6) (4, 5): The statement '[It is certainly true
that] this piece of metal is magnetized' is cognitively meaningful (it expresses
an e-thought-content verifiability rule).
We can see that in cases like this the different possible verifying procedures flow naturally from our understanding of the
declarative sentence that we intend to verify, once the conditions for its verification
are given. However, in the case of meaningless sentences like ‘The absolute
is lazy’ or ‘The nothing nothings,’ we can find no verification procedure following
naturally from them, and this is the real sign of their lack of cognitive meaning.
Ayer’s statement ‘If the absolute is lazy, then snow is white’ does not follow naturally
from the sentence ‘The absolute is lazy.’ In other words: the multiple ways of verifying
a statement – themselves expressible by other statements – must contribute, in different
measures, to make it fully meaningful; but they do this by building its cognitive meaning and not by being arbitrarily attached
to the sentence, as Ayer’s proposal suggests. They must be given to us intuitively
as the declarative sentence’s proper ways
of verification. The neglect of real ways of verification naturally built into any
genuine declarative sentence is the fatal flaw in Ayer’s criterion.
6. Objection 3: Verificational
holism
A sophisticated objection to semantic
verificationism is found in W. V. O. Quine's generalization of Duhem's thesis, according
to which it is impossible to confirm a scientific hypothesis in isolation, that
is, apart from the assumptions constitutive of the theory to which it belongs. In
Quine’s concise sentence: ‘...our statements about the external
world face the tribunal of sense experience not individually but only as a corporate
body.’ (1951: 9)[17]
The result of this is Quine's semantic
holism: our language forms such an interdependent network of meanings that it
cannot be divided up into verifiability procedures explicative of the meaning of
any isolated statement. The implication for semantic verificationism is clear: since
what is verified must be our whole system of statements and not any statement
alone, it makes no sense to think that each statement has an intrinsic verifiability
rule that can be identified with a particular cognitive meaning. If two statements
S1 and S2 can only be verified together with the system composed of {S1, S2,
S3… Sn}, their verification must always be the same, and if the verifiability
rule is the meaning, then all the statements should have the same meaning. This
result is so absurd that it leaves room for skepticism, if not about meaning, as
Quine would like, at least about his own argument.
In my view, if taken on a sufficiently abstract
level, on which the concrete spatiotemporal confrontations with reality to be made
by each statement are left out of consideration, the idea that the verification
of any statement in some way depends on the verification of a whole system of statements
– or, more plausibly, of a whole molecular sub-system – is very plausible. This
is what I prefer to call abstract or structural confirmational holism, and
this is what can be seriously meant in Quine’s statement. However, his conclusion
that the admission of structural holism destroys semantic verificationism, does
not follow. It requires admitting that structural holism implies what I prefer to
call a performative, concrete or procedural verificational holism, i.e., a holism regarding the concrete
spatiotemporal verification procedures of individual statements, which are the only
things really constitutive
of their cognitive meanings. But this just never happens.
Putting things in a somewhat different way:
Quine’s holism has its seeds in the fact, well known by philosophers of science,
that verifying an observational statement as true always depends
on the truth of an undetermined multiplicity of assumed auxiliary hypotheses
and background knowledge. Considered in abstraction from what we really do when
we verify a statement, at least structural molecularism is true: verifications are
interdependent. After all, our beliefs regarding any domain of knowledge are more
or less interdependent, building a complex network. But it is a wholly different
matter if we claim that from formal or abstract confirmational holism, a performative
procedural or verificational holism follows on a more concrete level. Quine’s thesis
is fallacious because, although at the end of the day a system of statements really
needs to confront reality as a whole, in their concrete verification, its individual statements do not confront reality either
conjunctively or simultaneously.
I can clarify what I mean with the help of
a well-known example. We all know that by telescopic observation Galileo discovered
the truth of the statement: (i) ‘The planet Jupiter has four moons.’ He verified
this by observing and drawing, night after night, four luminous points near Jupiter, and concluding that these points were constantly
changing their locations in a way that seemed to keep them close to the planet,
crossing it, moving away and then approaching it again, repeating these same movements
in a regular way. His conclusion was that these luminous points could be nothing
other than moons orbiting the planet... Contemporaries, however, were suspicious of the results
of his telescopic observation. How could two lenses magnify images without deforming
them? Some even refused to look through the telescope, fearing it could be bewitched…
Historians of science today have realized that Galileo’s contemporaries were not
as scientifically naive as they often seem to us.[18] As has been noted (Salmon 2002:
260), one reason for accepting the truth of the statement ‘Jupiter has four moons’
is the assumption that the telescope is a reliable instrument. But the reliability
of telescopes was not sufficiently confirmed at that time. To improve the telescope
as he did, Galileo certainly knew the law of telescopic magnification, whereby its
power of magnification results from the focal length of the telescope divided by
the focal length of the eyepiece. But in order to guarantee this auxiliary assumption,
one would need to prove it using the laws of optics, still unknown when Galileo
constructed his telescope. Consider, for instance, the fundamental law of refraction.
This law was established by Snell in 1626, while Galileo’s telescopic observations
were made in 1610. With this addition, we can state in an abbreviated way the structural
procedure of confirmation as it is known today and which I claim would be unwittingly
confused by a Quinean philosopher with the concrete verification procedure. Here
it is:
(I)
1. Repeated telescopic observation of four points of light orbiting Jupiter.
2. Law of magnification of telescopes.
3. Snell's law of refraction: sin θ1 / sin θ2 = v1/v2 = λ1/λ2 = n2/n1.
4. A telescope cannot be bewitched.
5. Jupiter is a planet.
6. The Earth is a planet.
7. The Earth is orbited by a moon.
8. (All other related assumptions.)
9. Conclusion: the planet Jupiter has at least four moons.
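For readers unfamiliar with premises 2 and 3, here is a small numeric illustration of both laws (a Python sketch of my own; the focal lengths chosen are merely illustrative values, not historical data):

```python
import math

# Premise 2 -- law of telescopic magnification:
# magnification = focal length of the objective / focal length of the eyepiece
magnification = 1000.0 / 50.0   # e.g. a 1000 mm objective with a 50 mm eyepiece
print(magnification)            # 20.0 -- of the same order as Galileo's instruments

# Premise 3 -- Snell's law of refraction: n1 * sin(theta1) = n2 * sin(theta2)
def refraction_angle(theta1_deg, n1, n2):
    """Angle of the refracted ray, in degrees."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

# light entering glass (n ~ 1.5) from air (n ~ 1.0) at 30 degrees
print(round(refraction_angle(30.0, 1.0, 1.5), 2))  # 19.47
```

Neither computation, of course, was available to Galileo in 1610; that is precisely the point about the later, purely structural character of premise 3.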
Although Galileo had no knowledge
of premise 3, this only weakens the inductive argument, which was still strong enough
for his lucid mind. From a Quinean verificationist holism, the conclusion, considering
all the other constitutive assumptions, would be that the concluding statement 9
does not have a proper verification method, since it depends not only on observation
1, but also on the laws expressed in premises 2 and 3, the well-known premises from
4 to 7, and an undetermined number of other premises constitutive of our system
of beliefs, all of them also having their verifiability procedures... As he wrote:
‘our statements should face the tribunal of experience as a corporate body.’ Indeed.
In this example, the problem with Quine’s reasoning
becomes clear. First, we need to remember that the premises belonging to confirmation
procedures are not simultaneously checked. The conclusion expressed
by statement 9 was actually verified only as a direct consequence of statement 1,
resulting from the daily drawings made by Galileo based on his observations of variations
in the positions of the four ‘points of light’ aligned near to Jupiter. However,
Galileo did not simultaneously verify statement 2 when he made these observations,
nor the remaining ones. In fact, as he inferred conclusion 9 from premise 1, he
only assumed a previous verification of the other premises, as was the case
with premise 2, which he verified as he learned how to build his telescope. Although
he didn’t have premise 3 as a presupposition, he had already verified or assumed
as verified premises 2, 4, 5, 6, 7 and 8. Now, because in general the verifications
of 2 to 8 are already made and presupposed during the verification of 9, it becomes
clear that these verifications are totally independent of the actually performed
verification of 9 by means of 1. The true form of Galileo’s concrete verification
procedure was much simpler than the abstract (holistic or molecularist) procedure
of confirmation presented above. In a summarized form, it was:
1. Repeated telescopic observation of four points of light orbiting Jupiter.
2. Conclusion: the planet Jupiter has at least four moons.
Generalizing: If we call the statement
to be verified S, and the statements of the observational and
auxiliary hypotheses O and A respectively, the structure of the concrete
verifiability procedure of S is not
O
A1 & A2 … & An
S

But simply:

O
(Assuming the prior verification of A1 & A2 … & An)
S
This assumption of a prior verification of auxiliary hypotheses
in a way that might hierarchically presuppose sufficient background knowledge is
what in practice makes all the difference, as it allows us to separate the verifiability
procedure of S from the verifiability
procedures of the involved auxiliary hypotheses and the many background beliefs
which have been already successfully verified.
The conclusion is that we can clearly distinguish
what verifies each auxiliary hypothesis. For example: the law of telescopic magnification was verified by very
simple empirical measurements; and the law of refraction was established and verified
later, based on empirical measurements of the relationship between variations in
the angle of incidence of light and the density of the transmitting medium. Thus,
while it is true that on an abstract level a statement’s verification depends on
the verification of other statements of a system, on the level of its proper cognitive
and practical procedures, the successful verification of auxiliary and background
statements is already assumed. This is what allows us to individuate the concrete
verifiability procedure appropriate for a statement as what is actually being verified,
identifying it with what we actually mean by the statement, thus with its proper
cognitive meaning.
In the same way, we are able to distinguish
the specific concrete modes of verification of each distinctive auxiliary or background
statement, whose truth is assumed as verified before employing the verification
procedure that leads us to accept S as
true. This allows us to distinguish and identify the concrete procedure or
procedures whereby each statement of our system is cognitively verified, making
the truth of abstract-structural holism irrelevant to the performative structure
of semantic verificationism.
By considering all that is formally involved
in confirmation, and by simultaneously disregarding the difference between what
is presupposed and what is performed in the concrete spatiotemporal verification
procedures, Quine’s argument gives us the illusory impression that verification
as such should be a holistic procedure. This seems to imply that the meaning of
the statement cannot be identified with a verifiability procedure, since the meanings
of the different statements are multiple
and diversified, while the holistic confrontation of a system of beliefs with
reality is unique and as such undifferentiated.
However, if we remember that each different
statement must have a meaning of its own, it again becomes perfectly reasonable
to identify the cognitive meaning of a statement with its verifiability rule! For both
the verifiability rule and the meaning are once more individuated together as belonging
univocally to each statement, and not to the system of statements or beliefs
assumed in the verification. Molecular holism is true regarding the ultimate structure
of confirmation. But it would be disastrous regarding meaning, since it would dissolve
all meanings into one big, meaningless mush.
The inescapable conclusion is that Quine’s
verificational holism is false. It is false because the mere admission of formal
holism, that is, of the fact that statements are in some measure inferentially intertwined
with each other is insufficient to lead us to conclude that the verifiability rules
belonging to these statements cannot be identified with their meanings because these
rules cannot be isolated, as Quine suggested. Finally, one should not forget that
in my example I gave only one way of verification for the statement ‘The planet
Jupiter has at least four moons.' Other ways of verification can be added, also
constitutive of the meaning, enriching it, and univocally related to the same
statement.
Summarizing my argument: an examination of
what happens when a particular statement is verified shows us that even assuming formal holism (which I think
is generally correct, particularly in the form of a molecularism of linguistic practices),
the rules of verifiability are distinguishable from each other in the same measure as the meanings of the
corresponding statements – a conclusion that only reaffirms the expected correlation
between the cognitive meaning of a statement and its method of verification.
7. Objection 4: Existential-universal
asymmetry
The next well-known objection is
that the principle of verifiability only applies conclusively to existential sentences,
but not to universal ones. To verify an existential sentence such as ‘At least one
piece of copper expands when heated,’ we need only observe a piece of copper that
expands when heated. To conclusively verify a universal claim like ‘All pieces of
copper expand when heated’ we would need to observe all the pieces of copper in
the entire universe, including everything future and past, which is impossible.
It is true that absolute universality is a fiction and that, when we talk about
universal statements, we are always considering some limited domain of entities
– some universe of discourse. But even in this case, the problem remains. In the
case of metal expanding when heated, for instance, the domain of application remains
much broader than anything we can effectively observe, making conclusive verification
equally impossible.
A common reaction to this finding – mainly
because scientific laws usually take the form of universal statements – is to ask
whether it wouldn’t be better to admit that the epistemic meaning of universal statements
consists of falsifiability rules instead of verifiability rules… However, in this
case existential sentences like ‘There is at least one flying horse’ would not be
falsifiable, since we would need to search through an enormously vast domain of
entities in the present, past and future in order to falsify it. Nonetheless, one
could suggest that the meanings of universal statements were given by falsifiability
rules, while the meanings of existential and singular statements would be given
by verifiability rules. Wouldn’t this be a more reasonable answer? (Cf. Hempel 1959)
Actually, though, I am inclined to think it
would not and could not do. We can, for example, falsify the statement 'All ravens are
black’ simply by finding a single white raven. In this case, we must simply verify
the statement ‘This raven is white.’ In this way, the verifiability rule of this
last statement is such that, if applied, it falsifies the statement ‘All ravens
are black.’ But if the meaning of the universal statement may be a falsification
rule, a rule able to falsify it, and the verifiability rule of the statement 'This
raven is white' is the same rule that when applied falsifies the statement 'All
ravens are black,’ then – admitting that verifiability is the cognitive meaning
of singular statements and falsifiability the meaning of the universal ones – it
seems that we should agree that the statement 'All ravens are black' must be synonymous
with 'This raven is white.' However, this would be absurd: the meaning of 'This
raven is white' has almost nothing to do with the meaning of 'All ravens are black.'
The best argument I can think of against falsifiability
rules, however, is that they do not exist. As already noted, there seems to be no proper falsifiability rule for a statement,
as there certainly is no counter-assertoric
force (or a force proper to negative judgments, as once believed), no rule of dis-identification of a name,
and no rule for the dis-ascription or dis-application
of a predicate. This is because what satisfies a rule is a criterion and not its
absence. – This is so even in those cases in which, by common agreement, the criterion
is the absence of something normally expected, as in the case of a hole, e.g., if
someone says: ‘Your shirt has a hole in it,’ or in the case of a shadow, in the
statement ‘This shadow is moving.’ In such cases the ascription rule for ‘…has a
hole’ and the identification rule for ‘This shadow’ have what could be called ‘negative
criteria.’ However, what needs to be satisfied or applied is the verifiability rule
for the existence of a hole in the shirt, and not the falsifiability rule for the
socially presentable shirt without a hole, since this would be the verifiability
rule of a shirt that has no hole. And we use the verifiability rule for a moving
shadow and not the falsifiability rule for the absence of a shadow. If I notice
a curious moving shadow on a wall, I am verifying it; I am not falsifying the absence
of moving shadows on the wall, even if the first observation implies the second.[19]
It seems, therefore, that we should admit that
the cognitive meaning of a statement can only be its verifiability rule, applicable
or not. But in this case, it seems at first sight inevitable to return to the problem of the inconclusive character of the verification of universal propositions, leading us to admit a ‘weak’ form of verificationism alongside a ‘strong’ one, as Ayer attempted to argue (1952: 37).
However, I doubt if this is the best approach
to reach the right answer. My suggestion is that the inconclusiveness objection
is simply faulty, since it emerges from a wrong understanding of the true logical
form of universal statements; a brief examination shows that these statements are
in fact both probabilistic and conclusive.
Consider again the universal statement:
1. Copper expands when heated.
It is clear that its true logical
form is not, as it seems:
2. [I affirm that] it is absolutely certain
that all pieces of copper expand when heated,
whereby ‘absolutely certain’ means
‘without possibility of error.’ This logical pattern would be suitable for formal truths such as
3. [I affirm that] it is absolutely certain
that 7 + 5 = 12,
because here there can be no error
(except procedural error, which we are leaving out of consideration). However, this
same form is not suitable for empirical truths, since we cannot be absolutely certain
about their truth. The logical form of what we mean with statement (1) is a different
one. This form is that of practical certainty, which can be expressed
by
4. [I affirm that] it is practically certain
that every piece of copper expands when heated,
where ‘practically certain’ means
‘with a probability that is sufficiently high to make us disregard the possibility
of error.’ In fact, we couldn’t rationally mean anything different from this. Now,
if we accept this paraphrase, a statement such as ‘Copper expands when heated’ becomes
conclusively verifiable, because
we can clearly find inductive evidence, protected by theoretical reasons, that becomes so conclusive that we can be practically certain, namely, we can assign the statement ‘All pieces of copper expand when heated’ a probability sufficiently high to make us very sure about it: we can affirm that we know its truth. In short: the logical
form of an empirical universal statement – assuming there is some domain of application – is not that of a universal
statement like ‘├ All S are P,’ but usually:
5. [I affirm that] it is practically certain
that all S are P.
Or (using a sign of assertion-judgment):
6. ├ It is practically certain that all
S are P.
The objection of asymmetry has its origins in an internal transgression of the limits of language, in this case the equivocal assimilation of the logical form of empirical universal statements to the logical form of formal universal statements (Chap. III, sec. 11). If the claim of an empirical universal statement is nothing beyond a sufficiently high probability, this is enough to make it conclusively verifiable. Hence, the
cognitive meaning of an empirical universal statement can still be seen as its verifiability
rule. Verification allows judgment; judgment must be treated as conclusive, and
verification likewise.
8. Objection 5: Arbitrary
indirectness
Another common objection is that
the rule of verifiability of empirical statements requires taking as a starting
point at least the direct observation of facts that are objects of a virtually
interpersonal experience. However, many statements do not depend on direct observation
to be true, as is the case with ‘The mass of an electron is 9.109 × 10⁻³¹ kg.’ Cases like this force us to admit that many
verifiability rules cannot be based on more than indirect observation of
the considered fact. As W. G. Lycan has noted, if we don’t accept this, we will
be left with a grotesque form of instrumentalism in which what is real must be reduced
to what can be inter-subjectively observed and in which things like electrons and
their masses do not exist anymore. But if we accept this, he thinks, admitting
that many verifiability rules are indirect, how do we distinguish between direct and indirect observations?
‘Is this not one of those desperately confusing distinctions?’ (2000: 121 f.)
Here again, problems only emerge if we embark
on the narrow formalist canoe of logical positivism, paddling straight ahead, only to crash against the barriers of natural language
with unsuitable requirements. Our assertoric
sentences are inevitably uttered or thought in the contexts of language-games, practices,
linguistic regions... The verification procedure must be adapted to the linguistic
practice in which the statement is uttered. Consequently, the criterion to distinguish
direct observation from indirect observation should always be relative to the
linguistic practice that we take as a model. We can be misled by the fact that
the most common linguistic practice is (A): our
wide linguistic practice of everyday direct
observational verification. The standard conditions for singling out this practice
are:
A possible interpersonal
observation made by epistemic subjects under normal internal and external conditions
and with unbiased senses of solid, opaque and medium-sized objects, which are close
enough and under adequate lighting, all other things remaining the same.
This is how the presence of my
laptop, my table and my chair is typically checked. Because it is the most usual
form of observation, this practice is seen as the archetypal candidate for the title
of direct observation, to be contrasted with, say, indirect observation through
perceptually accessible secondary criteria, as might be the case if we used mirrors,
optical instruments, etc. However, it is an unfortunate mistake that some insist
on using the widespread model (A)
to evaluate what happens in other, sometimes very different, linguistic practices.
Let us consider some of them.
I begin with (B): the bacteriologist’s linguistic practice. Usually, the bacteriologist
is concerned with the description of micro-organisms visible under his microscope.
In his practice, when he sees a bacterium under a
microscope, he says he has made
a direct observation; this
is his model
for verification. But the bacteriologist can also say, for example, that he has
verified the presence of a virus indirectly,
due to changes he found in the form of the cells he saw under a microscope, even
though for him viruses are not directly observable except under an electron microscope. If
he does
not possess one, he cannot make a direct observation of
a virus.
Almost nobody would say that the bacteriologist’s procedures are all indirect unless
they have in mind a comparison with our everyday linguistic practices (A). Anyway,
although unusual, this would be possible. In any case, the right context and utterances
clearly show what the speaker has in mind.
Let us consider now (C) the linguistic practices of archaeology
and paleontology. The discovery of fossils is seen here as a direct way to verify
the real existence of extinct creatures that died out millions of years ago, such
as dinosaurs, since live observation is impossible, at least under any known conditions. But the archaeologist
can also speak of indirect verification by comparison and contrast within his practice.
So, consider the conclusion that hominids once lived in a certain place based only
on damage caused by stone tools to fossil bones of animals that these early hominids
once hunted and used for food or clothing. This finding may be regarded as resulting
from an indirect verification in archaeological practice, in contrast to finding
fossilized remains of early hominids, which would be considered a direct form of
verification. Of course, here again, any of these verifications will be considered
indirect when compared with verification by the most common linguistic observational
practice of everyday life, that
is (A). However, the context
can easily show what sort of comparison we have in mind. A problem would arise only
if the language used were vague enough to create doubts about the model of comparison
employed.
If the practice is (D) one of pointing to linguistically describable
feelings, the verification of a sentence will be called direct, albeit subjective,
if made by the speaker himself, while the determination of feelings by a third person,
based on behavior or verbal testimony, will generally be taken as indirect (e.g.,
by non-behaviorists
and many who accept my objections
to the private-language argument). There isn’t any easy way to compare practice
(D) with the everyday practice (A) of observing medium-sized physical objects in
order to say what is more direct, since they belong to two categorically different
dimensions of verification.
My conclusion is that there is no real difficulty
in distinguishing between direct and indirect verification, insofar as we have clarity
about the linguistic practice in which the verification is being made, that is,
about the model of comparison we have chosen (See Ch. III, sec. 7). Contrasted with
philosophers, speakers normally share the contextually bounded linguistic assumptions
needed for the applicability and truth-making of verifiability rules. To become
capable of reaching agreement on whether a verificational observation or experience
is direct or indirect, they merely need to be aware of the contextually established
model of comparison that is being considered.
9. Objection 6: Empirical
counterexamples
Another kind of objection concerns
insidious statements that only seem to have meaning, but lack any effective verifiability
rule. In my view, this kind of objection demands consideration on a case-by-case
basis.
Consider, to begin with, the statement ‘John
was courageous,’ spoken under circumstances in which John died without having had
any opportunity to demonstrate courage, say, shortly after birth. (Dummett 1978:
148 f.) If we add the stipulation that the only way to verify that John was courageous
would be by observing his behavior, the verification of this statement becomes practically
(and very likely physically) impossible. Therefore, in accordance with the verifiability
principle, this statement has no cognitive meaning. However, it still seems more than just grammatically
meaningful.
The explanation is that under the described
circumstances
the statement ‘John was courageous’ only appears to have a meaning. It belongs to
the sizable set of statements whose cognitive meaning is only apparent. Although
the sentence has an obvious grammatical sense, given by the combination of a non-empty
name with a predicate, we are left without any criterion for the application or
non-application of the predicate. Thus, such a statement has no function in language,
since it is unable to tell us anything. It is part of a set of statements such as
‘The universe doubled in size last night’ and ‘My brother died the day after tomorrow.’
Although these statements may at first glance appear to have a sense, what they possess is no
more than the expressive force of suggesting images or feelings in our minds. But
in themselves, they are devoid of cognitive meaning since we cannot test or verify
them.
Wittgenstein discussed an instructive case in his work On Certainty.
Consider the statement ‘You are in front of me right now,’ said under normal circumstances
for no reason by someone to a person standing before him. He notes that this statement
only seems to make sense, given that we are able to imagine situations in which
it would have some real linguistic function, for example, when a room is completely
dark, so that it is hard for a
person
to identify another person in the room (1984a, sec. 10). According to him, we are inclined to imagine
counterfactual situations in which the statement would or would not be true, and this invites us to project
a truth-value into these possible situations and thus we will get the mistaken impression that the
statement has some workable epistemic sense. Against this one could in a Gricean way
still argue that even without any
practical use the sentence has
a literal
assertoric sense, since it states something obviously true. However, this would
be nothing but a further illusion: it seems
to be obviously true only insofar as we are able to imagine situations in which
it would make sense (e.g., exemplifying
the evidential character of a perceptual assertion).
Finally, many statements are mediated and are
only indirectly verifiable. Because of this, it is easy to make statements like
‘The core of Jupiter is made of marshmallow,’ and say that it is meaningful although unverifiable. However, we know that this statement is obviously false, and the method by which we falsify it is indirect, since we cannot make a voyage to the center of Jupiter. We refute ramifications of the verification rule, which would deny our scientific conclusion that this planet consists mostly of hydrogen and helium and our awareness that marshmallow is made of milk and that there are no cows on Jupiter… These things show that the verifiability rule is inapplicable.[20]
What can we say of statements about the past
or the future? Here too, it is necessary to examine them on a case-by-case basis.
Suppose an expert says: ‘Early Java man
lived about 1 million years ago,’ and this statement was fully verified
by reliable radiometric dating applied to fossilized remains. The direct verification
of past events in the same way that we observe present events is practically (and
it would seem physically) impossible.
However, there is no reason to worry, since we are not dealing with the kind of
verifiability rule adopted in standard practice (A). Here the linguistic practice
assumed is (C), the archaeological, in which direct verification is made on the basis of verifiable empirical traces
left by past events.
There are other, more indirect ways to verify past events. The sentence ‘The planet Neptune existed before it was discovered’
can be accepted as certainly true. Why? Because our knowledge of physical laws (which
we trust as sufficiently verified), combined
with information about the origins of our solar system, enables
us to conclude
that Neptune certainly existed a long time before it was discovered, and this inferential
procedure is suitable as a form of verification. Finally, it is simply fallacious
to say that since we can know about the past only by means of presently available
evidence, we cannot say anything about the past, but only about our present, since
recourse to present evidence can be the only natural and reliable way to speak
about the past.
Very different is the case of statements about
the past such as:
1. On that rock, an eagle landed exactly ten-thousand
years ago.
2. Napoleon sneezed more than 30 times while he was
invading Russia.
3. The number of human beings alive exactly 2,000
years ago was an odd number.
For such supposed thought-contents
there are no empirical means of verification. Here we must turn to the old distinction
between practical, physical and logical verifiability. Such verifications are not practically or technically
achievable, and as far as I know, they are not even physically realizable (we will
probably never be able to visit the past in a time-machine or travel through a worm-hole into the past in a spaceship). The possibility of
verification of such statements seems to be only logical. But it is hard to believe
that an empirical statement whose verifiability
is only logical can be considered as having a non-logical cognitive sense (Cf. Reichenbach 1953: sec. 6).
To explain this point better: it seems that
the well-known distinction between logical,
physical and practical forms of verifiability exerts influence on meaningfulness
depending on the respective fields of verifiability to which the related statements
belong. Statements belonging to a formal field need only be formally verifiable
to be fully meaningful: the tautology (P → Q) ↔ (~P ˅ Q), for instance, is easily verified by a truth-table applying the corresponding logical operators. But statements
belonging to the empirical domain (physical and practical) must be not only logically,
but also at least in principle empirically verifiable in order to have real cognitive meaning. As a consequence,
an empirical statement that is only logically verifiable must be devoid of cognitive
significance. This seems to be the case with a statement such as ‘There is a nebula
that is moving away from the earth
at a speed greater than the speed of light.’ Although logically conceivable,
this statement is empirically devoid of sense, insofar as it is impossible according to relativity theory. Similarly, in examples
(1), (2) and (3), what we have are empirical statements whose verification is empirically
inconceivable. Consequently, although having grammatical and logical meaning and
eliciting images in our minds, these statements lack any distinctive cognitive value, for we don’t know what to make of
them. Such statements aren’t able to perform the specific function of an empirical
statement, which is to be able to truly
represent an actual state of affairs. We do not even know how to begin the construction
of their proper verifiability rules. All that we can do is to imagine or conceive
the situations described by them, but we know of no rule or procedure to link the
conceived situation to something that possibly exists in the real world. Although endowed with grammatical
and some expressive meaning, they are devoid of genuine cognitive meaning. Finally,
we must remember that we are free to reformulate statements (1), (2) and (3) as
meaningful empirical possibilities. For instance: (2’) ‘Maybe (it is possible that)
Napoleon sneezed more than 30 times when he was invading Russia.’ Although not
very dissimilar to (2), this modal statement is verifiable as true by means of its
coherence with our belief-system.
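The truth-table verification of the tautology (P → Q) ↔ (~P ˅ Q) mentioned above can be carried out mechanically. A minimal sketch in Python, where the dictionary encoding of the material conditional is our own illustrative device:

```python
from itertools import product

# Explicit truth table for the material conditional P -> Q:
IMPLIES = {
    (True, True): True,
    (True, False): False,
    (False, True): True,
    (False, False): True,
}

# Verify that (P -> Q) <-> (~P v Q) comes out true on every row:
tautology = all(
    IMPLIES[(p, q)] == ((not p) or q)
    for p, q in product([True, False], repeat=2)
)
print(tautology)  # → True
```

Running through all four assignments exhausts the verifiability rule for a formal statement of this kind, which is why formal verification, unlike empirical verification, is always conclusively applicable.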
Also unproblematic is the verificational analysis
of statements about the future. The great difference here is that in many cases direct verification
is practically possible. Consider the sentence (i) ‘It will rain in Caicó seven
days from now.’ When a person seriously says something of this sort, what he usually
means is (ii) ‘Probably it will rain in Caicó seven days from now.’ And this
probability sentence can be conclusively verifiable, albeit indirectly, by a weather
forecast. Thus, we have a verifiability rule, a cognitive meaning, and the application
of this rule gives the statement a real degree of probability. However, one
could not in anticipation affirm (iii) ‘It certainly
will rain within seven days.’ Although
there is a direct verifiability rule – watch the sky for seven days to determine if the thought-content is true or false – it has
the disadvantage that we will only be able to apply it if we wait for a period of
time, and we will only be able to affirm its truth (or deny it) within the maximal
period of seven days. It is true that we could also use this sentence in certain situations, for example, when making
a bet about the future. But in this case, we would not affirm (iii) from the start
since we cannot apply the rule in anticipation. In this case, what we mean with
sentence (i) can in fact only be (iv) ‘I bet that it will rain in Caicó seven days from now.’ Lacking any empirical
justification, the bet has again only an expressive-emotive meaning and no truth-value.
A similar statement is (v) ‘The first baby
to be born on Madeira Island in 2050 will be female,’ which has a verifiability
rule that can only be applied at a future point in time. This sentence lacks a practical
meaning insofar as we are unable to verify and affirm it at the present moment;
right now this sentence, though expressing a thought-content – since it has a verifiability
rule whose application can be tested in the future – is able to have a truth-value, but cannot receive it until later. Nonetheless, in a proper context
this sentence may also have the sense
of a guess:
(vi) ‘I guess that the first baby
to be born…’ or (vii) a statement of possibility regarding the future ‘It is possible that the first baby to be
born…’ In these cases, we are admitting that the sentence has a cognitive meaning
since all we are saying is that it has an observational verifiability rule that
can be applied (or not), although only in the future. Sentence (v) will only be
meaningless if understood as an
affirmation of something that is not now the case but will be the case in the year 2050,
for in order to be judged to be
true this
affirmation requires awareness of the effective applicability of the verifiability
rule generally based on its real application. (Cf. Ch. IV, sec. 36) When we consider what is really meant by statements
regarding future occurrences, we see that even in these cases verifiability and
meaning go together.
Now consider the statement (viii): ‘In about five billion years the Sun will expand and engulf Mercury.’ This statement in fact only means ‘Very probably in about five billion years the Sun will expand and engulf Mercury.’ This probabilistic
prediction can be inferentially verified today, based on what we know of the fate
of other stars in the universe that resemble our Sun but are much older, and this
inferential verification constitutes its cognitive meaning.
Jeopardizing positivist hopes, I conclude that
there is no general formula specifying the form of verifiability procedures. Statements about the future can be physically
and to some extent also practically verifiable. They cannot make sense as warranted
assertions about actual states
of affairs since
such affirmations require the possibility of present verification. Most of them
are concealed probability statements. The kind of verifiability rule required depends on the utterance and its insertion
in the linguistic practice in which it is made, only then showing clearly what it
really means. Such things are what may lead us to the mistaken conclusion that there are unverifiable statements
with cognitive meaning.
Finally, a word about ethical statements. Positivist
philosophers have maintained that they are unverifiable, which has led some to adopt
implausible emotivist moral theories. Once again, we find the wrong attitude. I
would rather suggest that ethical principles can be only more or less
plausible, like metaphysical statements and indeed like any philosophical statement.
They have the form: ‘It is plausible that p,’
and as such they are fully verifiable. They cannot be decisively affirmed because
we are still unable to state them in adequate ways or make them sufficiently precise,
since we lack consensual agreement regarding their most adequate formulation
and verifiability rules.
10. Objection 7: Formal
counterexamples
The verificationist thesis is naturally
understood as extendable to the statements of formal sciences. In this case, the
verifiability rules or procedures are those that demonstrate their formal truth deductively, within the assumed formal system in which they are considered; these procedures constitute their cognitive content.
A fundamental difference with respect to empirical verification is that in the case
of formal verification, to have a verifiability
rule is the same thing as being
definitely able to apply it, since the criteria ultimately to be satisfied are the very axioms already assumed as such by the chosen system.
A much discussed counterexample is Goldbach’s conjecture.
This conjecture (G) is usually formulated as:
G: Every even number greater than 2 can be expressed as the sum of two prime
numbers.
The usual objection is that this
mere conjecture has cognitive meaning. It expresses a thought-content
even if we never manage to prove it, even if a procedure for formal verification
of G has not yet been developed. Therefore, its significance cannot be equated with
a verifiability procedure.
The answer to this objection is quite simple and stems from the perception
that Goldbach’s conjecture is what its name says: a mere conjecture. Well,
what is a conjecture? It is not the affirmation of a proven theorem, but rather the
recognition that an e-thought-content-rule has enough plausibility
to be taken seriously as possibly true. One would not make a conjecture if it seemed
fundamentally improbable. Thus, the true form of Goldbach’s conjecture is:
It is plausible that
G.
But ‘It is plausible that G,’ that
is, ‘[I state that] it is plausible that G,’ or (using a sign of assertion) ‘├It
is plausible that G,’ is something other than
I state that G (or ├G),
which is what we would be allowed
to say if we wanted to state Goldbach’s proved theorem. If our aim were to support the statement ‘I state that G,’
namely, an affirmation of the truth of Goldbach’s theorem as something cognitively
meaningful, the required verifiability rule would be the whole procedure for proving the theorem, and this we simply do not
have. In this sense, G is cognitively devoid of meaning. However, the verifiability
rule for ascribing mere plausibility is far less demanding than the verifiability
rule able to demonstrate or prove G, and we have indeed applied this rule many
times.
The plausibility ascription is ‘[I state that]
it is plausible that G,’ whereby the verifiability rule consists in something much
weaker, namely, a verification procedure able to suggest that G could
be proved. This verification procedure does in fact exist. It consists simply in
considering random examples, such as the numbers 4, 8, 12, 124, etc., and showing
that they are always the sum of two prime numbers. This verifiability rule not only exists; up until now it has been confirmed without exception for every even natural number
ever considered! This is the reason why we really do have enough support for Goldbach’s
conjecture: it has been fully verified as
a conjecture. If an exception had been found, the conjecture would have been
proved false, for this would be incompatible with the truth of ‘[I state that] it
is plausible that G’ and would from the start be a reason to deny the possibility
of Goldbach’s conjecture being a theorem.
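The weak verification procedure described above, checking sample even numbers, can be sketched in a few lines of Python (the function names are ours, chosen for illustration):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# The text's sample cases, and then every even number up to 1,000:
for n in [4, 8, 12, 124] + list(range(4, 1001, 2)):
    assert goldbach_witness(n) is not None

print(goldbach_witness(124))  # → (11, 113)
```

Each successful run verifies only the plausibility ascription, not G itself, which is exactly the distinction drawn above.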
Summarizing: in itself the conjecture is verifiable
and – as a conjecture – has been definitely
verified: it is simply true that G is highly plausible. And this justifies its cognitive meaningfulness.
What remains beyond verification is the statement affirming the necessary truth
of G. And indeed, this statement doesn’t really make sense; it has no cognitive content, since this content would consist in a proof, a mathematical procedure to verify it, which we do not have. The mistake consists in the confusion of the statement of a mere
conjecture that is true with the ‘statement’ of a theorem that does not exist.
A contrasting case is Fermat’s last theorem.
Here is how this theorem (F) is usually formulated:
F: There are no
three positive integers x, y and z
that satisfy the equation xⁿ + yⁿ = zⁿ, if n is greater
than 2.
This theorem had been only partially
demonstrated up until 1995 when Andrew Wiles finally succeeded in working out a
full formal proof. Now, someone could object here that even before Wiles’ demonstration,
F was already called ‘Fermat’s theorem.’ Hence, it is clear that a theorem can make
sense even without being proved!
There are, however, two unfortunate confusions
in this objection. The first is all
too easy
to spot. Of course, Fermat’s last theorem has a grammatical sense: it is syntactically
correct. But it would be an obvious mistake to confuse the grammatical meaning of
F with its cognitive meaning as a theorem. Even an absurd identity, for instance, ‘Napoleon is the number 7,’ has a grammatical sense.
The second confusion concerns the fact that
the phrase ‘Fermat’s theorem’ isn’t appropriate at all. We equivocally used to call
F a ‘theorem’ because Fermat wrote that he had proved it, but couldn’t put the proof on paper since the margin of his copy of Diophantus’ Arithmetica was too narrow…[21] For these reasons, we have
here a misnamed opposite of ‘Goldbach’s theorem.’ Although F was called a
theorem, it was in fact only a conjecture of the form:
[I state that] it is
plausible that F.
It was a mere conjecture
until Wiles demonstrated F, only then effectively making it a true theorem. Hence, before 1995 the
cognitive content that could be given to F was actually ‘[I state that] it is plausible
that F,’ a conjecture that was initially supported by the fact that no one had
ever found numbers x, y and z that could satisfy the equation. Indeed,
the cognitive meaning of the real theorem F, better expressed as ‘I state that F’
or ‘├ F’ (a meaning that very few really know in its entirety), should include the
demonstration or verification found by Wiles, which is no more than the application
of an exceptionally complicated combination of mathematical rules.
Some would complain that if this is the case,
then only very few people really know the cognitive meaning of Fermat’s last theorem.
I agree with this, though seeing no reason to complain. The cognitive content of
this theorem, its full thought-content, like that of many scientific statements,
is really known by very few people indeed. What most of us know is only the weak conjecture falsely called ‘Fermat’s last theorem’: we have tested F on some numbers without finding any exception.
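The weak, conjectural verification most of us can perform, testing F on particular numbers without finding an exception, amounts to a finite search. A hypothetical sketch (Wiles’ actual proof is, of course, nothing like this):

```python
def fermat_counterexample(max_val, n):
    """Exhaustively search for positive integers x <= y <= max_val with
    x**n + y**n a perfect n-th power. Wiles proved none exist for n > 2."""
    for x in range(1, max_val + 1):
        for y in range(x, max_val + 1):
            s = x ** n + y ** n
            z = round(s ** (1.0 / n))
            # check neighbors of the float root to avoid rounding slips
            for cand in (z - 1, z, z + 1):
                if cand > 0 and cand ** n == s:
                    return (x, y, cand)
    return None

for n in (3, 4, 5):
    assert fermat_counterexample(50, n) is None
print("no counterexample found for n = 3, 4, 5 up to 50")
```

Such a search can only falsify F or lend it plausibility; it can never verify the theorem itself, which is why its cognitive content as a theorem resides in Wiles’ demonstration alone.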
Finally, there are phrases like (i) ‘the least rapidly convergent series.’ For Frege, this definite description has sense but no reference (1892: 28). We can add that there is a rule that allows us always to find series that converge less rapidly than any given one, so that the candidates are potentially infinite in number. We can state this rule as L: ‘For any given convergent series, we can always find a less rapidly convergent one.’ Since L implies the truth of statement (ii) ‘There is no least rapidly convergent series,’ we conclude that (i) has no referent. Now, what is the identification rule of (i)? What is the sense, the meaning of (i)? One answer would be to say that it is given by failed attempts to identify the least rapidly convergent series while ignoring L. It would be like the meaning of any mathematical falsity. For instance,
the identity (iii) 321 + 427 = 738 is false. Now, what is its meaning? A temptation
is to classify it as senseless. But if it were senseless, it would not be false.
Consequently, I suggest that its sense resides in the failed usual ways to verify
it, which leads to the conclusion that this is a false identity. It seems reasonable
to conclude that it is such an external operation that gives a kind of cognitive
sense to a false identity. The same holds regarding false statements like 3 >
5. They express misrepresentations, incongruities demonstrating failed attempts
to apply rules in the required ways.
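Rule L admits a concrete numerical illustration. A minimal sketch under an assumption of ours (we pick geometric series, whose tails can be computed exactly): given the convergent series with ratio r, the series with ratio √r also converges but has larger tails at every stage, i.e., it converges less rapidly.

```python
import math

def geometric_tail(ratio, n):
    """Exact tail sum over k > n of ratio**k for a convergent geometric series."""
    return ratio ** (n + 1) / (1 - ratio)

r = 0.5
slower = math.sqrt(r)  # about 0.707: still below 1, so still convergent

# The sqrt-ratio series converges less rapidly: its tails stay larger.
for n in (5, 10, 20):
    assert geometric_tail(slower, n) > geometric_tail(r, n)

print("the sqrt(r) series converges, but less rapidly than the r series")
```

Iterating the square root yields ever more slowly convergent series, which is what makes the description ‘the least rapidly convergent series’ empty.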
11. Objection 8: Skepticism
about rules
In his Philosophical Investigations, Wittgenstein formulated a skeptical paradox (1984c, sec.
201) that endangers the possibility of an ongoing common interpretation of rules
and, consequently, the idea that our language may work as a system of rules responsible
for meaning. Solving this riddle interests us here because if the argument is correct,
it seems to imply that it is a mistake to accept that there are verifiability rules
consisting in the cognitive meanings of sentences.
Wittgenstein’s paradox results from the following
example of rule-following. Let’s say that a person learns a rule to add 2 to natural
numbers. If you give him the number 6, he adds 2 and writes the number 8. If you
give him the number 173, he adds 2, writing the number 175... But imagine that for
the first time he is presented with a larger number, say the number 1,000, and that
he then writes the number 1,004. If you ask why he did this, he responds that he
understood that he should add 2 up to the number 1,000, 4 up to 2,000, 6 up to 3,000,
etc. (1984c, sec. 185).
According to Saul Kripke’s dramatized version
of the same paradox, a person learns the rule of addition, which works well for
additions with numbers below 57. But when he performs additions with larger numbers,
the result is always 5. So for him 59 + 67 = 5… Afterward, we discover that he understood
‘plus’ as the rule ‘quus,’ according to which ‘x quus y = x + y
if {x, y} < 57, otherwise 5’ (1982: 9). If questioned why he understood
addition in this strange way, he answers that he found this the most natural way
to understand the rule.
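Kripke's deviant rule can be stated with complete precision. The following sketch (my own rendering, assuming the definition quoted above) contrasts the standard 'plus' with 'quus': the two functions agree on every case the learner has encountered, yet diverge on new cases.

```python
def plus(x, y):
    """The standard rule of addition."""
    return x + y

def quus(x, y):
    """Kripke's deviant rule: behaves like plus only when both
    arguments are below 57; otherwise it yields 5."""
    return x + y if x < 57 and y < 57 else 5

# The two rules agree on every case involving numbers below 57...
print(plus(12, 34), quus(12, 34))  # 46 46
# ...but diverge as soon as a larger number appears:
print(plus(59, 67), quus(59, 67))  # 126 5
```

No finite stock of past cases below 57 can discriminate between the two interpretations, which is exactly the point of the paradox.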
Now, what these two examples suggest is that
a rule can always be interpreted differently from the way it was intended, no matter
how many specifications we include in our instructions for using the rule, since
these instructions can also be differently interpreted… As Kripke pointed out, there
is no fact of the matter that forces us
to interpret a rule in a certain way rather than in any other. The consequence is
that we cannot be assured that everyone will follow our rules in an expected similar
way, or that people will continue to coordinate their actions based on them. And
as meaning depends upon following rules, we cannot be certain about the meanings
of the expressions we use. How could we be certain, in the exemplified cases, of
the respective meanings of ‘add two’ and ‘plus’? However, if we accept that there
can be no rules and therefore no meanings, then there could be no riddle since we
would not be able to meaningfully formulate the riddle.
Wittgenstein and later Kripke attempted to
find a solution to the riddle. Wittgenstein’s answer can be interpreted as saying
that we follow rules blindly, as a result of training (custom) regarding the conventions
of our social practices and institutions belonging to our way of life (1984c, secs. 198, 199, 201, 219, 241). Kripke's answer follows a similar logic: according to him, following a rule is not justified by truth-conditions derived from its correct interpretation in a correspondential (realist) way, a solution that
Wittgenstein
tried in his Tractatus. Instead, Kripke
thinks that for the later Wittgenstein correspondence is replaced by verification,
so that instead of truth-conditions what we have are assertability conditions justified
by practical interpersonal utility (1982: 71-74, 77, 108-110). These assertability
conditions are grounded on the fact that any other user in the same language community
can assert that the rule follower ‘passes the tests for rule following applied to
any member of the community’ (1982: 110).
Nevertheless, both answers are clearly wanting.
They offer a description of how rules
work, leaving unexplained why they must
work. Admittedly, the simple fact that
in our community we have so far openly coordinated our linguistic activity according
to rules does not imply that this
coordination
has to work this way, nor does it imply that it should continue to
work this way. Kripke’s answer carries, in my view, an additional burden. It overlooks the fact that assertability conditions must
include the satisfaction of truth-conditional correspondential-verificational conditions,
only adding to the explanation
of the common interpretation of rules an interpersonal social layer.
For my part, I have always believed that the
‘paradox’ should have a more satisfactory solution. A central point can be seen
as in some way already disclosed by Wittgenstein, namely, that we learn rules in
a similar way because we share a similar
human nature molded by our form of life. It
seems clear that this makes it easier for us to interpret the rules we are
taught in the same manner, suggesting that we must also be naturally endowed with
innate, internal corrective mechanisms able to reinforce consistent, conforming
behavior. (Costa 1990: 64-66)
Following this path, we are led to the decisive
solution of the riddle, which I think we owe to Craig DeLancey (2004). According to him, we are biologically predisposed
to construct and interpret statements in
the most economical (or
parsimonious) way possible. Or, as I prefer to
say, we are innately disposed to put in practice the following principle of simplicity:
PS: We should formulate
and interpret our rules in the simplest possible way.
Because of this shared principle
derived from our inborn nature as rule followers,
we prefer to maintain the interpretation of the rule ‘add 2’ in its usual form,
instead of complicating it with the further condition that we should add twice two
after each thousand. And because of the same principle, we prefer to interpret the
rule of addition as a ‘plus’ instead of a ‘quus’ addition, because with the ‘quus’ addition we would complicate the
interpretation by adding the further condition that any sum with numbers above 57
would give as a result the number 5. Indeed, it is the application of this principle
of simplicity that is the ‘fact of the matter’ not found by Kripke, which leads
us to interpret a rule in one way instead of another. It allows us to harmonize
our interpretations of semantic rules, thus solving the riddle. Furthermore, DeLancey
clarifies ‘simplicity’ by remarking that non-deviant interpretations are formally
more compressible than deviant interpretations like those considered by Wittgenstein
and Kripke. Moreover, a Turing machine would need to have a more complex and longer
program in order to process these deviant interpretations...
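DeLancey's compressibility point can be made concrete in a rough way: any natural program implementing a deviant rule must contain the standard rule as a part, plus an extra conditional clause, so its minimal description is strictly longer. The following sketch uses source-code length as a crude proxy for formal compressibility (an illustrative assumption of mine, not DeLancey's own measure).

```python
# Crude proxy for compressibility: the deviant rule's shortest
# natural program embeds the standard rule and adds a clause,
# so its description is longer.
plus_src = "lambda x, y: x + y"
quus_src = "lambda x, y: x + y if x < 57 and y < 57 else 5"

plus_rule = eval(plus_src)
quus_rule = eval(quus_src)

assert plus_rule(3, 4) == quus_rule(3, 4) == 7  # agree on small cases
print(len(plus_src), len(quus_src))  # the deviant description is longer
```

Whatever measure of simplicity one adopts, the asymmetry runs in the same direction: the deviant interpretation always costs more to specify.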
One might ask: what warrants assuming the long-term consistency of human nature across the entire population or that we are innately
equipped to develop such a heuristic principle of simplicity? The obvious answer
lies in the appeal to Darwinian evolution. Over long periods of time, a process
of natural selection has harmonized our learning capacities around the principle
of simplicity and eliminated individuals with deviant, less practical dispositions.
Thus, we have a plausible explanation of our capacity to share a sufficiently similar
understanding and meaning of semantic rules. If we add to this the assumption that
human nature and recurring patterns in the world will not change in the future,
we can be confident in the expectation that people will not deviate from the semantic
rules they have learned. Of course, underlying this last assumption is Hume’s far more challenging criticism of induction, which might remain a hidden source of concern. But this is a further issue that
goes beyond our present concerns (for a plausible approach see the Appendix of the
present chapter).[22]
Summarizing: Our shared interpretation of learned
rules only seems puzzling if we insist on ignoring the implications of the theory
of evolution, which supports the principle of simplicity. By ignoring considerations
like these, we tend to ask ourselves (as Wittgenstein and Kripke did) how it is
possible that these rules are and continue to be interpreted and applied in a similar
manner by other human beings, losing ourselves within a maze of philosophical perplexities.
For a similar reason, modern pre-Darwinian philosophers like Leibniz wondered why our minds are such that we are
able to understand each other, appealing to the Creator as producing the necessary
harmony among human souls. The puzzle about understanding how to follow rules arises from this same old perplexity.
12. Quine’s objections
to analyticity
Since I am assuming that the verifiability
principle is an analytic-conceptual statement, before finishing I wish to say a
word in defense of analyticity. I am satisfied with the definition of an analytic
proposition as the thought-content expressed
by a statement whose truth derives from the combination of its constitutive unities
of sense. This is certainly the most common and intuitively acceptable formulation.
However, W. V. O. Quine would
reject it because it seems to be based on an overly vague and obscure concept of
meaning.
The usual answer to this criticism is that there is really nothing overly vague or obscure in the concept
of meaning used in our definiens, except from the standpoint of Quine’s own scientistic-reductionist
perspective, which tends to confuse expected vagueness with lack of precision and
obscurity (See Grice & Strawson 1956:
141-158; Swinburne 1975: 225-243). Philosophy works with concepts such as meaning,
truth, knowledge, good… which are in some measure polysemic and vague, as much so as the concepts used in countless
attempts to define them. In my judgment, the effort to explain away such concepts
only by reason of their vagueness (or supposed obscurity) betrays an impatient positivist-scientistic
mental disposition, which is anti-philosophical par excellence (which doesn’t
mean to indulge the opposite: a methodology of hyper-vagueness or unjustified obscurity).
Having set aside the above definition,
Quine tried to define an analytic sentence in a Fregean way, as a sentence that
is either tautological (true because of its logical constants) or can be shown to
be tautological by the replacement of its non-logical terms with cognitive synonyms.
Thus, the statement (i) ‘Bachelors are unmarried adult males’ is analytic, because
the word ‘bachelor’ is a synonym of the phrase ‘unmarried adult male,’ which allows
us by the substitution of synonyms to show that (i) means the same thing as (ii):
‘Unmarried adult males are unmarried,’ which is a tautology. However, he finds the
word ‘synonym’ in need of explanation. What is a synonym? Quine’s first answer is
that the synonym of an expression
is another expression that can replace the first in all contexts salva veritate.
However, this answer does not work in some cases. Consider the phrases ‘creature
with a heart’ and ‘creature with kidneys.’ They are not synonymous, but are interchangeable
salva veritate, since they have the same extension. In a further attempt
to define analyticity, Quine makes an appeal to the modal notion of necessity: ‘Bachelors
are unmarried males’ is analytic if and only if ‘Necessarily, bachelors are unmarried males’ is true. But he also sees that
the usual notion of necessity does not cover all cases. Phrases like ‘equilateral
triangle’ and ‘equiangular triangle’ necessarily have the same extension, but are
not synonyms. Consequently, we must define ‘necessary,’ in this case, as the specific
necessity of analytic statements, in order for the concept to apply in all possible
circumstances... However, the ‘necessity of analyticity’ is an obscure notion, if
it really exists. Dissatisfied, Quine concludes that his argument to explain
analyticity ‘has the form, figuratively speaking, of a closed curve in space.’ (Quine
1951: 8)
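The substitution step in Quine's Fregean definition can be mimicked with a toy string replacement (an illustrative sketch only; it ignores grammatical agreement and everything that makes real synonymy philosophically hard): replacing a term with its alleged synonym should turn the sentence into one of tautological form.

```python
# Toy rendering of Quine's substitution test for analyticity:
# replacing 'bachelors' with its alleged synonym should yield
# a sentence of the tautological form 'A are A'.
synonyms = {"bachelors": "unmarried adult males"}

def substitute(sentence, table):
    """Replace each term with its listed synonym."""
    for term, synonym in table.items():
        sentence = sentence.replace(term, synonym)
    return sentence

result = substitute("bachelors are unmarried adult males", synonyms)
print(result)  # 'unmarried adult males are unmarried adult males'
```

The mechanical step is trivial; Quine's worry, of course, is that nothing non-circular certifies the synonym table itself.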
A problem emerges from Quine’s implicit assumption
that a word should be defined with the help of words that do not belong to its specific
conceptual field. Thus, for him, the word ‘analyticity’ should not be defined by
means of words like ‘meaning,’ ‘synonymy,’ ‘necessity’… which just as much as ‘analyticity’ seem too
near and unspecific in their meaning to be trusted in the construction of an adequate
definition. Nonetheless, when we consider the point more carefully, we see that
the words belonging to a definiens should be sufficiently close in their
meanings to the definiendum, simply because
in any real definition the terms of a definiens
must belong to the same semantic field as its definiendum, notwithstanding
the element of vagueness. This is why, in order to define a concept-word from ornithology,
we would not use concepts from quantum mechanics, and vice-versa. These conceptual
fields are too distant from each other. Because of this, we define ‘arthropod’ as
an invertebrate animal having an exoskeleton, all these terms being biological,
which does not compromise the definition. And considering the abstractness of the
semantic field, a kindred level of vagueness can be expected. Hence, there is nothing
especially wrong in defining analyticity using correspondingly vague words belonging
to the same conceptual field, like ‘meaning’ and ‘synonymy,’ refraining from
further elucidation.
A more specific and more serious objection
is that Quine’s attempt to define synonymy simply took a wrong turn. Since there is probably
no proper necessity of analyticity, the lack of synonymy of expressions that necessarily have the same extensions, like ‘equilateral triangle’ and ‘equiangular triangle,’ remains unexplained.
My alternative proposal consists simply in
beginning with the dictionary definition according to which:
Two words or phrases
are synonymous when they have the same or nearly the same meaning as another word
or phrase in the same language.[23]
Translating this into our terms,
this means that any expressions A and B are
(cognitively) synonymous if their semantic-cognitive rules (their expressed concepts)
are the same or almost the same. This can be tested by adequate definitions (analyses)
expressing the criteria for the application of those rules so that when these rules
are really the same, the synonymous expressions
will be called precise synonyms. However, precise synonyms
are difficult to find. Consider, for instance, the words ‘beard’ and ‘facial hair.’
These words are called synonymous because they express a similar semantic-cognitive
rule. A ‘beard’ is defined by dictionaries as ‘a growth of hair on the chin and
lower cheeks of a man’s face’ and this is considered sufficiently similar to the
expression ‘facial hair.’ However, the two terms are not precisely synonymous,
because a human being with hair on the forehead has facial hair but no beard. Conversely,
the word ‘chair’ and the expression ‘a non-vehicular seat provided with a backrest
and made for use by only one person at a time’ can be seen as precisely synonymous,
because the latter is simply the real definition
of the former. The expressions ‘creature with a heart’ and ‘creature with a kidney,’
on the other hand, are not synonymous, because
they express different semantic-cognitive rules, the first defined as a creature
with an organ used to pump blood, the second
defined as a creature with an organ used to clean waste and impurities from blood. Even if close in meaning, the expressions ‘equilateral triangle’
and ‘equiangular triangle’ are surely not precisely synonymous for the reason already
considered: the first is defined as a triangle whose three sides are equal, while
the second is defined as a triangle whose three internal angles are congruent with
each other and are each 60°. Hence, we can replace
Quine’s flawed definition of analyticity with the following more adequate definition
using the concept of precise synonymy:
A statement S is analytic (Df): It can generate a tautology
by means of substitution of precise cognitive synonyms, namely, of real definitions
expressing the same semantic-cognitive criterial rules.
The statement ‘The cognitive meaning
(e-thought-content) of a declarative sentence X = the verifiability rule
for X’ is analytic because the semantic-cognitive rules on each side of
the identity sign are identical.
A complementary point supported by Quine is
that, contrary to what is normally asserted, there is no definite distinction between
empirical and formal knowledge. What we regard as analytic sentences can always
be falsified by greater changes in our more comprehensive system of beliefs. Even
sentences of logic such as the excluded middle can be rejected, as occurs in some interpretations of quantum physics.
Regarding this point, it would not be correct
to say that in itself a formal or analytic proposition could be proved false or be falsified by new experience or knowledge. What more precisely can occur is that its domain of application can be restricted or
even lost. For example: since the development of non-Euclidean geometries, the Pythagorean Theorem has lost part of its theoretical domain, for Euclidean geometry is no longer the only useful geometry. And
since the theory of relativity has shown that physical space is better described
as Riemannian, this theorem has lost its monopoly on describing physical space.
However, this is not the same as to say that the Pythagorean Theorem has been falsified
in a strict sense. This theorem remains perfectly
true within the theoretical framework of Euclidean geometry, where we can prove
it, insofar as we assume the basic rules that constitute this geometry. This remains
so, even if Euclidean geometry’s domain of application has been theoretically restricted
with the rise of non-Euclidean geometries and even if it has lost its full applicability
to real physical space after the development of general relativity theory.
The case is different when a law belonging
to an empirical science is falsified. In this case, the law definitely loses its truth together with
the theory to which it belongs, since its truth-value depends solely on its
precise empirical application. Newtonian gravitational law, for instance, was falsified
by general relativity. It is true that it still has valuable practical applications that do not require the highest
level of accuracy. The best one could say in its favor is that it has lost only some of its truth, an idea one might try to make clear by appealing to multi-valued logic.
13. Conclusion
There is surely much more that
can be said about these issues. I believe, however, that the few but central considerations
that were offered here were sufficient to convince you that semantic verificationism,
far from being a useless hypothesis, comes close to being rehabilitated when investigated
with a methodology that does not overlook and therefore does not violate the delicate
tissue of our natural language. The fundamental questions of philosophy are as fascinating as they are difficult because of their underlying complexity and breadth. Inventing ways to make them easy brings only the relief of illusory answers.
Appendix to Chapter V
The Only Key to Solving the Humean Problem of Induction
It would be impossible
to say truly that the universe is a chaos, since if the universe were genuinely
chaotic there could not be a language to tell it. A language depends on things and
qualities having enough persistence in time to be identified by words and this same
persistence is a form of uniformity.
—J. Teichman & C. C. Evans
Here I will first reconstruct in
the clearest possible way the essentials of Hume’s skeptical argument against the
possibility of induction (Hume 1987 Book I, III; 2004 sec. IV, V, VII), viewing it separately from his amalgamated analysis of causality. My aim in
doing this is to find a clear argumentative
formulation of his argument that allows me to outline what seems to be the only adequate way to react
to it in order to re-establish the credibility of inductive reasoning.
1. Formulating a Humean
argument
According to Hume, our inductive
inferences require support by metaphysical principles
of the uniformity of nature. Although
induction can move not only from the past to the future, but also from the future
to the past and from one spatial region to another, for the sake of simplicity I
will limit myself here to the first case. A Humean principle of uniformity from
the past to the future can be stated as:
PF: The future will resemble the past.
If this principle is true, it ensures
the truth of inductive inferences from the past to the future. Consider the following
very simple example of an inductive argument justifying the (implicit) introduction
of PF as a first premise:
1. The future will resemble
the past. (PF)
2. The Sun has always
risen in the past.
3. Hence, the Sun will
rise tomorrow.
This seems at first glance a natural
way to justify the inference according to which if the Sun rose every morning in the past then it will
also rise tomorrow, an inference which could be extended as a generalization, ‘The
Sun will always rise in the future.’ We make these inferences because we unconsciously
believe that the future will be like the past.
It is at this point that the problem of induction
begins to delineate itself. It starts with the observation that the first premise
of the argument – a formulation of the principle of the uniformity of nature from
the past to the future – is not a truth of reason characterized by the inconsistency
of its negation. One could say it is not an analytic thought-content. According
to Hume, it is perfectly imaginable that the future could be very different from
the past, for instance, that in the future trees could bloom in the depths of winter
and snow taste like salt and burn like fire (1748, IV).
We can still try to ground our certainty that
the future will resemble the past on the past permanence of uniformities that once
belonged to the future, that is, on past futures. This is the inference that at
first glance seems to justify PF:
1. Futures that are already past were always similar to their own
pasts.
2. Hence, the future of the present will also resemble
its own past.
The problem with this inference
is that it is also inductive. That is,
in order to justify this induction we need to use PF, the principle that the future
will resemble the past; but PF itself is the issue. Thus, when we try to justify
PF, we need to appeal once more to induction, which will require PF again... Consequently,
the above justification is circular.
From similar considerations, Hume concluded
that induction cannot be rationally justified. The consequences are devastating:
there is no rational justification either for expectations created by the laws of
empirical science or for our own expectations of everyday life, since both are grounded
on induction. We have no reason to believe that the floor will not sink under us
when we take our next step.
It is true that we are almost always willing
to believe in our inductive inferences. But for Hume, this disposition is only due
to our psychological constitution. We are by nature inclined to acquire habits of
having inductive expectations. Once we form these expectations, they force us to
obey them almost like moths flying towards bright lights. This is an extremely skeptical
conclusion, and it is not without reason that only a few philosophers have accepted Hume’s conclusion. Most
think that something somewhere must be wrong.
There have been many interesting attempts to
solve or dissolve Hume’s problem; all of them in some way unsatisfactory.[24] I believe my approach, although
only sketched out, has the virtue of being on the right track. I want to first present
a general argument and then show how it could influence PF.
2. The basic idea
My basic idea has a
mildly Kantian flavor, but without its indigestible synthetic a priori. We can sum
it up in the view that any idea of a world
(nature, reality) that we are able to have must be intrinsically open to induction.
I see this as a conceptual truth in the same way as, say, the truth of our view
that any imaginary world must in principle be accessible to perceptual experience.
Before explaining it in more detail, I should note that my view is so close to being self-evident that
it would be strange if no one had thought of it earlier, as the epigraph at the start of this appendix shows. More technically, Keith Campbell followed a similar
clue in developing a short argument to show the inevitability of applying inductive
procedures in any world-circumstances (1974: 80-83). As he noted, in order to experience
a world cognitively – as an objectively structured
reality – we must continually apply empirical concepts, which, in turn –
if we are to postulate, learn from and use them – require a re-identification
of the designata of their applications
as identical. However, this is only possible if there is a degree of uniformity
in the world that is sufficient to allow these re-identifications. Indeed, if the
world were to lose all the regularities implicitly referred to, no concept would
be re-applicable and the experience of a world would be impossible.
Coming back to my basic idea, and understanding
the concept of world minimally as any set of empirical entities compatible with
each other[25],
this idea can be unpacked as follows. First, I consider it an indisputable truism
that a world can only be experienced and said to exist if it is at least conceivable.[26] However, we cannot conceive
of any world without some degree of uniformity or regularity. Now, since we can
only experience what we are able to conceive, it follows that we cannot experience
any world completely devoid of regularity. This brings us to the point where it
seems reasonable to think that the existence of regularity is all that is necessary
for at least some inductive procedure
to be applicable. However, if this is the case, then it is impossible for us to
conceive of any world of experience that is not open to induction. Consequently,
it must be a conceptual truth that if a world is given to us, then some inductive
procedure should be applicable to it.
There is a predictable objection to this idea:
why should we assume that we cannot conceive the existence of a chaotic world – a world devoid of regularities
and therefore closed to induction? In my view, the widespread belief in this possibility
has been a deplorable mistake, and I am afraid that David Hume was chiefly responsible
for this.[27]
His error was to choose causal regularity as the focus of his discussion, strengthening
it with carefully selected examples like those of trees blooming in winter and snow
burning like fire. This was misleading, and in what follows, I intend to explain
why.
Causal regularity is what I would call a form
of diachronic regularity, that is, one in which a given kind of phenomenon is regularly
followed by another kind. We expect the ‘becoming’ (werden) of our world to include regular successions.
However, induction applies not only to diachronic
regularities, but also to something that Hume, with his fixation on causality, did
not consider, namely, synchronic regularities.
Synchronic regularities are what we could also call structures: states of affairs that endure over time in the constitution
of anything we can imagine. The world has not
only a ‘becoming’ (werden), but also a ‘remaining’ (bleiben), with its multiple patterns of permanence. And this remaining
must also be inductively graspable.
We can make this last view clear by conceiving
of a world without any diachronic regularity, also excluding causal regularities.
This world would be devoid of change, static, frozen. It still seems that we could
properly call it a world, since even a frozen world must have regularities to be
conceivable; it must have a structure filled with synchronic regularities. However,
insofar as this frozen world is constituted by synchronic regularities, it must
be open to induction: we could foresee that its structural regularities would endure
for some time – the period of its existence – and this already allows a very strong
degree of inductive reasoning!
Considerations like this expose the real weakness
in Hume’s argument. By concentrating on diachronic patterns and thinking of them
as if they were the only regularities that could be inductively treated, it becomes
much easier to suppose the possibility of the existence of a world to which induction
does not apply or cannot be applicable, a world that nevertheless continues to exist.
To clarify these points, try to imagine a world
lacking both synchronic and diachronic regularities. Something close to this can
be grasped if we imagine a world made up of irregular, temporary, random repetitions
of a single point of light or sound. However, even if the light or sound occurs
irregularly, it will have to be repeated at intervals (as long as the world lasts),
which demonstrates that it still displays at least the regularity of a randomly
intermittent repetition open to recognition. But what if this world didn’t have
even random repetitions? A momentary flash of light… Then it could not be fixed by experience and consequently could not be said to exist. The illusion that it
could after all be experienced arises from the fact that we already understand points
of light or sounds based on previous experiences.
My conclusion is that a world absolutely deprived
of both species of regularity is as such inconceivable, hence inaccessible to experience
– a non-world, an anti-world. We cannot conceive of any set of empirical elements
without assigning it some kind of static or dynamic structure. But if that’s the
case, if a world without regularities is unthinkable, whereas the existence of regularities
is all we need for some kind of inductive inference to be applicable, then it is
impossible that there is for us a world closed to induction. And since the concept
of a world is nothing but the concept of a world for us, there is no world at all that is closed to induction.
Summarizing the argument: By focusing on causal
relationships, Hume invited us to ignore the fact that the world consists of not
only diachronic, but also synchronic regularities. If we overlook this point, we
are prone to believe that we could
conceive
of a world inaccessible to inductive inference. If, by contrast, we take into account
both general types of regularity to which induction is applicable, we realize that
a world which is entirely unpredictable,
chaotic, devoid of any regularity is impossible, because any possible world is conceivable
and any conceivable world must contain regularities, which makes it intrinsically
open to some form of induction.
One could insist on thinking that at least a nearly, though not entirely, chaotic world could exist, one with a minimum of structure or uniformity, insufficient for the application of inductive
procedures. However, this is a theoretical impossibility, for induction has a self-adjusting nature, that
is, its principles are such that they are always conceivably able to be calibrated to match any degree of uniformity
that is given in its field of application.
The requirement of an inductive basis, of repeated and varied inductive attempts,
can always be further extended, the greater the improbability of the expected uniformity.
Consequently, even a system with a minimum of uniformity requiring a maximum of
inductive searching would always end up
enabling successful induction.
These general considerations suggest a variety
of internal conceptual inferences, such as the following:
Conceivable cognitive-conceptual
experience of a world ↔ applicability of inductive procedures ↔ existence of regularities
in the world ↔ existence of a world ↔ conceivable cognitive-conceptual experience
of a world…
These phenomena are internally related so as to be derivable from one another at least extensionally, so that their existence already implies these relations. But this means, contrary to what
Hume believed, that when properly understood the principles of uniformity should
be analytic-conceptual truths, that is, truths of reason applicable in any possible
world.
3. Reformulating PF
To show how I would use the just
offered proposal to reformulate the principles of uniformity or induction, I will
reconsider in some detail PF, the principle that the future will resemble the past. If my suggestion is correct,
then it must be possible to turn this principle into an analytic-conceptual truth
constituting our only possibilities of conceiving and experiencing the world. –
I understand an analytic-conceptual thought-content to be simply one whose truth
depends only on the combination of its semantic constituents; its truth isn’t ampliative
of our knowledge, in opposition to synthetic propositions, and is such that its
denial implies a contradiction or inconsistency (Cf. Ch. V, sec. 12).
Here is a first attempt to reformulate PF in a clearly analytic form:
PF*: The future must
have some resemblance to its past.
Unlike PF, PF* can easily
be accepted as expressing an analytic-conceptual truth, for PF* can be clearly seen
as satisfying the above characterization of analyticity. Certainly, it belongs to
the concept of the future that
it is the
future of its own past. It cannot be the future of another past belonging to some
alien world. If a future had nothing to do with its past, we could not even recognize
it as being the future of its own past, because it could be the future of anything,
which seems incoherent. In still clearer words: the future of our actual world W,
call it FW, can only be the future of the past of W, that is, PW. It cannot be the
future of infinitely many possible worlds, W1, W2, W3... that have as their pasts
respectively PW1, PW2, PW3... Thus, there must be something that identifies FW as
being the future of PW, and this something can only be some degree of resemblance
in the transition.
Against this proposal, one might try to illustrate
by means of examples the possibility of complete changes in the world, only to find
that such attempts always fail. Suppose we try to imagine a future totally different
from its past, a ‘complete transformation of the world’ as described in the Book
of Revelation. It is hard to imagine changes more dramatic than those described by
St. John, since he intends to describe the end of the world as we know it.
Here is the biblical passage describing the locusts sent by the fifth angel:
In appearance the locusts
were like horses equipped for battle. And on their heads were what looked like golden
crowns; their faces were like human faces and their hair like women’s hair; they
had teeth like lions’ teeth and they wore breastplates like iron; the sound of their
wings was like the noise of horses and chariots rushing to battle; they had tails
like scorpions with stings in them, and in their stings lay their power to plague
mankind for five months.[28]
At first glance, these changes
are formidable. Nonetheless, there is nothing in this report that puts PF* at risk.
In fact, closer reflection on the example demonstrates that even PF isn’t seriously
challenged. Although these biblical locusts are indeed very strange creatures, they
are described as combinations of things already familiar to us. These things are
horses, women, hair, men, heads, teeth, scorpions’ tails with stings, human faces, etc. Both internally and externally, they include
a vast quantity of synchronic regularities, of permanent structural associations,
together with familiar diachronic associations, like the causal relationship between
the noise produced and the movement of wings or the sting of the scorpion and the
effects of its poison on humans…
In fact, were it not for these uniformities,
the apocalypse as described by St. John would not be conceivable, understandable,
or capable of being the subject of any linguistic description. The future, at least in
proportion to its greater proximity to the present, must maintain sufficient similarity
to its past to allow an application of inductive procedures to recognize the continuity
of the same world we know today.
Now one could object that maybe it is possible
that at some time in a remote future we could find a dissimilarity so great between
the future and our past that it invalidates any of our reasonably applicable inductive
procedures – a remote future that would be radically different from its past. Indeed,
it seems conceivable that a continuous sequence of small changes could in the course
of a very long period of time lead to something, if not completely different, at
least extremely different. Nevertheless, this would not discredit PF*, because its
formulation is weak enough, requiring only that some similarity must remain. However,
it also seems that this weakness of PF*, even if it does not rob it of its analytic-conceptual
character, exposes PF* to the charge of being disproportionately poor as a way to
assure the reliability of our inductive projections.
However, precisely this weakness of PF* suggests a way to improve it. It leads us
to see that the closer we get to the point of junction between the future and the
past, the greater must be the similarity between future and past, both becoming
identical at their limit, which is the present. We can approach this issue by
recalling the Aristotelian analysis of change
as always assuming the permanence of something that remains identical in a continuous
way, without gains or losses (Aristotle 1984, vol 1: Physics, 200b, 33-35); in other words, the intuitive idea is that every
change must occur on some basis of permanence.
This leads us to create another variant of
PF, namely, the principle according to which in a process of change the amount of permanence must be inversely
proportional to the period of time in which the change occurs. In other words: if there is a sequence of
changes that are parts of a more comprehensive change,
the changes that belong to a shorter sequence typically presuppose a greater number
of permanent structural (and sequential) associations than do those constituting
the more comprehensive change.
This principle can be illustrated with numerous examples. Consider a simple one:
the changes resulting from heating a piece of wax. The change from the solid state
to the liquid state presupposes
the permanence
of the same wax-like material. However, the next change, from liquid wax to carbon
ash, presupposes only the permanence of carbon atoms. If the heat then becomes much more intense, carbon will
lose its atomic structure, giving way to a super-heated plasma of subatomic particles.
We have here four successive times, t1 to t4, defining three nested periods: regarding
the shortest period, from t1 to t2, we assume that we will be left with (i) the
same wax, made up of (ii) the same carbon molecules and atoms, which in turn are
composed of (iii) the same subatomic constituents. In the longer period from t1
to t3 we assume the identity of only (ii) and (iii): carbon atoms and subatomic
particles. And in the still longer period from t1 to t4 the only things that remain
the same are (iii): subatomic constituents.
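The nesting in this example can be put schematically. Writing Perm(ta, tb) for the set of constituents assumed to persist throughout the interval from ta to tb (a notation introduced here only for illustration), we have:

```latex
\begin{aligned}
\mathrm{Perm}(t_1,t_2) &= \{\text{wax},\ \text{carbon atoms},\ \text{subatomic particles}\}\\
\mathrm{Perm}(t_1,t_3) &= \{\text{carbon atoms},\ \text{subatomic particles}\}\\
\mathrm{Perm}(t_1,t_4) &= \{\text{subatomic particles}\}\\
\mathrm{Perm}(t_1,t_2) &\supsetneq \mathrm{Perm}(t_1,t_3) \supsetneq \mathrm{Perm}(t_1,t_4)
\end{aligned}
```

The longer the interval over which the change is considered, the smaller the set of permanences it can presuppose.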
Note that this model is not restricted to changes
in the physical material world! As Leibniz saw: Natura non facit saltus [Nature makes no leaps]. The same examples repeat in every domain that
one can imagine, chemical, biological, psychological, social, economic, historical…
with the same patterns: the closer the future is to its junction with its past,
the more structural identities must be in some way assumed. For example: the process
of industrialization. The Industrial Revolution was a period of social and economic
change from an agrarian to an industrialized society, a transition that reached
its peak of upheaval in the mid-19th century. As a whole, after its second period
it included the refinement of the steam engine, invention of the internal combustion
engine, harnessing of electricity, construction of infrastructure such as railways…
and, socially, the more complete exodus of families from rural areas to large cities
where factories were constructed… However, when we choose to consider a short period
in this process, for instance, at the end of the 18th century, the only outstanding changes were probably the invention of a simple piston engine and a minor exodus
from the countryside, most characteristics of society
otherwise remaining essentially
the same.[29]
We conclude that it is intrinsic to the very structure of the world
of experience – and of possible experience
– that changes taking place in a shorter period of time tend to presuppose more
permanence than the more comprehensive long-term changes within whose course they
occur. Consequently, the future closer to its present should as a rule be more similar
to its past, and in more respects, than the more distant future will be (as already
noted, the far distant future may be almost unrecognizably different from the present).
At the point of junction between future and past (the present), no difference remains.
Regarding induction, this principle assures
that inductive predictions will become more likely the closer the future is to the
present. On this basis, we can improve the principle PF* as:
PF**: As a rule, the
closer the future is to the junction point with its own past, the more it will tend
to resemble its past, the two being indistinguishable at the point of junction (the
present).
For a correct understanding of
PF**, we must add two specifying sub-conditions:
(i) The principle should be applied to a future that is sufficiently close to its
past, not to an indefinitely distant future.
(ii) The principle must safeguard the possibility of anomalous but conceivable cases
in which states of affairs of a more distant future resemble the present more closely
than those of the near future do.
Although I admit that PF** deserves
more detailed and precise consideration, it seems to me intuitively obvious that so understood this principle
already meets a reasonable standard of analyticity.
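The tendency PF** describes can be given a toy computational illustration (the model is my own simplification, introduced only for illustration, not part of the argument): represent a world-state as a string of bits and let each time step flip one randomly chosen bit. Averaged over runs, the state’s similarity to its starting point decreases with temporal distance, and at zero distance the two are identical, mirroring the junction point of PF**.

```python
import random

def evolve(state, steps, rng):
    # Each time step flips one randomly chosen bit: small, local changes.
    s = list(state)
    for _ in range(steps):
        i = rng.randrange(len(s))
        s[i] ^= 1
    return s

def similarity(a, b):
    # Fraction of positions on which the two states agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

N = 200
rng0 = random.Random(42)
past = [rng0.randrange(2) for _ in range(N)]

def avg_sim(steps, trials=200):
    # Average similarity of the evolved future state to its 'past' state.
    return sum(similarity(past, evolve(past, steps, random.Random(t)))
               for t in range(trials)) / trials

print(avg_sim(0))                  # 1.0: at the junction point, no difference
print(avg_sim(10) > avg_sim(200))  # True: the nearer future resembles the past more
```

Nothing in the argument depends on this model; it merely exhibits, in miniature, how gradual local change yields the rule-like decay of resemblance with temporal distance that PF** asserts.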
Moreover, it is the truth of PF** which explains
why it is natural for us to think that the more distant the future, the less probable
our inductive forecasts will be. This is the very familiar case of weather forecasts:
they are presently reliable for two or three days, less so for a week or more...
It also explains why our inductive generalizations about the future cannot be applied
to a very distant future. For instance, through induction we can infer that the Sun will ‘always’ rise,
but always must be placed in quotation
marks. On the basis of induction, it makes sense to affirm that the sun will rise
tomorrow morning or even a thousand years
from now. But it defies common
sense (and
is for cosmological reasons false) to use
the same inductive basis to claim
that the
Sun will still rise every morning in seventeen billion years.
How PF** applies is circumstantially
determined. If the future is sufficiently close to its junction with the past, then
the future will be unavoidably similar
to its past. The problem, of course, is that we need criteria for judging how close
in time the future must be to its past for PF** still to apply. We can speculate
that the answer may depend on the background
represented by the domain of regularities in which we are considering the change
– a domain of regularities to which a whole system of sufficiently well-entrenched
beliefs applies.
For example: the inductive conclusion that
the Sun will rise tomorrow belongs to a domain of regularities that may someday undergo changes predicted by
contemporary cosmology. This may include a very distant future
in which dramatic changes, such as the death of the Sun, are also predictable based
on the astronomically observed fates of similar stars in our
universe.
Of course, it is always possible that the Sun
will not rise tomorrow! However, this is only conceivable at the price of an immense
loss of other well-entrenched beliefs about astronomical regularities and, subsequently,
the loss of the current intelligibility of a considerable portion of the physical
world around us. Still, what makes us regard the future occurrence of regularities
such as the Sun’s rising tomorrow as highly likely?
The ultimate answer seems to be based on the inevitable
assumption that our world will continue to exist as a system of regularities, at
least in the form prescribed by PF**. However, this assumption seems to be a
blind gamble! After all, there is
nothing preventing our whole world from suddenly disappearing. However, the impression
of a blind gamble evaporates as soon as we consider that this hypothesis is
completely unverifiable. If our whole world suddenly disappeared and there were no
other, how could we know this after we ourselves have disappeared along with it? Now,
if the hypothesis is unverifiable, it must be senseless.[30] In contrast, the
hypothesis that our world will continue to exist can be verified in the future,
hence it is meaningful. Because of this asymmetry, we are free to accept that
since we cannot really think that there will be no future at all, the
regularities of our world will need to take the form prescribed by PF**, that is, we are inevitably
led to admit that certain domains of cohesive regularities will have some
permanence.
The above outlined argument concerns just a single form of induction: from the past to the future. Nevertheless, the
attempt to specify it better and to generalize it to further
developments would be worthwhile, since it suggests a path free of insurmountable
hindrances. This may be of some
interest regarding a problem that from any other angle seems to remain disorienting
and intangible.
[1] Wittgenstein’s best reader at the
time, Moritz Schlick, echoes a similar view: ‘Stating the meaning of a sentence
amounts to stating the rules according to which the sentence is to be used, and
this is the same as stating the way in which it can be verified. The meaning of
a proposition is the method of its verification.’ (1938: 340)
[2] See, for a contrast, Carnap’s unfortunate
definition of philosophy as ‘the logic of science’ in his 1937, § 72.
[3] C. S. Peirce’s view of metaphysics
agrees with what is today the most accepted
one (Cf. Loux 2001, ix). On Peirce’s verificationism
see also Misak 1995, Ch. 3. Like me, and following Peirce, Cheryl Misak favors a
liberalized form of verificationism, opposed to the narrow forms advocated by the
Vienna Circle.
[4] See my analysis of the form of
semantic-cognitive rules in Chapter III, sec. 12,
and considerations regarding the nature of consciousness in Chapter II, sec. 11.
[5] I believe that the germ of the
verifiability principle is already present in aphorism 3.11 of the Tractatus Logico-Philosophicus
under the title ‘method of projection.’ There he wrote: ‘We use the perceptible
sign of a sentence (spoken or written) as a projection of a possible state of affairs.
The method of projection is the thinking of the sentence’s sense.’
[6] This is why there is no falsifiability
rule, as some authors like Michael Dummett have suggested (1993: 93).
[7] A justified explanation of the appeal to structural isomorphism
will be given only in Chapter VI, sec. 2-5.
[8] Appendix of Chapter I, sec. 1.
[9] For my account of analyticity, see sec. 12 of the
present chapter.
[10] This position was supported by
A. J. Ayer, Rudolf Carnap, Herbert Feigl and Hans Reichenbach (Cf. Misak 1995: 79-80).
[11] Ayer’s view wasn’t shared by all
positivists. Moritz Schlick, closer to Wittgenstein, defended the view according
to which all that the principle of verifiability does is to make explicit the way
meaning is assigned to statements, both in our ordinary language and in the languages
of science (1936: 342 f.).
[12] This distinction is inspired by
Locke’s original distinction between intuitive and demonstrative knowledge. I do
not use Locke’s distinction because, as is well known, he questionably applied it
to non-analytic knowledge. (Cf. Locke
1975, book IV, Ch. II, § 7)
[13] Obviously, such an example can
be decontextualized and thereby abused in many ways. One could say: red and blue,
for instance, can be blended to produce purple on the same surface, which is a bit like both colors… Like everything else,
examples can be stolen and then used in the most inappropriate ways.
[14] From his magnificent short story, ‘El Tintorero Enmascarado Hákim de Merv.’
[15] The difficulty made him propose
a more complicated solution that the logician Alonzo Church proved to be equally
faulty (Cf. Church 1949).
[16] I am surely not the first to notice this flaw.
See Barry Gower 2006: 200.
[17] Later Quine corrected this thesis,
advocating a verifiability molecularism restricted to sub-systems of language, since
language has many relatively independent sub-systems. However, our counter-argument
will apply to both cases.
[18] I think Galileo’s judges unwittingly
did science a great favor by sentencing him to house arrest, leaving him with nothing
to do other than concentrate his final intellectual energies on writing his scientific
testament, the Discorsi intorno a due nuove
scienze.
[19] Michael Dummett viewed the falsification rule
as the ability to recognize under what conditions a proposition is false (Cf. 1996: 62 f.). But this must be the same
as the ability to recognize that the proposition isn’t true, namely, that its verifiability
rule isn’t applicable, which presupposes that we know its criteria of applicability,
and that we are consequently able to recognize their absence.
[20] Another case is the verification of other minds. For
an explanatory attempt, see my 2011, Ch. 4.
[21] Today we know that Fermat was
only joking since the mathematics of his time did not provide the means to prove
his conjecture.
[22] Curiously, in his book Kripke considers
the criterion of simplicity, but repudiates it almost casually for the reason
that ‘although it allows us to choose between different hypotheses, it can never
tell us what the competing hypotheses
are’ (1982: 38). However, what the competing hypotheses – call them the rules x and y – ultimately are, is a metaphysically idle question, only answerable
by God’s omniscience, assuming that the concept of omniscience makes any sense.
The real paradox appears only when we can state it in the form of comparable hypotheses
like ‘plus’ versus ‘quus,’ and it is to
just such cases that we apply the principle of simplicity.
[23] Oxford Dictionaries.
[24] For example, Hans Reichenbach (1938), D. C. Williams (1942),
P. F. Strawson (1952), Max Black (1954), Karl Popper (1959)... Original as they may
be, when faced with the real difficulties, all these attempts prove disappointing. (For critical evaluation see W. C. Salmon
1966 and Laurence Bonjour 1998, Ch. 7.)
[25] For the sake of the argument, I
am abstracting here from the subject of experience... In any case, this would demand
the addition of further assumed regularities.
[26] After all, conceivability belongs to the grammatical structure
of what we understand with the term ‘world.’ The sentence ‘There are worlds that
cannot be conceived’ is contradictory, for to know the existence of any inconceivable
worlds, we must already have conceived them, at least in some vague, abstract sense.
[27] Strangely enough, the idea of a
chaotic world to which induction isn’t applicable has been uncritically assumed
as possible in the literature on the problem, from P. F. Strawson to Wesley C. Salmon.
This exposes the weight of tradition as a two-edged sword.
[28] Revelation of St. John 9, 7.
[29] One could still object by citing cases like that of someone
who suddenly awakes from a dream… But this forgets the remaining fact that it is
the very same person who was dreaming who is now awake.
[30] In its lack of sense, the question resembles the anthropic
principle. The question, ‘Why is it possible that we are able to think the world?’
loses its sense as soon as we consider that among infinitely many possible
worlds, ours is one of the few able to produce conscious beings capable of posing this
pseudo-question.