This is only a DRAFT. The final version will be published in the book PHILOSOPHICAL SEMANTICS, by Cambridge Scholars Publishing, in 2018/1
AGAINST THE METAPHYSICS OF REFERENCE: METHODOLOGICAL ASSUMPTIONS
Eine Art zu philosophieren steht nicht neben anderen wie eine Art zu tanzen neben anderen Tanzarten ... Die Tanzarten schließen sich nicht gegenseitig aus oder ein … Aber man kann nicht ernsthaft auf eine Art philosophieren, ohne die anderen verworfen oder aber einbezogen zu haben. In der Philosophie geht es demgegenüber wie in jeder Wissenschaft um Wahrheit.
[A way of philosophizing is not one way among
others, like one way of dancing among others … Ways of dancing are not mutually
exclusive or inclusive … But no one can seriously philosophize in one way
without having dismissed or incorporated others. In philosophy as in every
science, the concern is with truth.]
Ernst Tugendhat
Philosophy has no other root but the principles of Common Sense; it grows out of them, and draws its nourishment from them. Severed from this root, its honours wither, its sap is dried up, it dies and rots.
Thomas Reid
Given the commonsense
assumptions involved when we take the social role of language as a starting
point, at least part of this book must be critical. The reason is clear. The
new orthodoxy that dominates much of contemporary philosophy of language is
based on what we could call a metaphysics
of reference and meaning. Its views often focus on reference more than on
meaning, or on something like reference-as-meaning, displaying a strong version
of semantic externalism, hypostatized causalism and anti-cognitivism. I call
these views metaphysical not only because they oppose modest common sense, but
mainly because, as will be shown, they arise from sophisticated attempts to
unduly ‘transcend’ the limits of what can be meaningfully said.
One example of the metaphysics of reference is the position of
philosophers like Saul Kripke, Keith Donnellan and others on how to explain the
referential function of proper names and natural kind terms. According to them, what really matters is not our cognitive access to the world, but rather the mere appeal to external causal chains beginning with acts of baptism. What we may have in mind when using a proper name is, for them, secondary and contingent.
Another example is the strong externalist view of Hilary Putnam, John
McDowell, Tyler Burge and others, according to whom the meaning of an
expression, its understanding, thought, and even our own minds (!) in some way
belong to the external (physical, social) world. Using a metaphor always hinted
at but never spelt out, it is as if these things were floating outside,
determined by the entities referred to with words, in a way that recalls
Plotinus’ emanation, this time not from the ‘One’, but in some naturalistic
fashion, from the ‘Many.’ By writing this, I am not mocking, but only
trying to supply the right images for what is explanatorily wanting. In fact,
externalism is an unclean concept.
After refinements, externalism is defined in a vague way as the general idea
that ‘certain types of mental contents must be determined by the external
world’ (Lau & Deutsch 2014). This is an obvious truism, insofar as we
understand the expression ‘determined by the external world’ as saying that any mental content referring to the external world is in one way or another causally associated with things belonging to the external world. As Leszek Kołakowski noted, ‘if there is nothing outside myself, I am nothing’ (2001). But this is trivial enough to be accepted by a reasonable internalist like myself (or by a
very weak externalist, which in my view amounts to the same thing).
Nonetheless, externalists have proposed in their most central and radical
writings to read ‘determined’ as suggesting that the locus of meanings, beliefs, thoughts and even minds is not in our
heads, but somewhere in the external world… However, this sounds very much like
a genetic fallacy.
A third example is the view accepted by David Kaplan, John Perry, Nathan
Salmon and others, according to whom many of our statements have as their
proper semantic contents structured propositions, whose constituents
(things, properties, relations) belong to the external world alone, as if the
external world had any proper meaning beyond the meaning we give to it. As a
last example – which I examine in the present chapter – we can take the views
of John McDowell and Gareth Evans. According to them, we cannot sum up most of
the semantics of our language in tacit conventional rules that can be made
reflexively explicit, as has been traditionally assumed. Consistent with causal
externalism, their semantics tends to take the form of things that can be understood chiefly in the third person, like the neuronal machinery responsible for linguistic dispositions, which cannot become objects of reflexive consciousness.
Notwithstanding the fact that most such
ideas are contrary to the semantic intuition of any reasonable human being who
hasn’t yet been philosophically indoctrinated, they have become the mainstream
understanding of specialists. Today many theorists still view them as ‘solid’
results of philosophical inquiry, rather than crystallized products of
ambitious formalist reductionism averse to cognitivism. It is true that their proponents have in the meantime rhetorically softened these extreme views, though they still hold them in vaguer, more elusive terms. However, if taken too seriously,
such ideas can both stir the imagination of unprepared thinkers and,
more seriously, limit their scope of inquiry.
In the course of this book, I intend to make plausible the idea that the
metaphysics of reference is far from having found the ultimate truth of the
matter. This is not the same, I must note, as rejecting the originality and philosophical relevance of its main arguments. If I did reject them on this ground, there would be no point in discussing them here. Such philosophical arguments usually conceal insight beneath the illusions they suggest, and they remain of interest even if they are ultimately flawed. As such, they require not additional support, but careful critical analysis. In
the process of disproving them, we could develop views with greater explanatory
power, since philosophical progress is very often dialectical. For this reason,
I think we should judge the best arguments of the metaphysics of
reference in the same critical way we value McTaggart’s argument against the
reality of time or Berkeley’s remarkable arguments against materialism.
Consider Hume’s impressive skeptical arguments to show there is nothing in the world except bundles of ideas – an absurd conclusion that was first countered by
Thomas Reid. What all these arguments surely did, even if we are unable to
agree with them, was to draw illusory consequences from insufficiently known
conceptual structures, presenting in this way real challenges to philosophical
investigation, useful insofar as they force us to answer them by more deeply
analyzing the same structures, as they really are. Indeed, without the
imaginative and bold revisionism of the metaphysicians of reference, without
the challenges and problems they presented, it is improbable that corresponding
competing views would ever acquire enough intellectual fuel to get off the
ground.
1. Common sense and meaning
To contend with the metaphysics of
reference, some artillery pieces are essential. They are methodological in
character. The first concerns the decision to take seriously the so often
neglected fundamental principles of common sense and natural language
philosophy, respectively assumed by analytic philosophers like G. E. Moore and
the later Wittgenstein. According to philosophers of this lineage, we
should seek the starting point of our philosophical arguments as much as
possible in pre-philosophical commonsense intuitions usually reflected in our
natural language. The link between common sense and natural language is easy to
understand. We should expect that commonsense intuitions – often due to
millennia of cultural sedimentation – will come to be strongly mirrored in our
linguistic forms and practices.
As Noah Lemos wrote, we can characterize commonsense knowledge as:
...a set of truths that we know
fairly well, that have been held at all times and by almost everyone, that do
not seem to be outweighed by philosophical theories asserting their falsity,
and that can be taken as data for assessing philosophical theories (2004: 5).
Indeed, commonsense truths seem to
have always reconfirmed themselves, often approaching species wisdom. Examples
of commonsense statements are: ‘Black isn’t white,’ ‘Fire burns,’ ‘Material
things exist,’ ‘The past existed,’ ‘I am a human being,’ ‘I have feelings,’ ‘Other
people exist,’ ‘The Earth has existed for many years,’ ‘I have never been very
far from the Earth,’… (e.g., Moore
1959: 32-45). Philosophers have treasured some of these commonsense statements
as particularly worthy of careful analytical scrutiny. These include: ‘A thing
is itself’ (principle of identity), ‘The same thought cannot be both true and
false’ (principle of non-contradiction), ‘I exist as a thinking being’ (version
of the cogito), ‘The external world
is real’ (expressing a realist position on the external world’s existence), and
even ‘A thought is true if it agrees with reality’ (adequation theory of
truth).
The most influential objection to the validity of commonsense principles
is that they are not absolutely certain. Clearly, a statement like ‘Fire burns’
isn’t beyond any possibility of falsification. Moreover, science has truly
falsified many commonsense beliefs. Einstein’s relativity theory decisively
refuted the commonsense belief that the length of a physical object remains the
same independently of its velocity. But there was a time when people regarded
this belief as a self-evident truth!
This latter kind of objection is particularly important in our context,
because metaphysicians of reference have made this point to justify philosophy
of language theories that contradict common sense. Just as in modern physics
new theories often conflict with common sense, they feel emboldened to advance
a new philosophy whose conclusions depart radically from common sense and
natural language. As Hilary Putnam wrote to justify the strangeness of his
externalist theory of meaning:
Indeed, the upshot of our discussion
will be that meanings don’t exist in quite the way we tend to think they do.
But electrons don’t exist in quite the way Bohr thought they did, either.
(Putnam 1978: 216)
One answer to this kind of
comparison emphasizes the striking differences between philosophy of meaning
and physics: the way we get meanings is much more direct than the way we
discover the nature of subatomic particles. We make meanings; we don’t make
electrons. We find subatomic particles by empirical research; we don’t find
meanings: we establish them. We don’t need to read Plato’s Cratylus to realize that the meanings of our words are dependent on
our shared semantic customs and conventions.
2. Critical common-sensism
Nonetheless, a key question remains
unanswered: how certain and indisputable are our commonsense intuitions? C. S.
Peirce, undermining Thomas Reid’s unshakeable belief in common sense and based
on his own thesis of the inevitable fallibility
of human knowledge, proposed replacing traditional common-sensism with what he
called critical common-sensism. According to this theory, slow changes
really do take place in commonsense beliefs, even if not in our most central
beliefs. This change can occur particularly as a response to scientific
progress. Consequently, common sense in general, though highly reliable, is not beyond any possibility of doubt. Hence, for heuristic reasons we should maintain a critical attitude and always be ready to submit commonsense views to the scrutiny of reasonable doubt (cf. Peirce
1905: 481-499).
The idea that our commonsense views are open to revision has been
attacked from various standpoints. As we will see, one argument against it is
that scientific progress has not altered the most proper forms of commonsense
views. Another is that we cannot use falsification to disprove the claims of
common sense, because this would require a criterion
to distinguish true sentences from false ones. This criterion, however, could not
itself rest on common sense, since this would involve circularity…
The answer to this last objection is that it isn’t necessarily so.
First, because commonsense beliefs have different levels of reliability and
form a hierarchy (for instance, ‘I exist’ is clearly more reliable than ‘fire
burns’). Thus, it seems possible to employ the most trustworthy commonsense
beliefs, possibly in combination with scientific beliefs, to falsify or at
least restrict the domain of application of some less reliable commonsense
beliefs. Moreover, it could very well be that the most fundamental commonsense
beliefs can be so intrinsically justified that an analytical justification may
be all that philosophy requires.[1] Anyway, insofar as some
commonsense beliefs seem vulnerable to refutation, it seems advisable to
preserve the attitude of critical common-sensism.
3. Ambitious versus Modest Common Sense
I do not have the ambition to end
debates over the ultimate value of common sense. However, I believe I can
demonstrate that two deeply ingrained objections against the validity of
commonsense principles are seriously flawed, one based on the progress of
science and the other based on changes in our worldviews (Weltanschauungen). The first is that science defeats common sense.
This can be illustrated by the claim attributed to Albert Einstein that common
sense is a collection of prejudices acquired by the age of eighteen… (Most
physicists are philosophically naïve.) Changes in worldviews are
transformations in our whole system of beliefs, affecting deeply settled ideas
like moral values and religious beliefs. In my view, these two charges against
common sense are faulty because they arise from confusion between misleading ambitious formulations of commonsense
truths and their authentic formulations, which I call modest ones.
I intend to explain my point by beginning with a closer examination of
objections based on the progress of science. With regard to empirical science,
consider the sentences:
(a) The Earth is a flat disk with land in the
center surrounded by water.
(b) The sun is a bright sphere that revolves
around the Earth daily.
(c) Heavy bodies fall more rapidly than light
ones, disregarding air resistance.
(d) Time flows uniformly, even for a body moving
near the speed of light.
(e) Light consists of extremely small particles.
According to the objection, it is
widely known that science has disproved all these once commonsense statements.
Already in Antiquity, Eratosthenes of Alexandria was able not only to disprove
the Homeric view that (a) the Earth is a flat disk rimmed by water, but was
even able to measure the circumference of the Earth with reasonable precision.
Galileo showed that (b) and (c) are false statements, the first because the Earth
circles the sun, the second because in a vacuum all bodies fall with the same
acceleration. And Einstein’s relativity theory predicted that time passes increasingly slowly for a moving body as it approaches the speed of light, falsifying
statement (d). Bertrand Russell once pointed out that the theory of relativity
showed that statement (d), like some other important commonsense beliefs,
cannot withstand precise scientific examination (cf. Russell 1925, Ch. 1; Popper 1972, Ch. 2, sec. 2). Finally,
statement (e), affirming the seemingly commonsense corpuscular theory of light (defended by Newton, but already suggested in Antiquity), has been judged to be mistaken, since light consists of transverse waves (Huygens-Young theory), even though under certain conditions it behaves as though it consisted of particles (wave-particle duality).
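To make the failure of (d) concrete: special relativity replaces the commonsense assumption of uniform time with the standard Lorentz time-dilation factor. A clock moving at velocity v relative to an observer is measured to run slow according to

```latex
% Time dilation in special relativity:
% \Delta t' is the interval the observer measures for the moving clock,
% \Delta t is the clock's proper time, c is the speed of light.
\Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}}
```

At everyday speeds this correction is negligible – for v = 300 m/s, roughly the speed of a jet airliner, the factor differs from 1 by only about 5 × 10⁻¹³ – which is why the modest statement (d′) below, restricted to our ordinary surroundings, survives the refutation of its ambitious extrapolation (d).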
A point I wish to emphasize, however, is that none of the five above-cited statements legitimately belongs to correctly understood common
sense – a sense I call ‘modest.’ If we examine these statements more closely,
we see they are in fact extrapolations
grounded on statements of modest common sense. These extrapolations are of
speculative interest and were made in the name of science by scientists and
even by philosophers who projected ideas of common sense into new domains that
would later belong to science. In my view, true statements of common sense –
the modest statements for which (a), (b), (c), (d) and (e) could be the
corresponding non-modest extrapolations – are respectively the following:
(a’) The Earth is flat.
(b’) Each day the sun crosses the
sky.
(c’) Heavier bodies fall more
rapidly than lighter ones.
(d’) Time flows uniformly for all
bodies around us, independently of their motion.
(e’) Light consists of rays.
Now, what is at stake is that these statements have been made for thousands of years and have always been confirmed by
everyday observation. It is obvious that (a’) is a true statement if we
understand it to mean that when we look at the world around us without having
the ambition to generalize this observation to the whole Earth, we see that the
landscape is obviously flat (discounting hills, valleys and mountains).
Statement (b’) is also true, since it is prior to the distinction between the real and the apparent motion of the sun. Given that distinction, we know that the sentence ‘The sun crosses the sky each day’ can be true without implying that the sun revolves around the Earth. All it affirms is that in equatorial and sub-equatorial
regions of the Earth we see that each day the sun rises in the East, crosses
the sky, and sets in the West, which no sensible person would ever doubt.[2] Even after science proved
that bodies of different masses fall with the same acceleration in a vacuum,
statement (c’) remains true for everyday experience. After all, it only affirms
the commonplace notion that under ordinary conditions a light object such as a
feather falls much more slowly than a heavy one such as a stone... Statement
(d’) also remains true, since it concerns the movements of things in our
surroundings, leaving aside extremely high speeds or incredibly accurate
measurements of time. (In everyday life, one would never need to measure time dilation, which becomes appreciable only as a body approaches the speed of light and thus has nothing to do with our daily experience. No one ever comes
home from a two-week bus trip to discover that family members are now many
years older than before). Finally, (e’) has been accepted, at least since
Homer, as is shown by his poetic epithet ‘rosy-fingered dawn.’ And we often see
sunrays at dawn or dusk or peeping through gaps in the clouds on an overcast
day.
But then, what is the point in comparing statements (a)-(b)-(c)-(d)-(e)
with the corresponding statements (a’)-(b’)-(c’)-(d’)-(e’), making the first
set refutable by science, while the latter statements remain true? The answer
is that scientifically or speculatively motivated commonsense statements
exemplified by (a)-(b)-(c)-(d)-(e) have very often been viewed equivocally, as
if they were legitimate commonsense statements. However, statements of modest
common sense like (a’)-(b’)-(c’)-(d’)-(e’) are the only ones naturally
originating from community life, being omnipresent in the most ordinary
linguistic practices. They continue to be perfectly reliable even after Galileo
and Einstein, since their truth is independent of science. The contrast between
these two kinds of example shows how mistaken the claim is that
many or most commonsense truths have been refuted by science.[3] What science has refuted
are extrapolations of commonsense truths by scientists and philosophers who
have projected such humble commonsense truths beyond the narrow limits of their
original context. If we consider the aforementioned distinction, we find no conflict between the discoveries of science and the claims of commonsense wisdom, including ones used as examples by philosophers like G. E. Moore.
I do not claim modest commonsense truths are in principle irrefutable,
but only that no one has managed to refute them. Nothing warrants, for
instance, asserting that from now on the world around us will be different in
fundamental ways. A statement like (b’) can be falsified. Perhaps for some
unexpected reason the Earth’s rotation on its axis will slow down so much that
the sun will cease its apparent movement across the sky. In this case, (b’)
would also be refuted for our future expectations. But even in this case, (b’) remains
true concerning the past, while the corresponding ambitious extrapolation (b)
has always been false. In fact, all I want to show is that true commonsense
statements – modest ones – are much more sensible than scientifically oriented
minds believe, and science has been unable to refute them, insofar as we take
them at their proper face value.
Similar reasoning applies to the a
priori knowledge of common sense. To justify this new claim, consider first
the case of statements like (i) ‘Goodness is praiseworthy,’ which is
grammatically identical with statements like (ii) ‘Socrates is wise.’ Both have
the same superficial subject-predicate grammatical structure. Since in the
first case the subject ‘Goodness’ does not designate any object
accessible to the senses, Plato would have concluded that this subject must
refer to ‘goodness in itself’: the purely intelligible idea of goodness, existing in an eternal and immutable non-visible
realm only accessible to the intellect. Plato reached his conclusion based on
the commonplace grammatical distinction between subject and predicate found in
natural language. Under this assumption, he was likely to see a statement like
(iii) ‘Goodness in itself exists’ as a commonsensical truth. In fact,
according to his doctrine, an a priori
truth.
However, we know that with Frege’s innovation of quantificational logic
at the end of the 19th century, it became clear that statements like (i) should
have a deep logical structure that is much more complex than the
subject-predicate structure of (ii). Statement (i) should be analyzed as saying
that all good things are praiseworthy, or (iv) ‘For all x, if x is good, then x is
praiseworthy,’ where the supposed proper name ‘Goodness’ disappears and is
replaced by the predicate ‘… is good.’ This new kind of analysis reduced
considerably the pressure to countenance the Platonic doctrine of ideas.
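In the notation of modern predicate logic, the contrast between the surface form and the Fregean deep form of (i) can be displayed as follows (here G and P abbreviate the predicates ‘… is good’ and ‘… is praiseworthy’):

```latex
% Surface (subject-predicate) reading, on the model of (ii) 'Socrates is wise':
%   P(\text{Goodness})  -- 'Goodness' treated as a singular term
% Fregean quantificational analysis (iv), in which 'Goodness' disappears
% in favor of the predicate '... is good':
\forall x \, (Gx \rightarrow Px)
```

Nothing in the analyzed form (iv) refers to an abstract object named ‘Goodness’; the apparent singular term has been dissolved into a quantifier and a predicate, and with it the grammatical pressure toward Platonic hypostasis.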
However, the suggestion that the subject ‘Goodness’ refers to an
abstract idea clearly does not belong to modest common sense, and statement (iii),
‘Goodness in itself exists,’ isn’t even inscribed in our natural language. It
again belongs to ambitious common sense. Statement (iii) was a speculative extrapolation by a philosopher based on an implicit appeal to the superficial grammar of natural language, and though it was probably a bad choice, it would be unjust to blame modest common sense and our ordinary intuitions about subject-predicate grammar for it. Finally, it is wise to remember that
quantificational logic has not undermined the (commonsensical) grammar
of our natural language; it has only selected and made us conscious of vastly
extended fundamental patterns underlying the representative function of natural
language.
What all these examples do is to undermine the frequently made claim
that scientific progress contradicts common sense. Scientific discoveries only
refute speculative extrapolations of common sense and natural language made by
scientists and philosophers, such as the idea that the Sun revolves around the
Earth or that there is a purely intelligible world made up of abstract ideas
like that of Goodness in itself. But nothing of the sort has to do with the
explanations given by modest common sense, the only ones long established by
the shared practical experience of mankind over the course of history.
4. Resisting changes in worldviews
Finally, I wish to consider
commonsense ideas that are challenged by changes in our worldviews. This is, for
instance, the case with the belief that a personal God exists or that we
have minds independently of our bodies. The objection is the following. The
overwhelming majority of cultures accept a God (or gods) and the soul as
undeniably real. In Western Civilization for the last two thousand years,
society has even sanctioned denial of these beliefs with varying degrees of
severity, sometimes even resorting to capital punishment. Although they were
once commonsense beliefs, today no one would say that they are almost
universally accepted. On the contrary, few scientifically educated persons would
agree with them. Consequently, it seems that commonsense ideas can change in
response to changes in our worldviews...
My reaction to this does not differ very much from my reaction to the
objection contrasting common sense with the progress of science. Beliefs
regarding our worldviews lack universality, not really belonging to what
I call modest common sense. There are entire civilizations, particularly in
Asia, where the idea of a personal God is foreign to the dominant religion.
Regarding the soul, I remember a story told by an anthropologist who once asked
a native Brazilian what happens after people die. The native answered: – ‘They
stay around.’ – ‘And later?’ asked the anthropologist. – ‘They go into some
tree.’ – ‘And then?’ – ‘Then they disappear’...[4] The lack of concern was
evident. And the unavoidable conclusion is that beliefs in a personal God and an eternal soul do not enjoy the kind of universality that would be expected of modest common sense; if they are said to belong to common sense, it must be an ambitious one. In fact, these beliefs seem to result from the distortion of
ordinary views through wishful thinking, which has often happened in
Western culture.[5]
Natural language also supports the view that these beliefs are not
chiefly commonsensical: a person holding religious beliefs usually does
not say he knows that he has a soul independent of his body… He prefers to claim he believes in these things. And even this belief has a particular
name: ‘faith,’ which is belief not supported by reason and observation (against
faith there are no arguments). On the other hand, the same person would never
deny that he knows there is an
external world and that he knows this
world existed long before he was born… Modest commonsense knowledge is not a
question of wishful thinking or non-rational faith.
What all these arguments suggest is that modestly understood commonsense truths – together with the very
plausible discoveries of real science – can reasonably be said to form the
basis of our rationality, the highest tribunal of reason. Furthermore,
since science itself can only be constructed starting from a foundation of
accepted modest commonsense beliefs, it does not seem possible, even in
principle, to deny modest common sense as
a whole on the authority of science without also having to deny the very
foundations of rationality.
Not only do science and changes in our worldview seem unable to refute
modest common sense, even skeptical hypotheses cannot do this in the highly
persuasive way one could expect. Suppose, for instance, that radical skeptics
are right, and you discover that until now you have lived in what was just an
illusory world… Even in this case, you would be unable to say that the world
where you lived until now was unreal
in the most important sense of the word. For that world would still be fully
real in the sense that people perceived it with maximal intensity, and it was
independent of the will, was interpersonally accessible and obeyed natural
laws… These are criterial conditions that, when satisfied, create our
conventional sense of reality, a sense indefeasible by skeptical scenarios (see
Ch. VI, sec. 29).
5. Primacy of Established Knowledge
The upshot of the comparison between
modest common sense and science is that we can see science as not opposed to
modest common sense, but rather as its
proper extension, so that both are mutually supportive. According to this
view, science is expanded common sense. Contrary to Wilfrid Sellars (1962:
35-78), the so-called ‘scientific image of the world’ did not develop in
opposition to or even independently of the old ‘manifest image of the world,’
for there is no conflict between them. This conclusion reinforces our
confidence that underlying everything we can find commonsense truths, insofar as
they are satisfactorily identified and understood.
In endorsing this view, I do not claim that unaided modest commonsense
truth can resist philosophical arguments, as philosophers like Thomas Reid have
assumed. One cannot refute Berkeley’s anti-materialism by kicking a stone, or
answer Zeno’s paradox of the impossibility of movement by putting one foot in
front of the other. These skeptical arguments must be wrong, but to disprove them, philosophical arguments are needed to show why they only seemingly make sense, grounding their rejection at least partially in other domains of common sense, if not in science – something achieved only by the comprehensiveness of philosophical reasoning. So, what
I wish to maintain is that the principles of modest common sense serve as the
most reliable assumptions and that some fundamental modest commonsense
principles will always be needed, if we do not wish to lose our footing in
everyday reality.
I am not proposing that a philosophy based on modest common sense and
its effects on natural language intuitions would be sufficient. It is
imperative to develop philosophical views compatible with and complementing
modern science. We must construct philosophy on a foundation of common sense informed by science. That is: insofar as
formal reasoning (logic, mathematics…) and empirical science (physics, biology,
psychology, sociology, neuroscience, linguistics...) can add new extensions and
elements beyond modest commonsense principles, and these extensions and
elements are relevant to philosophy, they should be taken into account. As we
saw above, it was through the findings of predicate calculus that we came to
know that the subject ‘goodness’ in the sentence ‘Goodness is praiseworthy’
should not be logically interpreted as a subject referring to a Platonic idea,
since what this sentence really means is ‘For all x, if x is good, x is praiseworthy.’
I will use the term established
knowledge for the totality that includes modest commonsense knowledge and
all the extensions the scientific community accepts as scientific knowledge.
Any reasonable person with the right information would agree with this kind of
knowledge, insofar as he was able to properly understand and evaluate it. It is
in this revised sense that we should reinterpret the Heraclitean dictum that we must rely on common
knowledge as a city relies on its walls.
The upshot of these methodological remarks is that we should judge the
plausibility of our philosophical ideas against the background of established
knowledge, i.e., comparing them with the results of scientifically informed
common sense. We may call this the principle of the primacy of established knowledge, admonishing us to make our
philosophical theses consistent with it. Philosophical activity, particularly
as descriptive metaphysics,[6] should seek reflective equilibrium with the widest possible range of established
knowledge, the knowledge mutually supported by both modest common sense and
scientific results. This is the ultimate source of philosophical credibility.
Finally, if we find inconsistencies between our philosophical theories
and our established knowledge, we should treat them as paradoxes of thought, even
if they are very instructive, and should search for arguments that
reconcile philosophical reflection with established knowledge. Lacking
reconciliation, we should treat philosophical theses only as proposals, even if they are stimulating
from a speculative viewpoint, as is the case of revisionary metaphysics
superbly exemplified by Leibniz, Berkeley and Hume and in considerable
measure also by most American theoretical philosophers since W. V. O. Quine.
This does not mean that their results require acceptance as ‘solid’
discoveries, but rather that they deserve attentive consideration, the sort we
grant to the best cases of expansionist scientism. To proceed otherwise can
lead us down the slippery slope to dogmatism.
6. Philosophizing by examples
We must complement our
methodological principle of the primacy of established knowledge with what
Avrum Stroll called the method of
philosophizing by examples. He himself used this method to construct
relevant arguments against Putnam’s externalism of meaning (Stroll 1998, x-xi).
Stroll was a Wittgenstein specialist, and Wittgenstein’s therapeutic
conception of philosophy directly inspired his approach. According to
Wittgenstein, at least one way of doing philosophy is by performing
philosophical therapy. This therapy consists in comparing the speculative use
of expressions in philosophy – which is generally misleading – with a variety
of examples, most of them drawn from everyday usage – where these
expressions earn their proper meanings – using a method of contrast to clear up
confusion. He thought this therapy was only possible through meticulous
comparative examination of various real and imaginary concrete examples of
intuitively correct (and even incorrect) uses of expressions. This would
make it possible to clarify the true meanings of our words, so that the hidden
absurdities of metaphysics would become evident... Since contemporary
philosophy of language tends to be unduly metaphysically
oriented, and in this way diametrically opposed to the kind of philosophy
practiced by Wittgenstein, a similar critique of language, complemented by
theoretical reflection, is what much of contemporary philosophy needs to
find its way back to truth.
I intend to show that today’s metaphysics of reference and meaning
suffers from a failure to consider adequately, above all, the subtle nuances of
linguistic praxis. It suffers from an accumulation of potentially obscurantist
products of what Wittgenstein called ‘conceptual houses of cards’ resulting
from ‘knots of thought’ – subtle semantic equivocations caused by a pressing
desire for innovation combined with a lack of more careful attention to
nuanced distinctions of meaning that expressions receive in different contexts
where they are profitably used.
One criticism of Wittgenstein’s therapeutic view of philosophy is that
it would confine philosophy to the limits of the commonplace. Admittedly, there
is no good reason to deny that the value of philosophy resides largely in its
theoretical and systematic dimensions, in its persistence in making substantive
generalizations. I tend to agree with this, since I also believe that in its
proper way philosophy can and should be theoretical, even speculatively
theoretical. Nonetheless, I think we can to a great extent successfully
counter this objection to Wittgenstein’s views, first interpretatively and then
systematically.
From the interpretative side, we have reason to think that the objection
misunderstands some subtleties of Wittgenstein’s position. The most
authoritative interpreters of Wittgenstein, G. P. Baker and P. M. S. Hacker,
insisted that he did not reject philosophical theorization tout court. In rejecting
philosophical theorizing, he was opposing scientism:
the kind of philosophical theorization that mimics science. Scientism
tries to reduce philosophy itself to science in its methods, range and
contents, as he already saw happening in logical positivism.[7] Instead, he would
countenance a different sort of theorization, particularly the ‘dynamic,’[8] the ‘organic’ instead of
‘architectonic’ (Wittgenstein 2001: 43) – a distinction he seems to have
learned from Schopenhauer (Hilmy 1987: 208-9). This helps explain why, in a
famous passage of Philosophical
Investigations, he argued that it is both possible and even necessary to
construct surveillable representations (übersichtliche Darstellungen). These can
show the complex logical-grammatical structure of the concepts making up the
most central domains of understanding. As he wrote:
A main source of our failure to
understand is that we do not command a clear view of the use of our words – Our
grammar is lacking in this sort of surveillability. A surveillable
representation produces just that understanding which consists in ‘seeing
connections’; hence the importance of finding and inventing intermediate cases.
The concept of surveillable representation is of fundamental significance for
us. It earmarks the form of account we give, the way we look at things (Is this
a ‘Weltanschauung’?). (Wittgenstein
1984c, sec. 122)
Now, in a sense a surveillable
representation must be theoretical, since it must contain generalizations, and
generalization constitutes the ultimate core of what we might call a 'theory.'[9] If we agree that all
generalizations are theoretical, then any surveillable representation, since it
must contain generalizations, must also be theoretical.
Moreover, the addition of intermediate
connections already existent
but not explicitly named by the expressions of ordinary language enables us to
make explicit previous conventions that serve as links connecting a multitude
of cases. It is possible that because of the generality and function of these
links, they never need to emerge in linguistically expressible forms (consider,
for instance, our MD-rule for proper names). Expositions of these links are
properly called ‘descriptive’, insofar as they are already present under the
surface of language. But it is acceptable to call them ‘theoretical’ – in the
sense of a description of general principles inherent in natural language – if
they are intended to be the right way to assure the unity in diversity that our
use of expressions can achieve.
The
addition of intermediate connections helps to explain why ordinary language
philosophy, as initially developed by Gilbert Ryle and J. L. Austin, gradually
transformed itself into far more liberal and theoretical forms of philosophy
inspired by natural language that we can already find in works of P. F.
Strawson and later in H. P. Grice[10] and John Searle. It also
helps to justify the introduction of new technical terms to fill the gaps in natural
language. Terms like ‘language-game,’ ‘grammatical sentence’ and even
‘surveillable representation’ support this point in Wittgenstein’s own
writings. In fact, even Austin, the chief defender of a quasi-lexicographical
ordinary language analysis, did not eschew the creation of new technical terms. Expressions
like ‘locutionary act’ (composed of ‘phonetic’, ‘phatic’ and ‘rhetic acts’),
‘illocutionary act’ and ‘perlocutionary act’ (1962, Lect. VIII) were created as
the only way to express – guided by reasoning on interactive linguistic
activity – fundamental deep structures totally unexpressed in our normal usage.
Now, from the systematic argumentative side, we can say that
independently of the way we interpret Wittgenstein, there are good
reasons to believe theoretical considerations are indispensable. An important
point is that philosophy can only be therapeutic or critical because its work
is inevitably based on theoretical (i.e., generalized) assumptions that make
possible its therapeutic efficacy. Usually Wittgenstein did not explicitly
state or develop the assumptions needed to make his conceptual therapy
convincing. He was an intuitive thinker in the style of Heraclitus or
Nietzsche who all too often did not develop his insights beyond the
epigrammatic level. In any case, such assumptions are inevitable, and the
result is the same: The critical (therapeutic) and the more constructive
(theoretical) searches for surveillable representations can be understood as
two complementary sides of the same analytical coin (Costa 1990: 7 f.).
Theoretical assumptions were the indispensable active principle of his
therapeutic potions.
Recapitulating, we have found two main methodological principles for
orienting our research in this book:
A. The principle of the primacy of established knowledge (our
principle of all principles), according to which modest common sense,
complemented by scientific knowledge, constitutes the highest tribunal of
reason in judging the plausibility of philosophical views.
B. The method of philosophizing by examples, according to which the best way to orient
ourselves in the philosophical jungle is to test our ideas in all possible
cases by analyzing a sufficient number of different examples. If we do not use
this method, we risk losing ourselves in a labyrinth of empty if not fallacious
abstractions.
Oriented by the two above-considered
methodological principles, I intend to perform two tasks. The first one
is to revive some old and unjustly dismissed philosophical ideas, like
descriptivism, the role of facts as the only proper truthmakers, the view of
existence as a higher-order property, the verificationist view of meaning, the
correspondence theory of truth… The second is to offer some linguistic
criticism. I intend to show that the most positive and challenging
theses of the metaphysics of reference – even if original and illuminating –
are no more than sophisticated conceptual illusions.
7. Tacit knowledge of meaning:
traditional explanation
I will assume the practically
indisputable notion that language is a system of signs basically governed by
conventionally grounded rules, including semantic ones. Linguistic conventions
are rules obeyed by most participants in the linguistic community. These
participants expect other participants to comply with similar or complementary
rules and vice-versa, even if they aren’t really aware of them (cf. Grice 1989, Ch. 2; Lewis 2002: 42).
According to this view, the sufficiently shared character of language
conventions is what makes possible the use of language to communicate thoughts.
One of the most fundamental assumptions of the old orthodoxy in
philosophy of language is that we lack
awareness of the effective structures of semantically relevant rules
governing the uses of our language’s most central conceptual expressions. We
know how to apply the rules, but the rules are not available for explicit
examination. Thus, we are unable to command a clear view of the complex network
of tacit agreements involved. One reason is the way we learn expressions in our
language. Wittgenstein noted that we learn the rules governing our linguistic
expressions by training (Abrichtung), that is, through informal
practice, imitation and correction by others who already know how to use the
words properly. Later analytic philosophers, from Gilbert Ryle to P. F.
Strawson, Michael Dummett and Ernst Tugendhat, have always insisted that we do
not learn the semantically relevant conventions of our language (i.e., the
semantic-cognitive rules determining referential use of expressions)
through verbal definitions, but rather in non-reflexive, unconscious ways.
Tugendhat wrote that we learn many of these rules in childhood through
ostension by means of positive and negative examples given in interpersonal
contexts: other speakers confirm them when correct and disconfirm them when
incorrect. Hence, the final proof that we understand these rules is
interpersonal confirmation of their correct application (Tugendhat & Wolf
1983: 140). For this reason, it is often hard if not impossible to obtain an
explicit verbal analysis of the meaning of an expression that reveals its
meaning-rules. Using Gilbert Ryle’s terms, with regard to these meaning-rules
we have knowing how, skill,
competence, an automatized ability that enables us to apply them correctly; but
this is insufficient to warrant knowing
that, namely, the capacity to report
what we mean verbally (1990: 28 f.).
This non-reflexive learning of semantic rules applies particularly to
philosophical terms like ‘knowledge,’ ‘consciousness,’ ‘understanding,’
‘perception,’ ‘causality,’ ‘action,’ ‘free will,’ ‘goodness,’ ‘beauty,’ which
are central to our understanding of the world (Tugendhat 1992: 268). Because of
their more complex conceptual structure and internal relationships with
other central concepts, these concepts are particularly elusive and resistant
to analysis. This insight certainly also applies to conceptual words from
philosophy of language, like ‘meaning,’ ‘reference,’ ‘existence’ and ‘truth,’
which will be examined later in this book. Finally, complicating things still
more, relevant concepts are also in some sense empirically grounded and not
completely immune to additions and changes resulting from the growth of our
knowledge. For instance: until recent advances in neuroscience, bodily movement
was considered essential to the philosophical analysis of the
concept of action. Now, with sensitive devices able to respond to electrical
discharges in our motor-cortex, we can even move external objects using sheer
willpower. Intentions unaided by bodily movements are now sufficient to produce
external physical motions intended by the agent (see neuroprosthetics and BCIs).
However, lack of semantic awareness can become a reason for serious
intellectual confusion when philosophers try to explain what these terms mean. Philosophers are very often under
the pressure of some generalizing purpose extrinsic to that required by the
proper nature of their object of investigation. Consider theistic purposes in
the Middle Ages and scientistic purposes in our time, which can easily produce
startling but erroneous magnifications hinged on minor real findings.
Wittgenstein repeatedly expressed these metaphilosophical views throughout his
entire philosophical career. Here are some relevant quotations, in chronological
order, beginning with his Tractatus
Logico-Philosophicus and ending with his Philosophical Investigations:
Natural language is part of the
human organism and not less complicated than it. ... The conventions that are
implicit for the understanding of natural language are enormously complicated
(Wittgenstein 1984g, sec. 4.002).
Philosophers constantly see the method of science before their eyes, and
are irresistibly tempted to ask and answer questions the way science does. This
tendency is the real source of metaphysics, and leads the philosopher into
complete darkness. (1958: 24)
We can solve the problems not by giving new information, but by
arranging what we have always known. Philosophy is a battle against the
bewitchment of our intellect by language (Wittgenstein 1984c sec. 109).
The aspects of things that are most important for us are hidden because
of their simplicity and familiarity. (One is unable to notice something –
because it is always before one’s eyes.) The real foundations of his enquiry do
not strike a person at all. Unless that fact has at some time struck him. – And
this means: we fail to be struck by what, once seen, is most striking and most
powerful. (Wittgenstein 1984c, sec.129).
Contrary to empirical statements, rules of grammar describe how we use
words in order to both justify and criticize our particular utterances. But as
opposed to grammar book rules, they are not idealized as an external system to
be conformed to. Moreover, they are not appealed to explicitly in any
formulation, but are used in cases of philosophical perplexity to clarify where
language misleads us into false illusions … (A whole cloud of philosophy is
condensed into a drop of grammar.) (Wittgenstein
1984c, II xi).
Around the mid-twentieth century, a
number of analytical philosophers were in significant ways directly or
indirectly influenced by Wittgenstein. They thought clarification resulting
from the work of making explicit the tacit conventions that give meaning to our
natural language was a kind of revolutionary procedure: We should identify most
if not all philosophical problems with conceptual problems that could be solved
(or dissolved) by means of conceptual analysis.
Nevertheless, except for the acquisition of new formal analytical
instruments and a new pragmatic concern leading to more rigorous and systematic
attention to the subtleties of linguistic interaction, there was nothing truly
revolutionary in the philosophy of linguistic analysis and the critique of
language associated with it. Analysis of the meaning of philosophically
relevant terms as an attempt to describe the real structure of our thinking
about the world is no more than the resumption of a project centrally present
in the whole history of Occidental philosophy. Augustine wrote: ‘What, then, is time? If no one asks me, I know; if I wish
to explain it to him who asks, I know not.’ (Augustine, 2008, lib. XI,
Ch. XIV, sec. 17). In fact, we find the same concern already voiced by
Plato. If we examine questions posed in Plato’s Socratic dialogues, they all
have the form ‘What is X?’, where X takes the place of philosophically relevant
conceptual words like ‘temperance,’ ‘justice,’ ‘virtue,’ ‘love,’ ‘knowledge’…
What always follows are attempts to find a definition able to resist objections
and counterexamples. After some real progress, discussion usually ends in an
aporetic way due to merciless conceptual criticism. Philosophy based on
analysis of conceptual meaning has always been with us. It is the main
foundation of our philosophical tradition, even when it is hidden under its most
systematic and speculative forms.[11]
Finally, by defending the view that
philosophy’s main job is to analyze implicit conceptual knowledge I am not
claiming that philosophy in this case cannot be about the world, as some have
objected (Magee 1999, Ch. 23). Even if only through our conceptual network,
philosophy continues to be about the world, because the concepts analyzed by
philosophy are in one way or another about the world. Moreover, in a
systematic philosophical work central concepts of our understanding of the
world are analysed in their internal relations with other central concepts,
with the same result that philosophy is indirectly also about the world – about
the world as it is synthetically reflected by the central core of our
conceptual network.[12]
Indeed, even if the philosophical analysis of our conceptual structures does
not depend on empirical experience as such, empirical experience has already in
one way or another entered into the production and change of such conceptual
structures.
8. A very simple model of a
semantic-cognitive rule
We
urgently need to clarify the form of semantic-cognitive rules as it is meant
here. However, it is not very helpful if we begin by attempting to analyze
a conceptual rule constitutive of a philosophical concept-word. Not only are
these concept-words usually polysemic, but the structures of central
meaning-rules expressed by them are much more complex and harder to analyze and
and thus to elucidate or define. In any case, although philosophical
definitions can be extremely difficult to achieve, the skeptical conclusion
that they are impossible can well be too hasty.
To get a glimpse into a semantic-cognitive rule, I strategically begin
with a very trivial concept-word that can be used as a model, since its
logical grammar is correspondingly easier to grasp. Thus, I wish to scrutinize
here the standard nominal meaning of the concept-word ‘chair,’ using it
as a simple model that can illustrate my approach to investigating the much
more complicated philosophical concepts that shape our understanding of the
world. We all know the meaning of the word 'chair,' though it would not be
so easy to give a precise definition if someone asked for one. Now,
following Wittgenstein’s motto, according to which ‘the meaning of a word is
what the explanation of its meaning explains’ (1984g, sec. 32), I offer a
perfectly reasonable definition (explanation) of the meaning of the word
‘chair.’ You can even find something not far from it in the best
dictionaries. This definition expresses the characterizing ascription
rule of this concept-word, which is the following:
(C) Chair (Df.) = a moveable seat with a backrest, designed for use by only
one person at a time (it usually has
four legs, sometimes has armrests, is sometimes upholstered, etc.).[13]
In
this definition, the conditions stated outside of parentheses are necessary and together sufficient: a chair must be a seat with
a backrest designed for a single person. These criterial conditions form an essential (indispensable) condition,
also called the definitional or primary
criterion for the applicability of the concept-word, to use Wittgenstein’s
terminology. What follows in parentheses are complementary (dispensable) secondary criteria or symptoms: usually a chair has four legs,
often it has armrests, and sometimes it is upholstered. These indications can
be helpful in identifying chairs, even though they are irrelevant if the
definitional criterion isn’t satisfied. A chair need not have armrests, but
there cannot be a chair with armrests but no backrest (this would be a bench).
Thus, with (C) we have an expression of the conventional ascription rule for
the general term ‘chair,’ which should belong to the domain of what Frege calls
sense (Sinn).[14]
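The logical structure of (C) – necessary conditions that are jointly sufficient, supplemented by merely probabilistic symptoms – can be sketched in quasi-logical notation. This is only an illustration; the predicate letters are my own abbreviations, not part of the definition:

```latex
% Definitional (primary) criterion: each conjunct is necessary,
% and the conjunction of all of them is sufficient.
\mathrm{Chair}(x) \leftrightarrow
  \mathrm{Seat}(x) \wedge \mathrm{Moveable}(x) \wedge
  \mathrm{Backrest}(x) \wedge \mathrm{DesignedForOnePerson}(x)
% Symptoms (secondary criteria) such as FourLegs(x) or Armrests(x)
% raise the likelihood that Chair(x) holds, but they appear in no
% biconditional: they are neither necessary nor sufficient.
```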
I find it hard to oppose this definition.
Table-chairs, armchairs, easy chairs, rocking chairs, wheelchairs, beach
chairs, kneeling chairs, electric chairs, thrones… all conform to the
definition. Car, bus and airplane seats are not called 'chairs' because they
are made to be fixed in place and are thus not free to be
moved, though they are quasi-chairs.
It can be difficult to remove electric chairs and thrones from their places,
but it is not impossible. Moreover, we can always imagine borderline cases. There could be a seat
whose backrest is only 20 cm. high (is it a stool or a chair?), a chair with a
seat raised only 10 cm. above the floor (is it even a seat?), a chair whose
backrest was removed for some hours (did it become a backless chair or
provisionally a stool?). Suppose we find a tree trunk in a forest with the form
of a chair that, with some minor carving and painting, is now being used as a
chair (it was not manufactured as a chair, but minor changes turned it into
something we could call a real chair, depending on the relevance of the
changes). Nevertheless, our definition is still reasonable despite vague
borderline cases. Empirical concepts all have some degree of vagueness, and one
can even argue that vagueness is a metaphysical property of reality. Indeed, if
our definition of a chair had overly sharp boundaries, it would even be inadequate,
since it would not reflect the desired flexibility of application belonging
to our normal concept-word 'chair,' tending to diminish the extension of
the concept. An often overlooked point is that what really justifies a semantic-cognitive rule is its practical
applicability to common cases. That is, what really matters are cases to
which we can apply the ascription rule without much hesitation and not
those rare borderline cases where we cannot know if the ascription rule is
definitely applicable, since the rarity of these cases makes them irrelevant
from a practical point of view. Accordingly, the function of a concept-word is
far from being discredited by a few borderline cases where we are at a loss to
decide whether it is still applicable.
Furthermore, we need to distinguish real
chairs from ‘so-called chairs,’ because in such cases we are making an extended or even a metaphorical use of the word. A child’s toy chair, like a sculptured
chair, is a chair in an extended sense of the word. In Victor Hugo’s novel Toilers of the Sea, the main character
ends his life by sitting on a ‘chair of rock’ on the seashore, waiting to be
swept away by the tide... But it is clear from our definition that this use of
the word is metaphorical: a real chair must be made by someone, since it is an artifact; but the immoveable stone
chair was only a natural object accidentally shaped by erosion into the rough
form of a chair and then used as a chair.
There
are also cases that only seem to contradict the definition, but that on closer
examination do not. Consider the following two cases, already presented as
supposed counterexamples (Elbourne 2011, Ch. 1). The first is the case of a
possible world where some people are extremely obese and sedentary. They
require chairs that on the Earth would be wide enough to accommodate two or
three average persons. Are they benches? The relevant difference between a
bench and a chair is that chairs are artifacts made for only one person to sit
on, while benches are wide enough for more than one person to sit on at a time.
Hence, in this possible world what for us look like benches are in fact chairs,
since they are constructed for only one sitter at a time. If these chairs were
‘beamed’ over to our world, we would say that they remained chairs, since the
makers intended them to be chairs, even if we used them as
benches. The second counterexample is that of a social club with a rule that
only one person at a time can use each bench in its garden. In this case, we
would say they continue to be benches and not chairs, since they are still
artifacts designed for more than one person to sit on, even if they are now
limited to single sitters. Elbourne also asked if a chair must have four legs.
Surely, this would be a crude mistake, since according to our definition having
four legs isn't a defining feature: there are chairs with no legs, like some
armchairs, chairs with three legs, and we can imagine a chair with a thousand legs.
The property of having four legs is what we have called a symptom or a
secondary criterion of ‘chair-ness,’ only implying that a randomly chosen chair
will probably have four legs.
One can always imagine new and more
problematic cases that do not seem to fit the definition, but if we look at the
definition more carefully we discover that the difficulty is only apparent or
that these ‘exceptions’ are borderline cases or that they are extensions or
metaphors, or even that the definition indeed deserves some refinement, remembering
that refinement isn’t a change to something other.
Finally, the boundaries of what we call a
‘chair’ can also undergo changes from language to language and over time; in
French an armchair (easy chair) is called a ‘fauteuil’ in contrast to a
‘chaise’ (chair), though a French speaker would agree that it is a kind of
chair. I suspect that thousands of years ago, in most societies one could not
linguistically distinguish a stool from a chair, since a seat with a backrest
was a rare piece of furniture until some centuries ago.
9.
Criteria versus symptoms
To
make things clearer, it is worthwhile to broaden our consideration of
Wittgenstein’s distinction between criteria
and symptoms. A symptom or a secondary criterion is an entity E that – assuming it is really given –
only makes our cognitive awareness A
of E more or less probable. In contrast, a definitional or primary criterion is an entity E (usually appearing as a complex criterial configuration) that –
assuming it is really given – makes our cognitive awareness A of E
beyond reasonable doubt (Wittgenstein 1958: 24; 2001: 28).[15]
For instance, if we assume I can see four
chair legs under a table, this is a symptom of a chair, since it greatly
increases the probability that a chair is behind the table. But if we assume
that what is visually given to me is ‘a moveable seat with a backrest made for
only one person to sit on,’ this puts my cognitive awareness of a chair beyond
doubt. The definition (C) expresses a definitional
criterion, understood as such because
its assumed satisfaction leaves no possibility to doubt that we can apply the
ascription rule for the concept-word ‘chair.’
We cannot guarantee with absolute certainty
that entity E (criterion or symptom)
is ‘really given’ because I accept that the products of human experience are
inevitably fallible. Nonetheless, using this ‘assumed given-ness’ based on
experience and an adequate informational background, we can find a probability when a symptom is satisfied
and a practical certainty when a criterion is satisfied. In this last case, we
claim there is a probability so close to 1 that we can ignore the possibility
of error in the cognitive awareness A
that entity E is given.
(Correspondingly, one could also speak in this sense of a presumed necessity.)
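The contrast between symptoms and criteria can be put in a rough probabilistic form. This is a sketch of my own, not a formula found in Wittgenstein:

```latex
% S = a symptom is given; C = the definitional criterion is given;
% A = cognitive awareness that the entity E (e.g., a chair) is given.
P(A \mid S) > P(A)
  \quad\text{(a symptom merely raises the probability)}
P(A \mid C) \approx 1
  \quad\text{(a satisfied criterion yields practical certainty)}
```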
Symptoms or secondary criteria can help us
identify entity E using cognitive
awareness A, even if we cannot regard
E as necessary. However, symptoms are
of no use unless definitional criteria are also met. Four legs and armrests
that do not belong to a chair would never make a chair.[16]
Terms like ‘criteria’ and ‘symptoms,’ as much as ‘conditions’ have
so-called process-product ambiguity.
We can see them as (a) elements of the rule that identifies what is given, but
we can also see them as (b) something satisfying the rule that is really
given in the world. Our semantic-cognitive rules are also criterial rules, able,
with the help of imagination, to generate criterial configurations that belong to
them internally in sense (a). Hence, we could say that definition (C) is the
expression of a semantic-criterial rule with the form: 'If we accept that E is really given, we must conclude A,' where the conclusion A is our awareness with practical certainty that E is given.
One problem here is to know what this awareness means. My suggestion
will be that we can equate this cognitive awareness with our acceptance of the
existence and applicability of a network of external inferential relations once
a semantic-cognitive rule is satisfied. The concept of chair, for instance,
consists of internal relations expressed by a definitional rule (C). But our
awareness of the application of this concept arises as a maze of external
relations resulting from the satisfaction of (C). For example, if I am aware
that a chair exists, I can infer that it has a particular location, that I can
sit on it or ask someone to sit on it, that I could possibly damage it, borrow
it, loan it, etc. I can do this even if I have no real consciousness of the
structure of the rule I applied to identify the chair.
10. Challenges to the traditional
explanation (i): John McDowell
Supporters of semantic externalism
have challenged the idea that the meanings of expressions consist in our
implicit knowledge of their constitutive rules or conventions. According to
their view, the meanings of expressions are predominantly related to physical
and social-behavioral worlds, depending in this way only on objects of
reference and supposedly also on neurobiological processes involving autonomous
causal mechanisms. In this context, there is little room for discussing the
conventionality of meaning.
As evidence for the externalist view, we can adduce our lack of
awareness of the structure of semantic rules determining the linguistic uses of
our words. If we lack awareness of senses or meanings, it might be that they
could, as meanings, be instantiated to a greater or lesser extent in a
non-psychological domain. If this is so, in principle cognitive (also called
pre-cognitive) participation in meaning could be unnecessary. Meaning could
result solely from autonomous causal mechanisms, not recoverable by
consciousness. In opposition to Michael Dummett’s ‘rich’ view of implicit
meaning, John McDowell illustrated the externalist position on the referential
mechanism of proper names, observing that:
We can have the ability to say that
a seen object is the owner of a familiar name without having any idea of how we
recognize it. The assumed mechanisms of recognizing can be neural machinery
[and not psychological machinery] – and its operations totally unknown to
whoever possesses them. (McDowell 2001: 178)[17]
Some pages later, McDowell
(following Kripke) asserts that the referential function of proper names would
not be explained by conventionally based implicit identification rules that can
be descriptively recovered, because:
The opinions of speakers on their
divergent evidential susceptibilities regarding names are products of
self-observation, as much as this is accessible, from an external point of view. They are not intimations coming from the
interior, from a normative theory implicitly known, a recipe for the correct
discourse which guides the behaviour of the competent linguist. (McDowell 2001:
190)
This view is in direct opposition to
the one I defend in this book, not because it can never be justified, but
because it isn’t the standard case. In what follows, I intend to show
that usually the implicit application of internal semantic-cognitive rules
based on criteria is indispensable for the referential function. Moreover, we
have already begun to see that to have reference, a usually tacit and
unconscious cognitive element must be associated with our expressions and
should be instantiated at least in some measure and at some moment in the language
user’s head. In no case is this clearer than with McDowell’s main focus:
proper names (see my Appendix to Chapter I).
Here is how we could argue against McDowell’s view. If he were correct,
an opinion about the given criterial evidence for the application of a proper
name found through external observation of our referring behavior should be
gradually reinforced by the cumulative consideration of new examples, that is, inductively. Even repetition of the same
example would be inductively reinforcing! However, this is far from the
case. Consider our characterizing semantic-cognitive rule (C) for applying the
concept-word ‘chair.’ We can see from the start that (C) seems correct. We naturally tend
to agree with (C), even if we have never considered any examples of the word’s
application. And this shows that speakers are indeed only confirming a recipe
for correct application that comes from inside, as a matter of tacit agreement
among speakers… Admittedly, after we hear this definition, we can test
it. Thus, we can imagine a chair without a backrest but see that it is really a
stool, which isn’t properly a chair. If we try to imagine a chair designed so
that more than one person can sit on it, we will conclude that we should call
it a sofa or a garden bench... We can understand supposed counterexamples only
as means to confirm and possibly correct or improve the definition, thereby
discovering its extensional adequacy in a non-inductive way. This specification
of meaning seems to be simply a contemporary formulation of something Plato
identified as reminiscence (anamnesis): the recalling to mind of his
ideas. We do not need to go beyond this, imagining all sorts of chairs (rocking
chairs, armchairs, wheelchairs…) in order to reinforce our belief in the basic
correctness of our intuitive definition.
Now consider the same issue from McDowell’s perspective. Suppose he were
right and our knowledge of the meaning of a common name like ‘chair’ were the
result of self-observation from an external
viewpoint. We could surely acquire more certainty that chairs are seats with
backrests made for one person to sit on by observing the similarities among
real chairs that we can see, remember or imagine. Inductively, the results
would then be increasingly reinforced, possibly by agreement among observers
about an increasing number of examples. As we already noted, even examples of
people reaching shared agreement by singling out thousands of identical
classroom chairs would not enable us to increase our conviction that we have
the factually true evidential conditions for applying the concept-word ‘chair.’
Moreover, it is clear that one does not need much reflection to recognize
the absurdity of the idea that definition (C) captures a neuronal mechanism rather
than resulting from an implicit shared agreement. Furthermore, I am sorry to say, the
explanation of the implicitly conventional identification rule for the proper
name Aristotle investigated in the Appendix of the last chapter is sufficient
to make this whole discussion idle.
We conclude, therefore, that the ascription rule made explicit in
definition (C) does in fact have the function of rescuing for consciousness the
tacit convention governing the referential use of the word ‘chair’ (as with our
earlier definition of Aristotle). It seems from the start intuitive and may
only require the help of confirmatory, corrective and improving examples. And
what is true for a general term should presumably also be true for other
expressions, as we already saw regarding proper names.
Indeed,
if all we have in these cases is a shared convention, then a psychological
element needs to be involved, even if only in an implicit way, constituting
what could be called a non-reflexive cognitive application of the
rule. Definition (C) makes explicit a convention normally instantiated in
our heads as an (implicit) non-reflexive application, whenever we make
conscious use of the word ‘chair,’ which only confirms the traditional standard
explanation.
11. Challenges to the traditional
explanation (ii): Gareth Evans
There is another argument against
the claim that we have tacit cognitive access to semantic conventions that
govern our use of expressions. This argument comes from the philosopher
Gareth Evans, who directly influenced McDowell. Evans invites us to contrast a
person’s belief that a substance is poisonous with a mouse’s disposition not to
consume it. In the case of a human being, it is a genuine belief involving
propositional knowledge; in the case of a mouse, it is a simple instinctive disposition
to react in a certain way to a certain smell, not a true belief. Proof of the
difference is the fact that:
It is of the essence of a belief
state that it be at the service of many distinct projects, and that its
influence on any project is mediated by other beliefs. (Evans 1985: 337).
If someone believes a certain
substance is poisonous, he can do many different things based on that belief.
He can test his belief by feeding the substance to a mouse, or, if he is depressed, he can
try to commit suicide by swallowing a dose. He can also relate his belief that
the substance is poisonous to a variety of other beliefs. For instance, he
might believe he will become immune to a poison by consuming small amounts
every day, gradually increasing the dose... As our knowledge of semantic rules
is not susceptible to such inferences, thinks Evans, it consists not of actual
belief states, but rather of isolated
states, not very different from those
of the mouse. Therefore, they are not cognitive (or pre-cognitive) psychological
states in a proper sense of the word. (Evans 1985: 339)
The characterization of belief proposed by Evans is really interesting
and in my view correct, but his conclusion does not follow. Certainly, it
agrees with many of our theories of consciousness, according to which a belief
is only conscious if it isn’t insular, while an unconscious belief is insular –
though there are degrees of insularity. But the crucial point is that Evans’
argument blinds us to the vast gulf between our semantic uses of language and
the mouse’s behavioral disposition to avoid consuming poison.
As a weak but already useful analogy, consider our knowledge of simple
English grammar rules. A child can learn to apply these rules correctly without
any awareness of doing so; and some adults who have never learned formal
grammar are still able to apply these rules correctly to many different words
in many different contexts. Moreover, even if our knowledge of these grammar
rules is very often pre-conscious, with sufficiently careful examination we can
often bring them to consciousness.
The problem becomes still clearer when we consider our simple example of
an implicit semantic-cognitive rule, the criterial rule (C) for the application
of the concept-word ‘chair’ to the identification of chairs. Certainly, a
person can derive many conclusions from this rule. He can predict that normally
five persons cannot sit side-by-side on a single chair. He knows that one can
transform a chair into a stool simply by cutting off its backrest. He can
estimate its price and decide whether he would like to buy a similar chair. He knows that
by standing on a chair, he can reach an overhead ceiling lamp… He knows all
this and much more, even without having ever consciously considered definition
(C). And this only means that we can have a belief state enabling us to
identify chairs, putting it at the service of many different projects mediated
by other beliefs without being explicitly aware of the involved meaning-rule
(C).
We can see a continuum, beginning with more primitive and instinctively
determined dispositions and ending with semantic-cognitive rules of our
language and their effects. It includes dispositions like those of mice, which
cannot be cognitive, because they are instinctive (it is utterly
implausible to think that a mouse could be reflexively conscious). There are
also more sophisticated ones, like our unconscious beliefs, thoughts and
cognitions, which we can consciously scan and reflexively access (presumably
through meta-cognitive processes).
If we accept the view that our semantic rules are usually conventional
rules exemplified in the simplest cases by models like (C), then we must reject
the radicalism of positions such as those of Evans and McDowell. After all, the
application of such rules allows us to make many different inferences and
relate them to many other conceptual rules. Rule (C) has greater proximity to
the rules of English grammar than to the innate dispositional regularities
demonstrated by a mouse that instinctively avoids foods with certain odors.
Moreover, it is clear that in such cases, unlike the mouse, people always
have inferences to other beliefs available. This can be so even if
we admit that our semantic-cognitive rules do not in themselves possess the
widest availability proper to the completely conscious belief states
considered by Evans.[18]
The root of the confusion is that the semantic rules in question,
because of their apparent triviality, have not yet been investigated in a
sufficiently systematic way. In an academic world dominated by science, the
procedure that leads to their discovery does not seem worthy of careful investigation.
Nevertheless, to proceed more systematically in this seemingly trivial
direction is in fact philosophically invaluable, and this is what I will do in
the remainder of this book.
12. Non-reflexive semantic
cognitions
I believe contemporary theories of
consciousness support the traditional view according to which we have implicit
knowledge of our meaning-rules. I will begin by appealing to reflexive theories of consciousness. But
first, what are these theories?
In the philosophical tradition, the idea of reflexive consciousness was
already suggested by John Locke with his theory of internal sense (Locke
1690, book II, Ch. 1, §19). Reflexive theories of consciousness were introduced
to the contemporary discussion by D. M. Armstrong (Armstrong 1981: 55-67; 1999:
111 f.). We can summarize Armstrong’s view as saying there are at least two
central meanings of the word ‘consciousness.’ The first is what he calls perceptual
consciousness, which consists in the
organism being awake, perceiving objects around it and its own body. This
is the simplest sense of consciousness. John Searle wrote that consciousness
consists of those subjective states of sentience or awareness that begin when
one wakes up in the morning after deep, dreamless sleep and continue throughout
the day until one falls asleep at night, or lapses into a coma, or even dies
(Searle 2002: 7). By this he meant chiefly perceptual consciousness. This is
also a very wide and consequently not so distinctive sense of consciousness, since
less developed species also have it. For instance, we can say that a hamster
sedated with ether loses consciousness, because it ceases to perceive itself
and the world around it. It seems justified to assume that when a hamster is
awake, it has some primitive form of cognition of the world around it, as shown
by its behavior. However, the breadth of this extension only suggests the
irrelevance of perceptual consciousness for our purposes. We are aware of the
world not merely in the way a hamster seems to be conscious of it, but also in
a much more demanding, more human sense of the word. Certainly, a mouse perceives a cat,
but it is unlikely to know it is facing its archenemy. This also holds for
internal feelings. A snake may be able to feel anger; but we hardly believe a
snake is aware of this anger, since it certainly has no reflexive
consciousness.
Now, what distinguishes a mouse’s perceptual awareness and a snake’s
anger from our own conscious awareness of things around us and from our own
feelings of anger? The answer is given by a second sense of the word
‘consciousness’ which Armstrong considers the truly important one. This is what
he termed introspective consciousness and that I prefer (following
Locke) to call reflexive consciousness: This is a form of consciousness
that we can define as reflexive awareness
of our own mental states.
According to one of Armstrong’s most interesting hypotheses, reflexive
consciousness emerges from the evolutionary need of more complex systems to
gain control of their own mental
processes by means of higher-order mental processing. In other words: our
first-order mental events, like sensations, feelings, desires, thoughts, and
even our perceptual consciousness of the world around us, can become objects of
simultaneous introspections with similar content (D. M. Rosenthal called these
meta-cognitions higher-order thoughts[19]).
According to this view, only when we achieve reflexive consciousness of
a perceptual state can we say that this state ‘becomes conscious’ in the strong
sense of the word. So, when we say in ordinary speech that a sensation, a
perception, a sentiment or a thought that we have ‘is conscious,’ what we mean
is that we have what could be called a meta-cognition
of it. This shows that Armstrong’s perceptual consciousness is actually a kind
of unconscious awareness, while reflexive consciousness – the true form of
consciousness – is probably a faculty possessed only by humans and a few higher
primates such as orangutans.[20]
Now, let us apply this view to our tacit knowledge of semantic-cognitive
rules. It is easy to suggest that we usually apply these rules without having a
meta-cognitive consciousness of them and therefore without making ourselves
able to consciously scrutinize their structure. In other words, we apply these
rules to their objects cognitively[21], and these rules are
‘cognitive’ because they generate awareness of the objects of their
application. But in themselves these rules usually remain unknown, belonging to
what I above called unconscious awareness. Hence, it seems that we need to
resort to some kind of meta-cognitive scrutiny of our semantic-cognitive rules
in order to gain conscious awareness of their content.
One objection to using this kind of theory to elucidate tacit knowledge
of our rules is that there are a number of interesting first-order theories of
consciousness that do not appeal to the requirement of higher-order cognition.
In my view, we can classify most, if not all, of these apparently competing
theories as integrationist theories of consciousness. We can do this,
because they share the idea that consciousness of a mental state depends on its
degree of integration with other mental states constituting the system. This is
certainly the case of Daniel Dennett’s theory, according to which consciousness
is ‘brain celebrity’: the propagation of ephemerally fixed contents influencing
the whole system (Dennett 1993, Ch. 5). This is also the case with Ned Block’s
view, according to which consciousness is the availability of a mental state
for use in reasoning and directing action (Block 1995: 227-47). This is
likewise the case of Bernard Baars’ theory of consciousness as the transmission
of content in the spotlight of attention to the global workspace of the mind
(Baars 1997). And it is also the obvious case of Giulio Tononi’s theory,
according to which consciousness arises from the brain’s incredible capacity to
integrate information (Tononi 2004: 5-42). These are only some well-known
contemporary first-order theories of consciousness that are historically
consonant with Kant’s view. According to him, to be consciously recognized, a
mental state must be able to be unified (integrated) into a single Self.
From the perspective of such integrationist theories, an unconscious mental
state would be one that remains to a greater or lesser extent dissociated
from other mental states. And all these views seem to possess a degree of
reasonability.
The objection, therefore, would be that I am trying to explain implicit
knowledge of language by relying solely on meta-cognitive theories of
consciousness, ignoring all others. However, I believe there is more than one
way around this objection. My preferred way is the following: we have no
good reason to think integrationist and reflexive views of consciousness are
incompatible. After all, it makes sense to think that a mental state’s
property of being the object of meta-cognition also seems to be a condition –
perhaps even a necessary one – for the first-order mental state to be
more widely available and more easily integrated with other elements
constituting the system. As Robert Van Gulick wrote in the conclusion of his
article on consciousness:
There is unlikely
to be any theoretical perspective that suffices for explaining all the features
of consciousness that we wish to understand. Thus a synthetic and pluralistic
approach may provide the best road to future progress. (Stanford Encyclopedia of Philosophy 2014)
Indeed, we can reinforce our
suspicion by reconsidering a well-known metaphor developed by Baars: A
conscious state of a mind is like an actor on stage who becomes visible and
therefore influential for the whole system because he is illuminated by the spotlight
of attention. However, it seems reasonable to think that this could happen only
because some sort of searchlight of the will added to some sort of meta-cognitive
mental state provides the light for this spotlight. Hence, one could
easily argue that the first-order mental state is accessible to the rest of the
system and hence conscious due to its privileged selection by some kind of supposedly
metacognitive state of attention.
My conclusion is that our awareness of semantic-cognitive rules and the
possibility of scrutinizing them meta-cognitively is able to resist integrationist
theories, since they all leave room for conscious processes able to
be scrutinized by means of reflexive attention. Consequently, assuming some
kind of reflexive plus integrationist
view, the plausible conclusion remains that we can have some kind of cognitive
states that make us conscious of their objects even if they are not in
themselves objects of consciousness. Thus,
it seems plausible that only if we submit first-order processes to (reflexive,
meta-cognitive) scrutiny of attention can we subject them to conscious
analysis. And most of our semantic-cognitive rules belong to such cases.
It seems to me that this assumption could explain why we can have
unconscious or implicit tacit cognitions when we consciously follow
semantic-cognitive rules without being cognitively aware of the content of
these rules and consequently without being able to analyze them. They remain
implicit because we rarely pay attention to these rules when we apply them and
because even when this occurs, they are not there as objects of reflexive
cognition. These rules are there, using a well-worn metaphor, like
spectacles. When seeing things through them, we are normally unaware of the
lenses and their frame. Assuming these views, we conclude that we can
distinguish two forms of cognition:
(i) Non-reflexive cognition: This is the case
with cognitions that are not conscious, because they are not accessed by a
higher-order cognitive process and/or focused on by inner attention, etc.
(e.g., my perceptual consciousness when I use rule (C) in identifying a chair.)
(ii) Reflexive cognition: This is the case of cognition accessed by a higher-order cognitive
process and/or focused on by inner attention, etc., being for this reason able
to be the object of conscious access and reflexive scrutiny. Any mental states,
sensations, emotions, perceptions, and thoughts can be called reflexive if
they are accompanied by a higher-order cognition and/or focused
on by inner attention. (This is a prior condition needed for the kind of
reflexive scrutiny that can make us aware of the semantic-cognitive rule (C)
for the identification of a chair as requiring a seat with a backrest, designed
for use by only one person at a time.)
Once in possession of this
distinction, we can better understand the implicit or tacit status of the cognitive meanings or
contents or semantic rules present in uses we make of expressions. When we say
that the structures of semantic-cognitive rules determining the references of
our expressions are often implicit (as in the case of the semantic rules
defining the words ‘chair’ or ‘Aristotle’), we are not assuming that they are properly
pre-cognitive or definitely non-cognitive, lacking any mentality. Nor that they
are completely isolated or dissociated from any other mental states (in the
last case, we would lack even the ability to choose when to apply them). What
we mean is just that the psychological instantiations of these conventional
rules are of a non-reflexive type. That is, although consciously used (we
know we are using them), they are not likely to be the subject of some form
of reflexive cognitive attention. Moreover, as already noted,
there is a reason for this: the structures of these rules are not the
focus of our attention when we use the corresponding concept-word in an
utterance, because our real concern is much more practical,
consisting primarily in the cognitive effects of applying these rules.
As an obvious example: if I say, ‘Please, bring me a chair,’ I don’t
need to explain this by saying, ‘Please, bring me a moveable seat with a
backrest, made to be used by only one person at a time.’ This would be
discursively obnoxious and pragmatically counterproductive: it would be almost
impossible to communicate efficiently if we had to spell out (or even think of)
all such details each time we applied semantic-cognitive rules. What interests
us is not the tool, but its application – in this case, to inform my hearer
that I would like him to bring me a chair. In linguistic praxis, meaning isn’t
there to be scrutinized, but instead to be put to work.
A consequence of this view is that in principle our inner attention must
be able to focus on non-reflexive semantic-cognitive rules involved in normal
uses of words and scrutinize them meta-cognitively by considering examples of
their application or lack of application. Taking into consideration the
variable functions and complexity of our semantic-cognitive rules enables the
philosopher to decompose them analytically into more or less precise
characterizations. It seems it is by this mechanism, mainly helped by examples,
counterexamples, comparisons and reasoning that we become aware of the
conceptual structure of our philosophically relevant expressions.
13. Conclusion
Summarizing this chapter, we can say
that we have found two main methodological devices: (A) the primacy of established knowledge and (B) the method of philosophizing
by examples. We will use them as guides in this book’s analyses.
Particularly relevant in this context is the idea that we can still see
philosophy as an analytical search for non-reductive surveyable representations
of our natural language’s central meaning-rules. It is almost surprising to
verify that more than two thousand years after Plato we still have reason to
accept the view that solving some of our most intriguing philosophical problems
would require deeper and better analyzed explanations of what some central
common words truly mean.
[1] See, for instance, the
justification of the external world summarized in Ch. VI, sec. 28.
[2] This is a statement like that
by Heraclitus of Ephesus, who noted that, ‘The sun is the width of a human
foot.’ We need only lie on the ground and hold up a foot against the sun to see
that this is true.
[3] I am unable to find real
exceptions. Under normal circumstances fire has always burned. Some say that
the idea that trees draw energy from the earth was once a commonsense truth
until photosynthesis was discovered… But this idea wasn’t a very basic or
modest commonsense truth, since it could easily be refuted by the well-known
fact that trees do not grow in complete darkness. The idea that a new sun crosses the sky each new day is
surely absurd – but is it a commonsense idea? In fact it was suggested by a
philosopher, Heraclitus, going beyond the humble intentions of modest common
sense. Modest, humble common sense is not interested in answering such
questions, which have no relationship to ordinary life concerns.
[4] Roberto DaMatta, in an
interview. (A more forceful example is the obstinate rejection of any kind of
theism of the Pirahã tribe in the Amazon rainforest studied by Daniel L.
Everett).
[5] It was certainly much easier
to believe in the existence of a personal God and an eternal soul independent
of the body a thousand years ago, before the steady accumulation of conflicting
knowledge discovered by the natural and human sciences.
[6] The expression ‘descriptive
metaphysics’ was introduced by P. F. Strawson in contrast to ‘revisionary
metaphysics.’ It aims to describe the most general features of our actual
conceptual schema, while revisionary metaphysics attempts to provide new schema
to understand the world. Strawson, Aristotle and Kant developed descriptive
metaphysics, while Leibniz and Berkeley developed revisionary metaphysics
(Strawson 1991: 9-10).
[7] As these interpreters wrote:
‘Wittgenstein’s objection to “theorizing” in philosophy is an objection to
assimilating philosophy, whether in method or product, to a theoretical
(super-physical) science. But if thoroughgoing refutation of idealism,
solipsism or behaviorism involves a theoretical endeavor, Wittgenstein engages
in it.’ (Baker & Hacker 1980: 489). Anthony Kenny (1986) preferred to think
that Wittgenstein actually held two competing views on the nature of philosophy
– therapeutic and theoretical. But the unified interpretation proposed here
seems more charitable.
[8] As he writes, ‘We have now a
theory, a “dynamic” theory (Freud speaks of a “dynamic” theory of dreams) of
the sentence, of the language, but it appears to us not as a theory.’ (Zettel 1983b: 444).
[9] Well aware of this, Karl Popper famously called
the statement ‘All swans are white’ a theory,
adding that this theory was falsified by the discovery of black swans in
Australia…
[10] Paul Grice’s sophisticated and ingenious work
contains an influential (albeit qualified) criticism of ordinary language
philosophy as practiced by Ryle, Austin and Strawson (1989, Chs. 1, 2, 10,
15, 17). According to him, these philosophers often confused ordinary uses of
statements resulting from conversational implicatures with their literal
meaning. When implicature failed, they mistakenly concluded that these
statements had no meaning. This would be the case of statements like ‘This flag
looks like red’ (supposedly understood by Austin as showing that sense-data do
not exist because this statement is devoid of sense), ‘The present King of
France is wise’ (understood by Strawson as a statement without truth-value) and
‘If green is yellow then 2 + 2 = 5’ (understood by him as showing the queer
character of material implication). I agree with Grice’s rejection of all these
ordinary language philosophers’ conclusions, although I remain suspicious
regarding Grice’s own explanations. Material implication, for instance, seems
to belong to our practice of truth-functional reasoning, which makes explicit a
basic general layer subsumed under our more informative factual language. In
this sense, it also provides wide intermediate connections. Anyway, under
critical scrutiny, I think that natural language intuitions still
provide a valuable guide – a point with which Grice would certainly agree.
[11] Philosophers like Berkeley, Leibniz, Hegel, even
Heidegger, can be seen as doing revisionary conceptual analysis, refuting and replacing ambitious forms of common sense.
[12] Rudolf Carnap’s formal mode of speech (1937, part 5, sec. A, § 79) instead of material mode of speech, as much as W. V. O.
Quine’s broader semantic ascent
(1960, Ch. 7, § 56) point to this same fact, namely, that conceptual analysis
is also about the world.
[13] If you wish to avoid the word
‘seat’, you can also define a chair as ‘a moveable piece of furniture with a
raised surface and a backrest, made for only one person at a time to sit on.’
[14] As will be frequently
recalled, I do not deny that referential meanings include things that cannot be
easily captured by descriptive conventions, unlike case (C) – things like
perceptual images, memory-images, feelings, smells. However, they
belong much more to the semantic level called by Frege illuminations (Beleuchtungen),
based on natural regularities more
than on conventions.
[15] The correct interpretation of
this distinction is a controversial issue that does not concern us here; I give
what seems to me the most plausible, useful version.
[16] At first view it seems
that these logico-conceptual remarks appeal to old-fashioned semantic definitions, leading us to the rejection
of findings of modern empirical psychology (cf. E. Margolis & S. Laurence, 1999, Ch. 1). But this is
only appearance. Consider Eleanor Rosch’s results. She has shown that we are
able to categorize under a concept-word much more easily and quickly by
appealing to prototypical cases (cf. Rosch, 1999: 189-206). For example, we can more easily recognize
a sparrow as a bird than an ostrich or a penguin. In the same way, an ordinary
chair with four legs can be recognized as a chair more easily and quickly than
can a wheelchair or a throne. However, this does not conflict
with our definition, since for us the psychological mechanism of recognition
responsible for the performance is not in question, but rather the leading
structure subjacent to it. We can often appeal to symptoms as the most usual
ways to identify things. For instance, we identify human beings first by their
faces and penguins first by their appearance, even if human faces and a
penguin’s appearance are only symptoms of what will be confirmed by expected
behavior, memories, genetic makeup, etc. Hence, the ultimate criterion remains
dependent on a definition. (In one wildlife film counterfeit penguins outfitted
with cameras deceived real penguins. The trouble with these moronic birds is
that they are overly dependent on innate, instinctive principles of
categorization.)
[17] The expression in
brackets appears in the author’s footnote on this passage. In Dummett’s more
orthodox position, McDowell sees a relapse into the psychologism justifiably
rejected by Frege.
[18] Freud distinguished
(i) unconscious representation able to associate itself with others in
processes of unconscious thought from (ii) unconscious representation that
remains truly isolated, unassociated with other representations, which for him
would only occur in psychotic states and whose repression mechanism he called exclusion
(Verwerfung). Evans treats the relative insularity of our non-reflexive
awareness of semantic rules in a way that suggests Freud’s concept of
exclusion.
[19] Cf. Rosenthal 2005. In this summary, I will ignore the dispute
between theories of higher-order perception (Armstrong, Lycan) and higher-order
thought (Rosenthal), and still others. In my view, David Rosenthal is right in
noting that Armstrong’s perceptual ‘introspectionist’ model suggests the
treatment of cognitions of a higher-order as if they contained qualia,
and that it is implausible that higher-order processes have phenomenal
qualities. Armstrong, on his side, seems to be right in assigning a causal
controlling role to higher-order experience, since for him consciousness
arises from the evolutionary necessity of maintaining unified control over more
complex mental systems. Aside from that, although Armstrong doesn’t use the
word ‘thought’, he would probably agree that there is some kind of higher-order
cognitive element in the
introspection of first-order mental states, an element that interests us here.
I prefer the term meta-cognition for
these higher-order cognitions, since I believe that not only Rosenthal, but
also Armstrong would agree that we are dealing with a cognitive phenomenon.
(For initial discussion, see Block, N. O. Flanagan, G. Güzeldere (eds.)
1997, part X.)
[20] I will pass over
the traditional idea that of themselves first-order mental states automatically
generate meta-cognitions. This view would make it impossible to have perceptual
consciousness without introspective consciousness. However, this view not only
seems to lack a convincing intuitive basis; it also makes the existence of
unconscious thoughts incomprehensible.
[21] Some use the term
‘pre-cognitive’ for what is implicitly known. I use the word ‘cognitive’ in a
broader sense, including what is implicitly or pre-consciously known.