Saturday, March 11, 2017


Draft from a chapter of the book Philosophical Semantics to be published by Cambridge Scholars Publishing in 2017.

- II -

Eine Art zu philosophieren steht nicht neben anderen wie eine Art zu Tanzen neben anderen Tanzarten ... Die Tanzarten schließen sich nicht gegenseitig aus oder ein … Aber man kann nicht ernsthaft auf eine Art philosophieren, ohne die anderen verworfen oder aber einbezogen zu haben. In der Philosophie geht es demgegenüber wie in jeder Wissenschaft um Wahrheit.

[A way of philosophizing is not one way among others, like one way of dancing among others … Ways of dancing are not mutually exclusive or inclusive … But no one can seriously philosophize in one way without having dismissed or incorporated others. In philosophy as in every science, the concern is with truth.]
Ernst Tugendhat

Philosophy has no other roots but the principles of Common Sense; it grows out of them, and draws its nourishment from them. Severed from this root, its honours wither, its sap is dried up, it dies and rots.
Thomas Reid

Given the commonsense assumptions involved when we take the social role of language as a starting point, at least part of this book must be critical. The reason is clear. The new orthodoxy that dominates much of contemporary philosophy of language is dedicated to constructing what we could call a metaphysics of reference. Its views often center on reference more than on meaning, or something like reference-as-meaning, displaying a strong version of semantic externalism, hypostatized causalism and anti-cognitivism. I call these views metaphysical not only because they oppose common sense, but mainly because, as will be shown, they arise from sophisticated attempts to unduly ‘transcend’ the limits of what can be meaningfully said.
   One example of the metaphysics of reference is the position of philosophers like Saul Kripke, Keith Donnellan and others on how to explain the referential function of proper names and natural kind terms. According to them, it is not our cognitive access to the world but rather the mere appeal to external causal chains beginning with acts of baptism that really matters. What we may have in mind when using a proper name is for them secondary and contingent. Another example is the strong externalist view of Hilary Putnam, John McDowell, Tyler Burge and others, according to whom the meaning of an expression, its understanding, thought, and even our own minds (!) belong to the external (physical, social) world. It is as if they were floating outside, determined by the entities referred to by words, in a way that recalls Plotinus’ emanation, this time not from the ‘One’, but in some naturalistic fashion, from the ‘Many.’[1] A third example is the view accepted by David Kaplan, John Perry, Nathan Salmon and others, according to whom many of our statements have as their proper semantic contents structured propositions, whose constituents (things, properties, relations) belong to the external world alone, as if the external world had any proper meaning beyond the meaning we give to it. As a final case – which I examine in the present chapter – we can take the views of John McDowell and Gareth Evans. According to them, we cannot sum up most of the semantics of our language in tacit conventional rules that can be made reflexively explicit, as has been traditionally assumed. Consistent with causal externalism, their semantics takes the form of things that can be understood chiefly in the third person, depending only on neuronal machinery, such as linguistic dispositions that cannot become objects of reflexive consciousness.
   Notwithstanding the fact that all such ideas are contrary to the semantic intuition of any reasonable person who has not been philosophically indoctrinated, they have become the mainstream understanding of specialists. Today many theorists still view them as ‘solid’ results of philosophical inquiry, rather than crystallized products of ambitious formalist reductionism averse to cognitivism. Most of these theorists have in the meantime retreated rhetorically from their radical views, though still holding them in more vague, abstract terms. If taken too seriously, such ideas can both stimulate the imagination of intellectually immature minds and, more seriously, block the ways of inquiry.
   In the course of this book, I intend to prove at least partially that the metaphysics of reference has not found the ultimate truth of the matter. This is not the same, I must note, as rejecting the originality and philosophical relevance of its main arguments. If I did reject them on this ground, there would be no point in discussing them here. Such philosophical arguments might be of interest even if they were in the end flawed. If so, they would ultimately require not additional support, but careful critical analysis. In the process of disproving them, we could develop views with greater explanatory power, since philosophical progress is typically dialectical. For this reason, we should value the best arguments of the metaphysics of reference in the same critical way we value McTaggart’s argument against the reality of time or Berkeley’s amazing arguments against materialism. Consider Hume’s impressive skeptical arguments to show there is nothing in the world except flocks of ideas, an absurd conclusion that was first countered by Thomas Reid. What all these arguments surely did, even if we are unable to agree with them, was to expose insufficiently analysed aspects of our conceptual structures, presenting in this way real challenges to philosophical investigation. Indeed, without the imaginative and bold revisionism of the metaphysicians of reference, without the challenges and problems they presented, it is improbable that corresponding competing views would ever acquire enough intellectual fuel to take wing.

1. Common sense and meaning
To contend with the metaphysics of reference, some artillery pieces are essential. They are methodological in character. The first concerns the decision to take seriously the somewhat forgotten fundamental principles of common sense and of ordinary language philosophy, assumed respectively by analytic philosophers like G. E. Moore and J. L. Austin. On this approach, we should seek the starting point of a philosophical investigation in pre-philosophical commonsense intuitions reflected in ordinary language. The link between common sense and ordinary language is easy to understand. We should expect that commonsense intuitions – often due to millennia of cultural sedimentation – will come to be mirrored in our linguistic forms and practices.
   As Noah Lemos wrote, we can characterize commonsense knowledge as:

...a set of truths that we know fairly well, that have been held at all times and by almost everyone, that do not seem to be outweighed by philosophical theories asserting their falsity, and that can be taken as data for assessing philosophical theories (2004: 5).

Indeed, commonsense truths seem to have always reconfirmed themselves, often reminding us of something close to species wisdom. Examples of commonsense statements are: ‘Black isn’t white,’ ‘Fire burns,’ ‘Material things exist,’ ‘The past existed,’ ‘I am a human being,’ ‘I have feelings,’ ‘Other people exist,’ ‘The Earth has existed for many years,’ ‘I have never been very far from the Earth,’… (e.g., Moore 1959: 32-45). Philosophers have treasured some of these commonsense statements as particularly worthy of careful analytical scrutiny. These include: ‘A thing is itself’ (principle of identity). ‘The same thought cannot be both true and false’ (principle of non-contradiction). ‘I exist as a thinking being’ (version of the cogito). ‘The external world is real’ (expressing a realist position on the external world’s existence). And even ‘A thought is true if it agrees with reality’ (correspondence theory of truth).
   An influential objection to the validity of commonsense principles is that they are not absolutely certain. Clearly, a statement like ‘Fire burns’ isn’t beyond any possibility of falsification. Moreover, science has falsified many commonsense beliefs. Einstein’s relativity theory decisively refuted the commonsense belief that the mass of a body does not change with its velocity. But there was a time when people regarded this belief as a self-evident truth.
   This last kind of objection is particularly important in our context, because metaphysicians of reference have made this point to justify philosophy of language theories that contradict common sense. Just as in modern physics new theories often conflict with common sense, they feel emboldened to advance a new philosophy whose conclusions depart radically from common sense and ordinary language. As Hilary Putnam wrote to justify the strangeness of his externalist theory of meaning:

Indeed, the upshot of our discussion will be that meanings don’t exist in quite the way we tend to think they do. But electrons don’t exist in quite the way Bohr thought they did, either. (Putnam 1978: 216)

One answer to this kind of comparison emphasizes the striking differences between philosophy of meaning and physics: the way we obtain meaning is much more direct than the way we discover the nature of subatomic particles. Our access to meanings depends on our shared semantic conventions. We make meanings; we don’t make electrons. We find subatomic particles by empirical research; we don’t find meanings: we establish them.

2. Critical common-sensism
Nonetheless, a key question remains unanswered: how indisputable can our commonsense intuitions be? C. S. Peirce, rejecting Thomas Reid’s unshakeable confidence in common sense and drawing on his own thesis of the inevitable fallibility of human knowledge, proposed replacing traditional common-sensism with what he called critical common-sensism. According to this theory, slow changes really do take place in commonsense beliefs, even if not in our most central beliefs. This change can occur particularly as a response to scientific progress. Consequently, common sense in general, though highly reliable, is not beyond any possibility of doubt. Still, for heuristic reasons we should maintain a critical attitude and always be ready to submit commonsense views to the scrutiny of reasonable doubt (cf. Peirce 1905: 481-499).
   The notion that critical common-sensism can revise commonsense views has been attacked from various standpoints. One argument against it is, as we will see, that scientific progress has not greatly altered commonsense views. Another point is that we cannot use falsification to disprove the claims of common sense, because this would require a criterion to distinguish true sentences from false ones. This criterion, however, could not itself rest on common sense, since this would involve circularity…
   The answer to this last objection is that the circularity isn’t inevitable. First, because commonsense beliefs have different levels of reliability and form a hierarchy (for instance, ‘I exist’ is clearly much more reliable than ‘fire burns’). Thus, it seems possible to employ the most trustworthy commonsense beliefs, possibly in combination with scientific beliefs, to falsify, or at least restrict the domain of application of, some less reliable commonsense beliefs. Moreover, it could very well be that the most fundamental commonsense beliefs can be so intrinsically justified that an analytical justification may be all that philosophy requires.[2] Anyway, since some commonsense beliefs are vulnerable to refutation, it seems advisable to preserve the attitude of critical common-sensism.

3. Ambitious versus Modest Common Sense
I do not think this text can end debates over the ultimate value of common sense. But I believe I can demonstrate that two deeply ingrained objections against the validity of commonsense principles are seriously flawed, one based on the progress of science and the other based on changes in our worldviews (Weltanschauungen). The first objection is that science defeats common sense. This can be illustrated by the claim attributed to Albert Einstein that common sense is a collection of prejudices acquired by the age of eighteen… (Most physicists are philosophically naïve.) The second objection appeals to changes in worldviews: transformations in our whole system of beliefs, affecting deeply settled ideas like moral values and religious beliefs. In my view, these two charges against common sense are deficient because they arise from confusion between misleading ambitious formulations of commonsense truths and their authentic formulations, which I call modest ones.
   I intend to explain my point by beginning with a closer examination of objections based on the progress of science. With regard to empirical science, consider the sentences:

(a) The Earth is a flat disk with land in the center surrounded by water.
(b) The sun is a bright sphere that revolves around the Earth daily.
(c) Heavy bodies fall more rapidly than light ones, disregarding air resistance.
(d) Time flows uniformly, even for a body moving near the speed of light.
(e) Light consists of extremely small particles.

It is widely known that science has disproved all these once commonsense statements. Already in Antiquity, Eratosthenes of Alexandria was able not only to disprove the Homeric view that (a) the Earth is a flat disk rimmed by water, but was even able to measure the circumference of the Earth with reasonable precision. Galileo showed that (b) and (c) are false statements, the first because the Earth circles the sun, the second because in a vacuum all bodies fall with the same acceleration. And Einstein’s relativity theory predicted that, for an outside observer, time passes ever more slowly for a body as it approaches the speed of light, falsifying statement (d). Bertrand Russell once emphasized that the theory of relativity showed that statement (d), like some other important commonsense beliefs, cannot withstand precise scientific examination (cf. Russell 1925, Ch. 1). Finally, statement (e), affirming the seemingly commonsense corpuscular theory of light (defended by Newton and others), has been judged to be mistaken, since light consists of transverse waves (Huygens-Young theory), even though under certain conditions it behaves as though it consisted of particles (wave-particle duality).
   A point I wish to emphasize, however, is that none of the five above-cited statements legitimately belongs to correctly understood common sense, in a sense I call ‘modest.’ If we examine these statements more closely, we see they are in fact extrapolations grounded on statements of modest common sense. These extrapolations are of speculative interest and were made in the name of science by scientists and even by philosophers who projected ideas of common sense into new domains that would later belong to science. In my view, true statements of common sense – the modest statements for which (a), (b), (c), (d) and (e) could be the corresponding non-modest extrapolations – are respectively the following:

(a’) The Earth is flat.
(b’) Each day the sun crosses the sky.
(c’) Heavier bodies fall more rapidly than lighter ones.
(d’) Time flows uniformly for all bodies around us, independently of their motion.
(e’) Light has rays.

Now, my understanding is that these statements have been made for thousands of years and have always been confirmed by everyday observation. It is obvious that (a’) is a true statement if we understand it to mean that when we look at the world around us, without any ambition to generalize this observation to the whole Earth, we see that the landscape is flat (discounting hills, valleys and mountains). Statement (b’) is also true, since it is prior to the distinction between the real and the apparent motion of the sun. Because of this distinction, we know that the sentence ‘The sun crosses the sky each day’ can be true without implying that the sun revolves around the Earth. All it affirms is that in equatorial and sub-equatorial regions of the Earth we see that each day the sun rises in the East, crosses the sky, and sets in the West, which no sensible person would ever doubt.[3] Even after science proved that bodies of different masses fall with the same acceleration in a vacuum, statement (c’) remains true for everyday experience. After all, it only affirms the commonplace notion that under ordinary conditions a light object such as a feather falls much more slowly than a heavy one such as a stone... Statement (d’) also remains true, since it concerns the movements of things in our surroundings, leaving aside extremely high speeds or incredibly accurate measurements of time. (In everyday life one would not want to measure irregularities in the flow of time at the subatomic level, and no one ever comes home from a two-week bus trip to discover that family members are now many years older than before.) Finally, (e’) has been accepted, at least since Homer, as is shown by his poetic epithet ‘rosy-fingered dawn.’ (We often see sunrays at dawn or dusk or peeping from gaps in the clouds on a gloomy day.)
   But then, what is the point in comparing statements (a)-(b)-(c)-(d)-(e) with the corresponding statements (a’)-(b’)-(c’)-(d’)-(e’), given that the first set is refutable by science while the latter statements remain true? The answer is that scientifically or speculatively motivated statements exemplified by (a)-(b)-(c)-(d)-(e) have very often been viewed equivocally, as if they were legitimate commonsense statements. However, statements of modest common sense like (a’)-(b’)-(c’)-(d’)-(e’) are the only ones naturally originating from community life, being omnipresent in the most ordinary linguistic practices. They continue to be perfectly reliable even after Galileo and Einstein, since their truth is independent of science. The contrast between these two kinds of example shows how mistaken the claim is that commonsense truths have all been refuted by science.[4] What science has refuted are extrapolations of commonsense truths by scientists and philosophers who have projected such humble truths beyond the narrow limits of their original context. If we keep this distinction in mind, we find no conflict between the discoveries of science and the claims of commonsense wisdom, including ones used as examples by philosophers like G. E. Moore.
   I do not claim modest commonsense truths are in principle irrefutable, but only that no one has refuted them. Nothing guarantees, for instance, that from now on the world around us will not change in fundamental ways. A statement like (b’) can be falsified. Perhaps the Earth’s rotation on its axis will slow down so much that the sun will cease its apparent movement across the sky. In this case, (b’) would also be refuted for our future expectations. But even then, (b’) remains true for the past, while the corresponding ambitious extrapolation (b) is and surely always has been false. In fact, all I want to show is that true commonsense statements – modest ones – are much more sensible than scientifically oriented minds believe, and that science has been unable to refute them, insofar as we take them at their proper face value.
   Similar reasoning applies to the a priori knowledge of common sense, such as the belief that white is not black. To justify this new claim, consider first the case of statements like (i) ‘Goodness is praiseworthy,’ which is grammatically identical with statements like (ii) ‘Socrates is wise.’ Both have the same subject-predicate grammatical structure. Since in the first case the subject does not designate any object accessible to the senses, Plato would have concluded that this subject must refer to ‘goodness in itself’: the purely intelligible idea of goodness, existing in an eternal and immutable non-visible realm.
   Plato reached his conclusion based on the commonplace grammatical distinction between subject and predicate found in ordinary language. Under this assumption, he was likely to see a statement like (iii) ‘Goodness in itself exists’ (referring to an abstract idea) as having the form of an aprioristic commonsense truth. However, with Frege’s invention of quantificational logic at the end of the 19th century, it became clear that statements like (i) have a deep logical structure that is much more complex than the subject-predicate structure of (ii). Statement (i) should be analyzed as saying that all good things are praiseworthy, or (iv) ‘For all x, if x is good, then x is praiseworthy,’ where the supposed proper name ‘Goodness’ disappears and is replaced by the predicate ‘… is good.’ This new kind of analysis considerably reduced the pressure to countenance the Platonic doctrine of ideas.
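   The analysis just described can be written out in standard quantificational notation (the predicate letters used here are merely an illustrative shorthand, not part of the original text):

```latex
% Surface grammar assimilates (i) to (ii), subject + predicate:
%   (ii) 'Socrates is wise'         :  Wise(socrates)
%   (i)  'Goodness is praiseworthy' :  apparently Praiseworthy(goodness)
%
% Frege's analysis of (i): the apparent name 'Goodness' disappears,
% surviving only as the predicate '... is good' inside a quantified
% conditional:
\forall x\,\bigl(\mathrm{Good}(x) \rightarrow \mathrm{Praiseworthy}(x)\bigr)
```

In the analyzed form, nothing plays the role of a name of goodness; only the quantified variable and the two predicates remain, which is why the analysis removes the grammatical pressure toward a Platonic referent.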
   However, the suggestion that the subject ‘Goodness’ refers to an abstract idea clearly does not belong to modest common sense, and statement (iii), ‘Goodness in itself exists,’ isn’t even inscribed in our ordinary language. It again belongs to ambitious common sense. Statement (iii) was a speculative extrapolation by a philosopher based on an implicit appeal to the superficial grammar of natural language, and though it was probably a bad choice, it would be unjust to blame it on our modest common sense or on our ordinary language intuitions about subject-predicate grammar. Finally, it is wise to remember that quantificational logic has not undermined the grammar of ordinary language; it has only selected and made us conscious of vastly extended fundamental patterns underlying the representative function of natural factual language.
   What all these examples do is to undermine the frequently made claim that scientific progress contradicts common sense. Scientific discoveries only refute speculative extrapolations of common sense and ordinary language made by scientists and philosophers, such as the idea that the Sun revolves around the Earth or that there is a purely intelligible world made up of abstract ideas like that of Goodness in itself. But nothing of the sort has to do with the explanations given by our modest, humble common sense, the only ones long established by the shared practical experience of mankind over the course of human history.

4. Resisting changes in worldviews
Finally, I wish to consider commonsense beliefs that are challenged by changes in our worldviews. This is the case with the belief that a personal God exists or that we have minds independent of our bodies. The objection is the following. The overwhelming majority of cultures accept a God (or gods) and the soul as undeniably real. In Western Civilization for the last two thousand years, society has even punished opposition to these beliefs with varying degrees of severity, sometimes imposing the death penalty. Although they were once commonsense beliefs, today no one would say that they are almost universally accepted. On the contrary, few scientifically educated persons would accept them. Consequently, it seems that common sense can change in response to changes in our worldviews.
   My reaction to this does not differ very much from my reaction to the objection contrasting common sense with the progress of science. Beliefs bound up with our worldviews lack universality and do not necessarily belong to what we may call modest common sense. There are entire civilizations, particularly in Asia, where the idea of a personal God is foreign to the dominant religion. Regarding the soul, I remember a story told by an anthropologist[5] who once asked a native Brazilian what happens after people die. The native answered: ‘They stay around.’ ‘And later?’ asked the anthropologist. – ‘They go into some tree.’ – ‘And then?’ – ‘Then they disappear’... The lack of concern was evident. And the unavoidable conclusion is that beliefs in a personal God and an eternal soul do not enjoy the kind of universality that would be expected of modest common sense. In fact, these beliefs seem to result from the addition of wishful thinking to some commonsense views, something that has often happened in Western culture.[6]
   Ordinary language also supports the view that these beliefs are not chiefly commonsense facts: a person with religious beliefs usually does not say she knows that she has a soul independent of her body… She prefers to claim she believes in these things. And even this belief has a particular name: ‘faith,’ which is belief not supported by reason and observation (against faith there are no arguments). On the other hand, the same person would never decline to admit that she knows there is an external world and that she knows this world existed long before she was born… But modest commonsense knowledge is not a question of wishful thinking or non-rational faith.
   What these arguments show is that modestly understood commonsense truths – together with the very plausible discoveries of science – can reasonably be said to form the basis of our rationality, the highest tribunal of reason. Furthermore, since science itself can only be constructed starting from a foundation of accepted modest commonsense beliefs, it does not seem possible, even in principle, to deny modest common sense as a whole on the authority of science without also having to deny the very foundations of rationality.
   Not only do science and changes in our worldview seem unable to refute modest common sense, but even skeptical hypotheses cannot do this in the highly persuasive way one could expect. Suppose, for instance, that radical skeptics are right, and you discover that until now you have lived in what was just an illusory world… Even in this case, you would be unable to say that the world where you lived until now was unreal in the most important sense of the word. For that world would still be fully real in the sense that it was perceived with maximal intensity, was virtually interpersonal, obeyed natural laws and was independent of your will… These are criteria that when satisfied create our conventional sense of reality (see Ch. VI, sec. 18-19).

5. Primacy of Established Knowledge
The upshot of the comparison between modest common sense and science is that we can see science not as opposed to modest common sense, but rather as its proper extension, so that both are mutually supportive. According to this view, science is expanded common sense. Contrary to Wilfrid Sellars (1962: 35-78), the so-called ‘scientific image of the world’ did not develop in opposition to or even independently of the old ‘manifest image of the world,’ for there is no conflict between them. This conclusion reinforces our confidence that underlying everything we can find commonsense truths, insofar as they are satisfactorily identified and understood.
   In endorsing this view, I do not claim that unaided modest commonsense truth can resist philosophical arguments, as some of its advocates have assumed. One cannot refute Berkeley’s anti-materialism by kicking a stone, or answer Zeno’s paradox of the impossibility of movement by putting one foot in front of the other. These ideas could be wrong, but to disprove them philosophical arguments are needed, arguments showing why such skeptical conclusions only seem to make sense, grounding their rejection at least partially in other domains of common sense and possibly science. So, what I wish to maintain is that the principles of modest common sense serve as our most reliable assumptions and that some more fundamental commonsense principles will always be needed, if we don’t wish to lose our footing in the real world.
   I am not proposing that a philosophy based on modest common sense and its effects on ordinary language intuitions would be sufficient. It is imperative to develop philosophical views compatible and possibly complementary with modern science. We must construct philosophy on a foundation of common sense informed by science. That is: insofar as formal reasoning (logic, mathematics) and empirical science (physics, biology, psychology, sociology, neuroscience, linguistics...) can add new extensions and elements beyond modest commonsense principles, and these extensions and elements are relevant to philosophy, they should be taken into account. As we saw above, it was through the findings of predicate calculus that we came to know that the subject ‘goodness’ in the sentence ‘Goodness is praiseworthy’ should not be logically interpreted as a subject referring to a Platonic idea, since what this sentence really means is ‘For all x, if x is good, x is praiseworthy.’
   I will use the term established knowledge for the totality that includes modest commonsense knowledge and all the extensions the scientific community accepts as scientific knowledge. Any reasonable person with the right information would agree with this kind of knowledge, insofar as she was able to properly understand and evaluate it. It is in this revised sense that we should reinterpret the Heraclitean dictum that we must rely on common knowledge as a city relies on its walls.
   The upshot of these methodological considerations is that we should judge the plausibility of our philosophical ideas against the background of established knowledge, that is, comparing them with the results of scientifically informed common sense. We may call this the principle of the primacy of established knowledge, which admonishes us to make our philosophical theses consistent with it. Philosophical activity, particularly as descriptive metaphysics,[7] should seek reflective equilibrium with the widest possible range of established knowledge, supported by both modest common sense and scientific results. This is the ultimate source of philosophical credibility.
   Finally, if we find inconsistencies between our philosophical theories and our established knowledge, we should treat them as paradoxes of thought and should search for arguments that reconcile philosophical reflection with established knowledge. Lacking reconciliation, we should treat philosophical theses as proposals, even if stimulating ones from a speculative viewpoint, as is the case of revisionary metaphysics, paradigmatically exemplified by Leibniz and Hume. This does not mean that they require acceptance as ‘solid’ discoveries, but rather that they deserve attentive consideration, the sort we grant to the best cases of expansionist scientism. To proceed otherwise can put us on the slippery slope to dogmatism.

6. Philosophizing by examples
We must complement our methodological principle of the primacy of established knowledge with what Avrum Stroll called the method of philosophizing by examples. He himself used this method to construct relevant arguments against Putnam’s externalism of meaning (Stroll 1998: x-xi).
   Stroll was a Wittgenstein specialist, and Wittgenstein’s therapeutic conception of philosophy directly inspired his approach. According to Wittgenstein, at least one way of doing philosophy is by performing philosophical therapy. This therapy consists in comparing the speculative use of expressions in philosophy – which is generally misleading – with various examples of their everyday usage – where these expressions earn their proper and incontestable meanings – using this method of contrast to clear up confusions. He thought this therapy was only possible through meticulous comparative examination of various real and imaginary concrete examples of intuitively correct uses of expressions. This would make it possible to clarify the true meanings of our words, so that the hidden absurdities of metaphysics would become evident... Indeed, it seems that a similar critique of language, complemented by theoretical reflection, is what much contemporary philosophy of language needs to find its way back to truth.
   I intend to show that today’s metaphysics of meaning-reference suffers from a failure to consider adequately, above all, the subtle nuances of linguistic praxis. It suffers from an accumulation of potentially obscurantist products of what Wittgenstein called ‘conceptual houses of cards’ resulting from ‘knots of thought’ – subtle semantic equivocations caused by a desire for innovation accompanied by a lack of more careful attention to the nuanced distinctions of meaning that our expressions receive in the different contexts where they are successfully used.
   One criticism of Wittgenstein’s therapeutic view of philosophy is that it would confine philosophy to the limits of the commonplace. Admittedly, there is no reason to deny that the value of philosophy resides largely in its theoretical and systematic dimensions, its persistence in making substantive generalizations. I also tend to agree with this, since I believe philosophy can and should be theoretical, even speculatively theoretical. Nonetheless, I think we can successfully counter this objection to Wittgenstein’s views, first interpretatively and then systematically.
   From the interpretative side, we have reason to think that the objection misunderstands the subtleties of Wittgenstein’s position. The most authoritative interpreters of Wittgenstein, G. P. Baker and P. M. S. Hacker, insisted that he did not reject philosophical theorization tout court. In rejecting philosophical theorizing, he was opposing scientism: the kind of philosophical theorization that mimics science, reducing philosophy itself to science in its methods, range and contents, as he already saw happening in logical positivism.[8] Instead, he would countenance a different sort of theorization, particularly the ‘dynamic’[9] or ‘organic’ instead of the ‘architectonic’ (Wittgenstein 2001: 43) – a distinction he seems to have learned from Schopenhauer (Hilmy 1987: 208-9). This helps explain why, in a famous passage of the Philosophical Investigations, he argued that it is possible and even necessary to construct surveillable representations (übersichtliche Darstellungen), which can show the complex logical-grammatical structure of the concepts making up the most central domains of understanding. As he wrote:

A main source of our failure to understand is that we do not command a clear view of the use of our words – Our grammar is lacking in this sort of surveillability. A surveillable representation produces just that understanding which consists in ‘seeing connections’; hence the importance of finding and inventing intermediate cases. The concept of surveillable representation is of fundamental significance for us. It earmarks the form of account we give, the way we look at things (Is this a ‘Weltanschauung’?). (Wittgenstein 1984c, sec. 122)

Now, in a sense a surveillable representation must be theoretical, since it must contain generalizations, and this constitutes the core of any theory. (Well aware of this, Karl Popper called the statement ‘All swans are white’ a theory, adding that this theory was falsified by the discovery of black swans in Australia…) If we agree that all generalizations are theoretical, any surveillable representation, as it must contain generalizations, must also be theoretical.
   Moreover, the addition of intermediate connections not explicitly named by the expressions of ordinary language enables us to make explicit previously unconscious conventions that serve as links connecting a multitude of cases. It is even possible that because of the generality and function of these links, they never need to emerge in linguistically expressible forms (consider, for instance, our MD-rule for proper names). These links are more properly called ‘descriptive’ if they are already manifest in the expressions of a language. But it could be advisable to call them ‘theoretical’ – in the sense of a description of general principles inherent in natural language – if they are the right way to assure the unity in diversity that our use of expressions can achieve. This helps justify the far more systematic form that the philosophy of ordinary language later received at the hands of philosophers like J. L. Austin, P. F. Strawson, H. P. Grice and John Searle.
   From the argumentative side, we can say that independently of the way we interpret Wittgenstein, there are systematic reasons to believe theoretical considerations are indispensable. An important point is that philosophy can only be therapeutic or critical because its work is inevitably based on theoretical (i.e., generalized) assumptions that make possible its therapeutic efficacy. Usually Wittgenstein did not explicitly state the assumptions he needed to make his therapy especially convincing. He was an intuitive thinker in the style of Heraclitus or Nietzsche who all too often did not develop his insights beyond the epigrammatic level. In any case, such assumptions are inevitable, and the result is the same: The critical (therapeutic) and the more constructive (theoretical) searches for surveillable representations can be understood as two complementary sides of the same analytical coin (Costa 1990: 7 f.). Theoretical assumptions were the indispensable active principle of his therapeutic potions.
   Recapitulating, we have found two main methodological principles for orienting our research in this book:

A.   The principle of the primacy of established knowledge (our principle of all principles), according to which modest common sense, complemented by scientific knowledge, constitutes the highest tribunal of reason in judging the plausibility of philosophical views.
B.    The method of philosophizing by examples, according to which the best way to orient ourselves in the philosophical jungle is to test our ideas in all possible cases by analyzing a sufficient number of different examples. If we do not use this method, we risk losing ourselves in a labyrinth of empty if not fallacious abstractions.

Oriented by the two methodological principles considered above, I proceed in two ways. On the one hand, I revive some old and too easily dismissed philosophical ideas (like descriptivism, the role of empirical facts as proper truthmakers, the view of existence as a higher-order property, the verificationist view of meaning, the correspondence theory of truth…). On the other hand, I offer a kind of linguistic critique, showing that the most positive and challenging theses of the metaphysics of reference – even if original and illuminating – are no more than sophisticated conceptual illusions.

7. Tacit knowledge of meaning: traditional explanation
I will assume the almost indisputable notion that language is a system of signs governed by conventionally grounded rules, including semantic ones. Linguistic conventions are rules obeyed by most participants in the linguistic community. These participants expect other participants to comply with similar or complementary rules, even if they aren’t consciously aware of them (cf. Lewis 2002: 42). According to this view, the sufficiently shared character of language conventions is what makes possible the use of language to communicate thoughts.
   One of the most fundamental assumptions of the old orthodoxy in philosophy of language is that we lack awareness of the effective structures of semantically relevant rules governing the uses of our language’s most central conceptual expressions. We know how to apply the rules, but the rules are not available for explicit examination. Thus, we are unable to command a clear view of the complex network of tacit agreements involved. One reason is the way we learn expressions in our language. Wittgenstein noted that we learn the rules governing our linguistic expressions by dressage (Abrichtung). Later analytic philosophers, from Gilbert Ryle to P. F. Strawson, Michael Dummett and Ernst Tugendhat, have always insisted that we do not learn the semantically relevant conventions of our language (rules determining referential use of expressions) through verbal definitions, but rather in non-reflexive, unconscious ways. Tugendhat wrote that we learn many of these rules in childhood through ostension by means of positive and negative examples given in interpersonal contexts: other speakers confirm them when correct and disconfirm them when incorrect. Hence, the final proof that we understand these rules is interpersonal confirmation of their correct application (Tugendhat & Wolf 1983: 140). For this reason, it is often hard, if not impossible, to obtain a verbal statement of a meaning that could be examined. Using Gilbert Ryle’s terms, with regard to these meaning-rules, what we have is a knowing how, a skill, a competence, an automatized ability that enables us to apply meaning-rules correctly; but this is insufficient to warrant a knowing that, namely, the capacity to report verbally what we mean (1990: 28 f.).
   This non-reflexive learning of semantic rules applies particularly to philosophical terms like ‘knowledge,’ ‘consciousness,’ ‘understanding,’ ‘perception,’ ‘causality,’ ‘action,’ ‘free will,’ ‘goodness,’ which are central to our understanding of the world (Tugendhat 1992: 268). Because of their more complex conceptual structure and complicated relationships with other central concepts, these concepts are particularly elusive, resisting analysis. This insight certainly also applies to conceptual words from the philosophy of language, like ‘meaning,’ ‘reference,’ ‘existence’ and ‘truth,’ which are examined later in this book. Finally, complicating things still more, the relevant concepts are not completely resistant to additions and changes with the growth of our knowledge. Thus, until recent advances in neuroscience, bodily movement was considered essential to the philosophically relevant concept of action. Now, with sensitive devices able to respond to electrical discharges in our motor cortex, we can even move external objects using willpower. Thoughts unaided by bodily movements are now sufficient to initiate external physical motions (see neuroprosthetics and BCIs).
   In any case, lack of semantic awareness can become a reason for serious intellectual confusion when philosophers try to explain what these terms mean. Philosophers very often work under the pressure of some generalizing goal extrinsic to that required by the proper nature of their object of investigation. Consider theistic goals in the Middle Ages and scientific goals in our time, which can easily produce impressive magnifications of small findings. Wittgenstein repeatedly asserted the aforementioned view throughout his entire philosophical career. Here are some of his most telling remarks, in chronological sequence, beginning with the Tractatus Logico-Philosophicus and ending with the Philosophical Investigations:

Ordinary language is part of the human organism and not less complicated than it. ... The conventions that are implicit for the understanding of ordinary language are enormously complicated (Wittgenstein 1984g, sec. 4.002).

Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer questions the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. (1958: 24)

We can solve the problems not by giving new information, but by arranging what we have always known. Philosophy is a battle against the bewitchment of our intellect by language (Wittgenstein 1984c sec. 109).

The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something – because it is always before one’s eyes.) The real foundations of his enquiry do not strike a person at all. Unless that fact has at some time struck him. – And this means: we fail to be struck by what, once seen, is most striking and most powerful. (Wittgenstein 1984c, sec.129).

Contrary to empirical statements, rules of grammar describe how we use words in order to both justify and criticize our particular utterances. But as opposed to grammar book rules, they are not idealized as an external system to be conformed to. Moreover, they are not appealed to explicitly in any formulation, but are used in cases of philosophical perplexity to clarify where language misleads us into false illusions … (A whole cloud of philosophy is condensed into a drop of grammar.) (Wittgenstein 1984c, II xi).

Around the mid-twentieth century, a number of analytical philosophers were in significant ways influenced by Wittgenstein. They thought clarification resulting from the work of making explicit the tacit conventions that give meaning to our natural language was a kind of revolutionary procedure: We should identify most if not all philosophical problems with conceptual problems that could be solved (or dissolved) by conceptual analysis.
   Notwithstanding, except for the acquisition of new formal analytical instruments and a new pragmatic attitude leading to more rigorous and systematic attention to the subtleties of spoken language, there was nothing truly revolutionary in the philosophy of linguistic analysis and the critique of language associated with it. Analysis of the meaning of philosophically relevant terms as an attempt to describe the real structure of our thinking about the world is no more than the resumption of a project centrally present in the whole history of Occidental philosophy. Augustine wrote: ‘What, then, is time? If no one asks me, I know; if I wish to explain it to him who asks, I know not.’ (Augustine, 2008, lib. XI, Ch. XIV, sec. 17). In fact, we find the same concern already emphasized in Plato. If we examine questions posed in Plato’s Socratic dialogues, they all have the form ‘What is X?’ where X takes the place of philosophically relevant conceptual words like ‘temperance,’ ‘justice,’ ‘virtue,’ ‘love,’ ‘knowledge’… What always follows are attempts to find a definition able to resist objections and counterexamples. After some real progress, discussion usually ends in an aporetic way due to merciless conceptual criticism. Philosophy based on analysis of conceptual meaning has always been with us. It is a foundation of our philosophical tradition, even when it is hidden within its most systematic and speculative forms.

8. A very simple example of a semantic-cognitive rule
We urgently need to clarify the form of the semantic-cognitive rules intended in the position defended here, but it is very hard to analyze a conceptual rule constitutive of a philosophical concept. This is not only because the concept-word expressing it is usually polysemic, but also because the structures of central meaning-rules are much more complex and harder to clarify.
   To get a glimpse into this kind of rule, I begin with a very trivial concept-word that we can use as a model, since its logical grammar is correspondingly easier to grasp. Thus, I wish to scrutinize here the meaning of the concept-word ‘chair,’ using it as a simple model that can illustrate my approach to investigating the much more complicated philosophical concepts that shape our understanding of the world. We all know the meaning of the word ‘chair,’ though it would be hard to give a precise definition if someone asked for one. Now, following Wittgenstein’s motto, according to which ‘the meaning of a word is what the explanation of its meaning explains’ (1984g, sec. 32), I offer a perfectly reasonable definition of the word ‘chair.’ You can even find it in the best dictionaries, and it expresses the characterizing ascription rule of this concept-word:

(C) Chair (Df.) = a moveable seat provided with a backrest, designed for use by only one person at a time (it usually has four legs, sometimes has armrests, is sometimes upholstered, etc.).[10]

In this definition, the conditions stated outside of parentheses are necessary and together sufficient: a chair must be a seat with a backrest designed for a single person. These criterial conditions form an essential (indispensable) condition, also called the definitional or primary criterion for the applicability of the concept-word, to use Wittgenstein’s terminology. What follows in parentheses are complementary (dispensable) secondary criteria or symptoms: usually a chair has four legs, often it has armrests, and sometimes it is upholstered. These indications can be helpful in identifying chairs, even though they are irrelevant if the definitional criterion isn’t satisfied. A chair need not have armrests, but there cannot be a chair with armrests but no backrest (this would be a bench). Thus, with (C) we have an expression of the conventional ascription rule for the general term ‘chair,’ which should belong to the domain of what Frege calls sense (Sinn).[11]
   I find it hard to oppose this definition. Table chairs, armchairs, easy chairs, rocking chairs, wheelchairs, beach chairs, kneeling chairs, electric chairs, thrones… all conform to the definition. Cars and airplane seats are not called ‘chairs’ only because they are not moveable, though they are quasi-chairs. It is true that we can always imagine borderline cases. There could be a seat whose backrest is only 20 cm. high (is it a stool or a chair?), a chair with a seat raised only 10 cm. above the floor (is it really a seat?), a chair whose backrest was removed for some hours (did it become a backless chair or provisionally a stool?). Suppose we find a tree trunk in a forest with the form of a chair that, with some minor carving and painting, is now being used as a chair (it was not manufactured as a chair, but minor changes turned it into something we could call a real chair, depending on the relevance of the changes). Nevertheless, our definition is still reasonable despite vague borderline cases. Empirical concepts all have some degree of vagueness, and one can even argue that vagueness is a metaphysical property of reality. Indeed, if our definition of a chair had overly sharp boundaries, it would be inadequate, since it would not reflect the true vagueness of our concept-word ‘chair’ and would tend to diminish the useful extension of the concept. An often overlooked point is that what really justifies a semantic-cognitive rule is its practical applicability to common cases. That is, what really matters are cases to which we can apply the ascription rule without hesitation, not those rare borderline cases where we do not know if the ascription rule is applicable, since they are irrelevant from the practical point of view. Accordingly, the function of a concept-word is far from being discredited by a few borderline cases where we are at a loss to decide whether it is still applicable.
   Furthermore, we need to distinguish real chairs from ‘so-called chairs,’ because in such cases we are making an extended or even metaphorical use of the word. A toy chair, like a sculptured chair, is a chair in an extended sense of the word. In Victor Hugo’s novel Toilers of the Sea, the main character puts an end to his life by sitting on a ‘chair of rock’ near the ocean, waiting to be swept away by the tides... But it is clear from our definition that this use of the word is metaphorical: a real chair must be made by someone, since it is an artifact, while the unmoveable stone chair was only a natural object accidentally shaped by erosion into the rough form of a chair and then used as a chair.
   There are also cases that only seem to contradict the definition, but that on closer examination do not. Consider the following two cases, presented as supposed counterexamples (Elbourne 2011, Ch. 1). The first is the case of a possible world where some people are extremely obese and sedentary. They require chairs that on the Earth would be wide enough to accommodate two or three average persons. Are they benches? The relevant difference between a bench and a chair is that chairs are artifacts made for only one person to sit on, while benches are wide enough for more than one person to sit on at a time. Hence, in this possible world what for us look like benches are in fact chairs, since they are constructed for only one sitter at a time. If these chairs were ‘beamed’ over to our world, we would say that they remained chairs, since the makers intended them as chairs, even if we used them as benches. The second counterexample is that of a social club with a rule that only one person at a time can use each bench in its garden. In this case, we would say they continue to be benches and not chairs, since they are still artifacts designed for more than one person to sit on, even if they are now limited to single sitters. Elbourne also asked if a chair must have four legs. Surely not, since according to our definition having four legs isn’t a defining feature: there could be a chair with no legs, like an armchair, a chair with three legs or even one with a thousand legs. The property of having four legs is what we have called a symptom or a secondary criterion of ‘chair-ness,’ only implying that a randomly chosen chair will probably have four legs.
   One can always imagine new and more problematic cases that do not seem to fit the definition, but if we look at the definition more carefully we discover that the difficulty is only apparent or that they are borderline cases or that they are extensions or metaphors, or even that the definition indeed deserves some refinement.
   Finally, the boundaries of what we call a ‘chair’ can also undergo changes from language to language and over time; in French an armchair (easy chair) is called a ‘fauteuil’ in contrast to a ‘chaise’ (chair), though a French person would agree that it is a kind of chair. I suspect that thousands of years ago, in most societies one could not linguistically distinguish a stool from a chair, since a seat with a backrest was a rare piece of furniture until some centuries ago.

9. Criteria versus symptoms
To make things clearer, it is already worthwhile to broaden our consideration of Wittgenstein’s distinction between criteria and symptoms. A symptom or a secondary criterion is an entity E that – assuming it is really given – only makes our cognitive awareness A of E more or less probable. In contrast, a definitional or primary criterion is an entity E (usually appearing as a complex criterial configuration) that – assuming it is really given – makes our cognitive awareness A of E beyond reasonable doubt (Wittgenstein 1958: 24; 2001: 28).[12]
   For instance, if we assume I am given four chair legs I can see under a table, this is a symptom of a chair, since it increases the probability that a chair is behind the table. But if we assume that what is visually given to me is ‘a seat with a backrest made for only one person to sit on,’ this makes my cognitive awareness of a chair beyond doubt. The definition (C) also expresses a definitional criterion, understood as such because its assumed satisfaction leaves no possibility to doubt that we can apply the ascription rule for the concept-word ‘chair.’
   We cannot guarantee with absolute certainty that entity E (criterion or symptom) is ‘really given’: our experiences are inevitably fallible. Nonetheless, using this ‘assumed given-ness’ based on experience and an adequate informational background, we can find a probability if a symptom is satisfied and a practical certainty if a criterion is satisfied. In this last case, we claim there is a probability so close to 1 that we can ignore the possibility of error in the cognitive awareness A that entity E is given. (Correspondingly, one could also speak in this sense of practical or presumed necessity.)
   Symptoms or secondary criteria can help us identify entity E using cognitive awareness A, even if we cannot regard E as necessary. However, symptoms are of no use unless definitional criteria are also met. Four legs and armrests that do not belong to a chair would never make a chair.[13]
Terms like ‘criteria’ and ‘symptoms,’ as much as ‘conditions,’ have a so-called process-product ambiguity. We can see them as (a) elements belonging to the rule that identifies what is given, but we can also see them as (b) something really given in the world. Our semantic-cognitive rules are also criterial rules, able, with the help of imagination, to generate criterial configurations belonging to them internally. Hence, we could say that definition (C) is the expression of a semantic-criterial rule with the form: ‘Given E, we may conclude A,’ where the conclusion A is our awareness with practical certainty that E is given.
   One problem here is to know what this awareness means. I believe we can equate this cognitive awareness with our acceptance of the existence and applicability of a network of external inferential relations once the semantic-cognitive rule is satisfied. The concept of chair, for instance, consists of internal relations expressed by a definitional rule (C). But our awareness of the application of this concept arises as a maze of external relations resulting from the satisfaction of (C). For example, if I am aware that a chair exists, I can infer that it has a particular location, that I can sit on it or ask someone to sit on it, that I could possibly damage it, borrow it, etc.

10. Challenges to the traditional explanation (i): John McDowell
Supporters of semantic externalism have challenged the idea that the meanings of expressions consist in our implicit knowledge of their constitutive rules or conventions. According to their view, the meanings of expressions are predominantly relative to physical and social worlds, depending in this way only on objects of reference, and ultimately, on neurobiological processes involving autonomous causal mechanisms. In this context, there is little room for discussing the conventionality of meaning.
   As evidence for the externalist view, one can adduce our lack of awareness of the structure of the semantic rules determining the linguistic uses of our words. If we lack awareness of senses or meanings, could it be that they occur to a greater or lesser extent in a non-psychological domain? On this view, the participation of cognitive elements in meaning could in principle be unnecessary: meaning could result solely from autonomous causal mechanisms not recoverable by consciousness. In opposition to Michael Dummett’s ‘rich’ view of implicit meaning, John McDowell illustrated the externalist position on the referential mechanism of proper names, observing that:

We can have the ability to say that a seen object is the owner of a familiar name without having any idea of how we recognize it. The assumed mechanisms of recognizing can be neural machinery [and not psychological machinery] – and its operations totally unknown to whoever possesses them (McDowell 2001: 178).[14]

Some pages later, McDowell (following Kripke) asserts that the referential function of proper names would not be explained by conventionally based implicit identification rules for objects that can be descriptively recovered, because:

The opinions of speakers on their divergent evidential susceptibilities regarding names are products of self-observation, as much as this is accessible, from an external point of view. They are not intimations coming from the interior, from a normative theory implicitly known, a recipe for correct discourse which guides competent linguistic behaviour. (McDowell 2001: 190)

This view is in direct opposition to the one I defend in this book, not because it cannot in some cases be justified (see Ch. V, sec. 11), but because it doesn’t describe the typical case. I intend to show that usually the implicit application of internal semantic-cognitive rules based on criteria is absolutely indispensable for the referential function. We will see that to have reference, a usually tacit and unconscious cognitive element must be associated with our expressions and should be instantiated in some measure and at some moment in the language user’s head. And in no case is this clearer than with McDowell’s main focus: proper names (see Appendix to Chapter I).
   Here is how we could argue against McDowell’s view. If he were correct, an opinion about the given criterial evidence for the application of a proper name found through external observation of our referring behavior should be gradually reinforced by the cumulative consideration of new examples, that is, inductively. Even repetition of the same example would be inductively reinforcing! However, this is not the case. Consider our characterizing semantic-cognitive rule (C) for applying the concept-word ‘chair.’ We can see from the start that (C) seems correct. We naturally tend to agree with (C), even if we have never considered any examples of the word’s application. And this shows that speakers are indeed only confirming a recipe for the correct application that comes from inside, as a matter of tacit agreement between speakers… Admittedly, after we hear this definition, we can put it on trial by imagining a chair without a backrest, but we will see that this is a stool, which isn’t properly a chair. If we try to imagine a chair designed so that more than one person can sit on it, we will conclude that we should call it a sofa or a garden bench. We can see such imagined counterexamples only as means to confirm and possibly correct or improve the definition, discovering its extensional adequacy in a non-inductive way. This specification of meaning seems to be simply a contemporary formulation of something Plato identified as reminiscence (anamnesis) of his Ideas. We do not need to go beyond this, imagining all sorts of chairs (rocking chairs, armchairs, wheelchairs…) in order to reinforce our belief in the basic correctness of our intuitive definition.
   Now consider the same issue from McDowell’s perspective. Suppose he were right and our knowledge of the meaning of a common name like ‘chair’ were the result of self-observation from an external viewpoint. We could surely acquire more certainty that chairs are seats with backrests made for one person to sit on by observing the similarities of real chairs that we can see, remember or imagine. Inductively, the results would then be increasingly reinforced, possibly by agreement among observers about an increasing number of examples. As we already noted, even examples of people reaching shared agreement by identifying thousands of identical school chairs would be able to increase the certainty that we are coming closer to factually true evidential conditions for applying the concept-word ‘chair,’ depending on our neuronal machinery. But this is not the case. Additionally, even the idea that definition (C) captures a neuronal mechanism that isn’t the implicitly cognitive result of a shared convention seems odd.
   We conclude, therefore, that the ascription rule made explicit in the definition (C) really has the function of rescuing for consciousness the tacit convention governing the referential use of the word ‘chair.’ It seems from the start intuitive and may only require the help of confirmatory, corrective and improving examples. And what is true for a general term should presumably also be true for other expressions (consider the specific case of proper names offered in the last chapter’s appendix).
   Indeed, if all we have in these cases is a shared convention, then a psychological element needs to be involved, even if only in an implicit way, constituting what could be called a non-reflexive cognitive application of the rule. Definition (C) makes explicit a convention normally instantiated in our heads as an (implicit) non-reflexive application, whenever we make conscious use of the word ‘chair,’ which only confirms the traditional standard explanation.

11. Challenges to the traditional explanation (ii): Gareth Evans
There is another argument against the claim that we have tacit cognitive access to semantic conventions that govern our use of expressions. This argument is the work of philosopher Gareth Evans, who directly influenced McDowell. Evans invites us to contrast a person’s belief that a substance is poisonous with a mouse’s disposition not to consume it. In the case of a human being, it is a genuine belief involving propositional knowledge; in the case of a mouse, it is a simple disposition to react in a certain way to a certain smell, not a true belief. Proof of the difference is the fact that:

It is of the essence of a belief state that it be at the service of many distinct projects, and that its influence on any project is mediated by other beliefs. (Evans 1985: 337).

If someone believes a certain substance is poisonous, he can do many different things based on that belief. He can test his belief by feeding the substance to a mouse, or if he is depressed, he can try to commit suicide by swallowing a dose. He can also relate his belief that the substance is poisonous to a variety of other beliefs. For instance, he might believe he will become immune to a poison by consuming small amounts every day, gradually increasing the dose... As our knowledge of semantic rules is not susceptible to such inferences, Evans thinks, it consists not of genuine belief states, but rather of isolated states, not very different from those of the mouse. Therefore, they are not cognitive psychological states in an acceptable sense of the word. (Evans 1985: 339)
   The characterization of belief proposed by Evans is interesting and in my view correct, but his conclusion does not follow. Certainly, it agrees with most of our theories of consciousness, according to which a belief is only conscious if it isn’t insular, while an unconscious belief is insular – though there are degrees of insularity. But the crucial point is that Evans’ argument blinds us to the vast gulf between our semantic uses of language and the mouse’s behavioral disposition to avoid consuming poison.
   As a weak but already useful analogy, consider our knowledge of simple English grammar rules. A child can learn to apply these rules correctly without any awareness of doing so; and some adults who have never learned formal grammar are still able to apply these rules to many different words in many different contexts. Moreover, even if our knowledge of these grammar rules is very often unconscious, with sufficiently careful examination we can bring them to consciousness.
   The problem is made still clearer when we consider our standard example of a semantic-cognitive rule, the criterial rule (C) for the application of the concept-word ‘chair’ to the identification of chairs. Certainly, a person can derive many conclusions from this rule. She can predict that normally five persons cannot sit side by side on a single chair. She knows that one can transform a chair into a stool simply by cutting off its backrest. She knows she would like to buy a similar chair. She knows that by standing on a chair, she can reach an overhead ceiling lamp… She knows all this and much more even without having ever consciously considered the definition (C). And this only means that we can have a belief state enabling us to identify chairs, putting it at the service of many different projects mediated by other beliefs without being explicitly aware of the involved meaning-rule (C).
   We can see a continuum, beginning with more primitive and instinctively determined dispositions and ending with semantic-cognitive rules and their effects. It includes dispositions like those of mice, which cannot be cognitive because they are instinctive (it is quite implausible that a mouse could be reflexively conscious). There are also more sophisticated ones, like our unconscious beliefs, thoughts and cognitions, which we can consciously scan and reflexively access (presumably through meta-cognitive processes).
   If we accept the view that semantic rules are usually conventional rules exemplified in the simplest cases by models like (C), then we must reject the radicalism of positions such as those of Evans and McDowell. After all, the application of such rules allows us to make many different inferences and relate them to many other conceptual rules. Rule (C) has greater proximity to the rules of English grammar than to the innate dispositional regularities demonstrated by a mouse that instinctively avoids foods with certain odors. Moreover, it is clear that in such cases, unlike the mouse, for people, inferences to other beliefs are unconsciously available, even if as cognitive-semantic rules meant to be applied, they do not in themselves possess the widest availability of the really conscious belief states considered by Evans.[15]
   The root of the confusion is, in my view, that the semantic rules in question, precisely because of their apparent triviality, have not yet been investigated in a sufficiently systematic way. In an academic world dominated by science, the procedure that leads to their discovery does not seem to be something worthy of serious consideration. However, to proceed more systematically in this seemingly trivial direction is philosophically invaluable, and this is what I will do in the remainder of this book.

12. Non-reflexive semantic cognitions
I believe contemporary theories of consciousness support the traditional view according to which we have implicit knowledge of our meaning-rules. I will begin by appealing to reflexive theories of consciousness. But first, what are these theories?
   In the philosophical tradition, the idea of reflexive consciousness was already suggested by John Locke, with his theory of internal sense (Locke 1690, book II, Ch. 1, §19). Reflexive theories of consciousness were introduced into contemporary discussion by D. M. Armstrong (Armstrong 1981: 55-67; 1999: 111 f.). We can summarize Armstrong’s view as saying there are at least two central meanings of the word ‘consciousness.’ The first is what he calls perceptual consciousness, which consists in the organism being awake, perceiving objects around it and its own body. This is the simplest sense of consciousness. John Searle wrote that consciousness consists of those subjective states of sentience or awareness that begin when one wakes up in the morning after deep, dreamless sleep and continue throughout the day until one goes to sleep at night, or falls into a coma, or dies (Searle 2002: 7). By this he meant chiefly perceptual consciousness. This is also a very wide sense of consciousness, since less developed species also have this form. For instance, we can say that a hamster sedated with ether loses consciousness, because it ceases to perceive itself and the world around it. It seems justified to assume that when a hamster is awake, it has some primitive form of cognition of the world around it, as shown by its behavior. However, the breadth of this extension only suggests that perceptual consciousness is not what matters most for us. We are aware of the world not merely in the way a hamster seems to be conscious of it, but in a much more demanding, more human sense of the word. Certainly, a mouse perceives a cat, but it is unlikely to know it is facing its arch-enemy. And this also holds for internal feelings. A snake may be able to feel anger. Yet, we can hardly believe a snake is aware of this anger, since it probably has no reflexive consciousness.
   Now, what distinguishes a mouse’s perceptual awareness and a snake’s anger from our own conscious awareness of things around us and from our own feelings of anger? The answer is given by a second sense of the word ‘consciousness’ which Armstrong considers the truly important one. This is what he termed introspective consciousness and that I prefer (following Locke) to call reflexive consciousness: This is a form of consciousness that we can define as reflexive awareness of our own mental states.
   According to one of Armstrong’s most interesting hypotheses, reflexive conscious­ness emerges from the evolutionary need of more complex systems to gain control of their own processes by means of higher-order mental processing. In other words: our first-order mental events, like sensations, feelings, desires, thoughts, and even our perceptual consciousness of the world around us, can become objects of simultaneous introspections with a similar content (D. M. Rosenthal called these meta-cognitions higher-order thoughts[16]).
   According to this view, only when we achieve reflexive consciousness of a perceptual state can we say that this state ‘becomes conscious’ in the strong sense of the word. So, when we say in ordinary speech that a sensation, a perception, a sentiment or a thought that we have ‘is conscious,’ what we mean is that we have what could be called a meta-cognition of it. This shows that Armstrong’s perceptual consciousness is actually a kind of unconscious awareness, while reflexive consciousness – the true form of consciousness – is probably a faculty possessed only by humans and a few higher primates like orangutans.[17]
   Now, let us apply this theory to our tacit knowledge of semantic-cognitive rules. It is easy to suggest that we usually apply these rules without having a meta-cognitive consciousness of them and therefore without making ourselves able to consciously scrutinize their structure. In other words, we apply these rules to their objects cognitively, and these rules are ‘cognitive’ because they generate awareness of the objects of their application. But in themselves these rules usually remain unknown, belonging to what I above called unconscious awareness. We need to resort to a meta-cognitive scrutiny of our semantic-cognitive rules in order to gain conscious awareness of their content.
   One objection to using this kind of theory to elucidate tacit knowledge of our rules is that there are a number of interesting first-order theories of consciousness that do not appeal to the requirement of meta-cognition or higher-order cognition. In my view, we can classify most, if not all, of these apparently competing theories as integrationist theories of consciousness. We can call them this because they share the idea that consciousness of a mental state depends on its degree of integration with other mental states constituting the system. This is certainly the case of Daniel Dennett’s theory, according to which consciousness is ‘brain celebrity’: the propagation of ephemerally fixed contents influencing the whole system (Dennett 1993, Ch. 5). This is also the case of Ned Block’s view, according to which consciousness is the availability of a mental state for use in reasoning and directing action (Block 1995: 227-47). This is likewise the case with Bernard Baars’ theory of consciousness as the transmission of content in the spotlight of attention to the global workspace of the mind (Baars 1997). And it is also the case of Giulio Tononi’s theory, according to which consciousness arises from the brain’s incredible capacity to integrate information (Tononi 2004: 5-42). These are only some well-known contemporary first-order theories of consciousness that are historically consonant with Kant’s view, according to which a mental state, to be consciously recognized, must be able to be unified in a single Self. From the perspective of such integrationist theories, an unconscious mental state would be one that remains to a greater or lesser extent dissociated from other mental states. And all these views seem to have a degree of plausibility.
   The objection, therefore, would be that I am trying to explain implicit knowledge of language by relying only on meta-cognitive theories of consciousness, ignoring all others. To the contrary, I believe there is more than one way around this objection. The first is the following: we have no good reason to think integrationist and reflexive views of consciousness are incompatible. After all, it makes sense to suppose that a mental state’s being the object of meta-cognition is also a condition – perhaps even a necessary one – for this mental state to be more widely available and more easily integrated with the other elements constituting the system.[18]
   We can reinforce this suggestion by applying a well-known metaphor developed by Baars: A conscious state of a mind is like an actor on stage who becomes visible and therefore influential for the whole system because he is illuminated by the spotlight of attention. But it seems reasonable to think that this could happen only because some sort of meta-cognitive state provides the light to this spotlight. Hence, one could easily argue that the first-order mental state is accessible to the rest of the system and hence conscious due to its privileged selection by a higher-order cognitive state of attention.
   Of course, you are free to reject this last hypothesis as insufficiently justified. Nevertheless, I still hold that my account of our consciousness of semantic rules and the possibility of scrutinizing them is able to resist integrationist theories, since these also leave room for unconscious processes. Consequently, whether we assume a meta-cognitive reflexive higher-order view, some first-order integrationist view, or (my preferred option) a reflexive plus integrationist view, the conclusion remains the same. We can have cognitive states that make us conscious of their objects but are not in themselves objects of consciousness and conscious scrutiny, and this makes them cognitive yet, in the proper sense, unconscious. They are below the level of consciousness either because they are not sufficiently accessible, or because they are not in the spotlight of attention, or because they are insufficiently integrated with the system, or because they are not the object of higher-order cognition, or perhaps for all these reasons. Only if we bring them to conscious scrutiny can we subject them to analysis. And to such cases belongs the application of most of our semantic-cognitive rules.
   This assumption could explain why we can have unconscious or implicit (tacit) cognitions when we follow semantic-cognitive rules without being cognitively aware of the content of these rules and consequently without being able to analyze them. They remain implicit because we rarely pay attention to these rules when we apply them and because, even when this occurs, they are not there as objects of reflexive cognition. These rules are there, to use Wittgenstein’s well-known metaphor, like spectacles: when seeing things through them, we are normally unaware of the lenses and their frame. From this we conclude that we can distinguish two forms of cognition:

(i) Non-reflexive cognition: This is the case with cognitions that are not conscious, because they are not accessed by a higher-order cognitive process and/or focused on by inner attention... (e.g., my perceptual consciousness when I identify a chair.)
(ii) Reflexive cognition: This is the case of cognition accessed by a higher-order cognitive process and/or focused on by inner attention… being for this reason able to be the object of reflexive scrutiny. Any mental states (sensations, emotions, perceptions, thoughts) can be called reflexive if they are so accessed or focused. (This is what is needed for the kind of reflexive scrutiny that can make us aware of the semantic-cognitive rule for the identification of a chair as requiring a seat with a backrest, built to be used by only one person at a time.)

Once in possession of this distinction, we can better understand the implicit or tacit status of the cognitive meanings, contents or semantic rules present in the uses we make of expressions. When we say that the semantic cognitions determining the references of our expressions are often implicit (as in the case of the semantic rule defining the word ‘chair’), we are not assuming that they are typically pre-cognitive or definitely non-cognitive, lacking any mental activity. Nor are we assuming that they are completely isolated or dissociated from any other mental states (in that case, we would lack even the ability to choose when to apply them). What we mean is just that the psychological instantiations of these conventional rules are of a non-reflexive type. That is, they do not occur as semantic-cognitive rules that are objects of (higher-order or not) cognitive attention. And, as already noted, there is a reason for this: the structures of these rules are not the focus of our attention when we use the corresponding concept-word in an utterance. Our real concern is more practical, consisting primarily in the cognitive effects of applying these rules.
   As an obvious example: if I say, ‘Please, bring me a chair,’ I don’t need to explain this by saying, ‘Please, bring me a seat with a backrest, made to be used by only one person at a time.’ This would be discursively obnoxious and pragmatically counterproductive: it would be nearly impossible to communicate efficiently if we had to spell out (or even think of) all such details each time we applied semantic-cognitive rules. What interests us is not the tool, but its application – in this case, to inform my hearer I would like him to bring me a chair. In linguistic praxis, meaning isn’t there to be scrutinized, but instead to be put to work.
   A consequence of this view is that in principle our inner attention can focus on the non-reflexive semantic-cognitive rules involved in normal uses of words and scrutinize them meta-cognitively by considering examples of their application and non-application. Taking into consideration the variable functions and complexity of our semantic-cognitive rules enables the philosopher to decompose them analytically into more or less precise characterizations. It seems that this is how we become aware of the conceptual structure of our philosophically relevant expressions.

13. Conclusion
Summarizing this introductory chapter, we can say that we have found two basic methodological ideas: (A) the primacy of established knowledge and (B) the method of philosophizing by examples. We will use them as guides in this book’s analyses. Particularly relevant in this context is the idea that we can still see philosophy as an analytical search for non-reductive, surveyable representations of our natural language’s central meaning-rules. It is almost surprising to find that more than two thousand years after Plato we still have reason to accept the view that solving some of our most intriguing philosophical problems requires deeper and better analyzed explanations of what some central common words truly mean.

[1] I am only trying to supply the right images for what is lacking in explanations. In fact, externalism is a murky concept. Even after refinements, it is defined only vaguely, as the general idea that ‘certain types of mental contents must be determined by the external world’ (Lau & Deutsch 2014). This is obviously true, insofar as the expression ‘determined by the external world’ is understood as saying that any mental content referring to the external world is in some way causally associated with things belonging to an external world. As Leszek Kolakowski noted, ‘if there is nothing outside myself I am nothing’ (2001). But this is trivial enough to be accepted by any reasonable internalist (or by a very weak externalist, which amounts to the same thing). Nonetheless, externalists in their most central and radical writings have proposed to read ‘determined’ as suggesting that the locus of meanings, beliefs, thoughts and even minds is not in our heads, but somewhere in the external world… But this sounds like a genetic fallacy.
[2] See, for instance, the justification of the external world summarized in Ch. VI, sec. 19.
[3] This is a statement like that by Heraclitus of Ephesus, who noted that, ‘The sun is the width of a human foot.’ We need only lie on the ground and hold up a foot against the sun to see that this is true.
[4] I am unable to find real exceptions. Under normal circumstances fire has always burned. The idea that trees obtain energy from the earth was once a commonsense truth until photosynthesis was discovered… But this idea wasn’t a very basic or modest commonsense truth, as it could easily be refuted by the well-known fact that trees do not grow in complete darkness. The idea that a new sun crosses the sky each new day is surely absurd – but is it a commonsense idea? It was suggested by the philosopher Heraclitus and also goes beyond the humble intentions of modest common sense. Common sense is not interested in such claims, which have no relationship to ordinary life concerns.
[5] Roberto DaMatta, in an interview.
[6] It was certainly much easier to believe in the existence of a personal God and an eternal soul independent of the body a thousand years ago, before the steady accumulation of divergent knowledge arising from the progress of natural and human sciences.
[7] The expression ‘descriptive metaphysics’ was introduced by P. F. Strawson in contrast to ‘revisionary metaphysics.’ It aims to describe the most general features of our actual conceptual schema, while revisionary metaphysics attempts to provide a new schema for understanding the world. Strawson, Aristotle and Kant practiced descriptive metaphysics, while Leibniz and Berkeley practiced revisionary metaphysics (1991: 9-10).
[8] As these interpreters wrote: ‘Wittgenstein’s objection to “theorizing” in philosophy is an objection to assimilating philosophy, whether in method or product, to a theoretical (super-physical) science. But if thoroughgoing refutation of idealism, solipsism or behaviourism involves a theoretical endeavour, Wittgenstein engages in it.’ (Baker & Hacker 1980: 489). Anthony Kenny (1986) preferred to think that Wittgenstein actually held two competing views on the nature of philosophy – therapeutic and theoretical. But the unified interpretation defended here seems more charitable.
[9] As he writes, ‘We have now a theory, a “dynamic” theory (Freud speaks of a “dynamic” theory of dreams) of the sentence, of the language, but it appears to us not as a theory.’ (Zettel 1983b: 444).
[10] If you wish to avoid the word ‘seat’, you can also define a chair as ‘a piece of furniture with a raised surface and a backrest, made for only one person at a time to sit on’.
[11] As will be frequently remembered, I do not deny that referential meanings include things that cannot be easily captured by descriptive conventions, unlike case (C) – things like perceptual images, memory-images, feelings, smells. They belong much more to the semantic level called by Frege illuminations (Beleuchtungen), based on regularities instead of conventions.
[12] The precise interpretation of this distinction is a controversial issue that does not concern us here; I give what seems to me the most plausible, useful version.
[13] This is one reason why I think that by appealing to semantic definitions I am not rejecting any finding of empirical psychology. Eleanor Rosch has shown that we are able to categorize under a concept-word much more easily and quickly by appealing to prototypical cases (cf. Rosch 1999: 189-206). For example, we can more easily recognize a sparrow as a bird than an ostrich or a penguin. In the same way, an ordinary chair with four legs can be recognized as a chair more easily and quickly than can a wheelchair or a throne. This does not conflict with our definition, however, since what is in question for us is not the psychological mechanism of recognition responsible for the performance, but rather the leading structure subjacent to it. We often appeal to symptoms as the most usual ways to identify things. For instance, we identify human beings first by their faces and penguins first by their appearance, even if human faces and a penguin’s appearance are only symptoms of what will be confirmed by expected behavior, memories, genetic makeup, etc. Hence, the ultimate criterion remains dependent on a definition. (In one wildlife film, counterfeit penguins outfitted with cameras deceived real penguins. The trouble with these moronic birds is that they are overly dependent on innate principles of categorization.)
[14] The expression in brackets appears in the author’s footnote on this passage. In Dummett’s more orthodox position, McDowell sees a relapse into the psychologism justifiably rejected by Frege.
[15] Freud distinguished (i) unconscious representation able to associate itself with others in processes of unconscious thought from (ii) unconscious representation that remains truly isolated, unassociated with other representations, which for him would occur in psychotic states and whose repression mechanism he called exclusion (Verwerfung). Evans treats the relative insularity of our non-reflexive awareness of semantic rules in a way that recalls exclusion.
[16] Cf. Rosenthal 2005. In this summary, I will ignore the dispute between theories of higher-order perception (Armstrong, Lycan) and higher-order thought (Rosenthal) and still others. In my view, David Rosenthal is right in noting that Armstrong’s perceptual ‘introspectionist’ model suggests treating higher-order cognitions as if they contained qualia, and that it is implausible that higher-order processes have phenomenal qualities. Armstrong, for his part, seems to be right in assigning a controlling role to higher-order experience. Aside from that, although Armstrong doesn’t use the word ‘thought,’ he would probably agree that there is some kind of higher-order cognitive element in the introspection of first-order mental states, and this element interests us here. I prefer the term meta-cognition for these higher-order cognitions, since I am sure that not only Rosenthal, but also Armstrong would agree that we are dealing with a cognitive phenomenon. (For evaluations, see Block, N. O. Flanagan, G. Güzeldere (eds.) 1997, part X.)
[17] I will pass over the traditional idea that first-order mental states automatically generate meta-cognitions of themselves. This view would make it impossible to have perceptual consciousness without introspective consciousness. However, this view not only seems to lack a convincing intuitive basis, it also makes the existence of unconscious thoughts incomprehensible.
[18] As Robert Van Gulick wrote in the conclusion of his article in the Stanford Encyclopedia of Philosophy (2014), ‘there is unlikely to be any theoretical perspective that suffices for explaining all the features of consciousness that we wish to understand. Thus a synthetic and pluralistic approach may provide the best road to future progress.’
