The book Philosophical Semantics will be published by Cambridge Scholars Publishing (2017/1)
DRAFT OF THE INTRODUCTION
In such theories, it seems to me, there is a failure of that feeling for reality which ought to be preserved even in the most abstract studies. Logic, I should maintain, must no more admit a unicorn than zoology can; for logic is concerned with the real world just as truly as zoology, though with its more abstract and general features.
Would the old orthodoxy of the philosophy of language that prevailed in the first half of the Twentieth Century, with its insistence on the centrality of meaning, its eroded principle of verifiability, its correspondentialism, its sharp distinction between analytic and synthetic, its crude descriptivist-internalist theories of proper names and general terms, its naive monolithic dichotomy between the necessary a priori and the contingent a posteriori, be nearer to the truth than the now dominant causal-externalist orthodoxy?
I wrote this book in the conviction that this question should be answered in the affirmative. In my view, the philosophy of language of the first half of the Twentieth Century was in some ways more profound, wider-ranging and nearer to the truth than that of the current orthodoxy, and its insights were more powerful. The reason resides in the socio-cultural background. All culture is born of great conflict; and the time between the end of the Nineteenth Century and the Second World War was a period of increasing social turmoil that made all established cultural values dubious, opening the gates for sweepingly original innovation in the sciences, the arts, and also in philosophy.
However, in saying this I am not attempting to dismiss the ‘normal philosophy’ that came later. I do not reject, for instance, the interest of anti-verificationist arguments like those of W. V. O. Quine; nor do I reject the deep philosophical relevance of the new causal-externalist mainstream founded by Saul Kripke and Keith Donnellan in the early Seventies, which was later expanded by Hilary Putnam, David Kaplan and others. These contributions are relevant and in a sense even indispensable to what I am trying to do.
However, the importance of these achievements is, in my judgment, predominantly negative, since I think their results fall intrinsically short of the truth. In other words, in my view the relevance of these achievements consists mostly in their being dialectically useful challenges which, if adequately answered, would lead us to an enriched reformulation of the old descriptivist-internalist-cognitivist views of meaning and reference, albeit one more complex in an almost unrecognizable way.
The aim of the present book is to contribute in the proposed direction, though often in an indirect methodological way. My approach to the issue here consists in gradually developing – aided by a critical examination of some central views of traditional analytic philosophy, particularly those of Wittgenstein and Frege, but also of others – the defense of a purely internalist, cognitivist and neo-descriptivist view of the meaning of our expressions and the mechanisms of reference. This task will be complemented by a renewed reading of the concept of existence as a higher order property, by a reconsideration of the verificationist view of meaning, in which the main objections to it are answered and, finally, by a thoroughgoing reassessment of the correspondence theory of truth, seen as complementary to the proposed form of verificationism.
The obvious assumption that makes my project prima facie plausible is the idea that language is a system of rules, some of them directly accountable for meanings. The most central meaning-rules are those responsible for what Aristotle called apophantic discourse – representational discourse – whose meaning-rules could be generally called semantic-cognitive rules. Indeed, it is prima facie highly plausible to think that the cognitive meaning (i.e., informative content, and not mere linguistic meaning) of our representational language cannot be given by anything other than semantic-cognitive rules or combinations of such rules. This idea is plausible particularly because our knowledge of these conventions is usually tacit, implicit, non-reflexive; that is, we are usually unable to make them verbally explicit.
My ultimate aim should be to investigate the structure of semantic-cognitive rules by means of a careful examination of basic referential expressions – namely singular terms, general terms and even declarative sentences – in order to furnish an appropriate explanation of their reference mechanisms. In the present book this will be done only very partially, often in the appendices, summarizing ideas already presented in my previous work (Costa, 2014, ch. 2, 3 and 4), though these still demand development. In the body of the present book, my central aim is rather to clarify my own assumptions concerning the philosophy of meaning and reference.
In trying to develop these ideas, I realized, in retrospect, that what I wanted to do could in its core be understood as reviving a program already speculatively developed by Ernst Tugendhat in his classical work Vorlesungen zur Einführung in die sprachanalytische Philosophie. This book, published in 1976, can be seen as the swansong of the old orthodoxy, defending an anti-externalist program that was gradually abandoned during the next decade under the ever-growing influence of the new causal-externalist orthodoxy. Tugendhat’s strategy in developing this program can be conceived of in its core as a semantic analysis of the fundamental singular predicative statement, since this statement is not only epistemically fundamental, but also the most indispensable unit for building our first-order truth-functional language. In summary, given a statement of the form Fa, he suggested that:
1) the meaning of the singular term a should be given by its identification rule (Identifikationsregel),
2) the meaning of the general term F should be given by its application rule (Verwendungsregel), which I will also call characterization or, preferably, ascription rule,
3) the meaning of the complete singular predicative statement Fa should be given by its verifiability rule (Verifikationsregel), which results from the combined application of the first two rules (Tugendhat & Wolf 1983, pp. 235-6, Tugendhat 1976, pp. 259, 484, 487-8).
The verifiability rule is obtained by jointly applying the first two rules in such a way that the identification rule of the singular term must be applied first, in order to make possible the application of the general term’s ascription rule. Thus, for instance, Yuri Gagarin, the first person to orbit the earth from outside its atmosphere, gazed out of the small window of his space capsule and claimed: ‘The earth is blue!’ In order to make a true statement, he first had to identify the earth by applying the identification rule of the proper name ‘Earth’; then, based on the result of this application, he was able to apply the ascription rule of the predicative expression ‘…is blue’. In this combined application, these two rules work as a kind of verifiability rule for the statement ‘The earth is blue’. If these rules can be conjunctively applied, then the statement is true; otherwise it is false. Tugendhat saw this not only as a form of verificationism, but also as a kind of correspondence theory of truth (a conclusion contested by some readers).
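For readers who find a schematic rendering helpful, the composition just described can be sketched as a toy computational model. This is my own illustration, not Tugendhat’s formalism: the miniature ‘world’, the function names and the property labels are all illustrative assumptions, and the model deliberately ignores everything that makes real identification and ascription philosophically difficult.

```python
# Toy model of Tugendhat's scheme for a statement of the form Fa:
# the verifiability rule applies the identification rule first, and only
# then the ascription rule. All names and data here are illustrative.

# A miniature "world": objects with observable properties.
world = {
    "Earth": {"color": "blue"},
    "Mars": {"color": "red"},
}

def identification_rule(name):
    """Identify the referent of a singular term; None if nothing is identified."""
    return world.get(name)

def ascription_rule(referent, prop, value):
    """Check whether the identified referent satisfies the predicate."""
    return referent is not None and referent.get(prop) == value

def verifiability_rule(name, prop, value):
    """Combined application: identification first, then ascription."""
    referent = identification_rule(name)
    if referent is None:
        return False  # no referent identified, so the ascription rule cannot apply
    return ascription_rule(referent, prop, value)

print(verifiability_rule("Earth", "color", "blue"))  # True: 'The earth is blue'
print(verifiability_rule("Mars", "color", "blue"))   # False
```

The ordering constraint is what the sketch makes visible: the ascription rule receives its input from the identification rule and cannot be run on its own.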
One can critically ask whether it is not possible to apply the ascription rule of the predicative expression first. For example, one sees a fire at a distance in the night, without identifying what is on fire. Only after approaching it does one see that it is an old building. It thus seems that the person in this example first applied the ascription rule and only later the identification rule. However, in suggesting this we forget that in order to see the fire one first needs to direct one’s eyes to a certain spatio-temporal region, thereby localizing it: the region where something is on fire. Hence, a primitive localizing identification rule is applied first. Thus, initially the statement will not be ‘That old building is on fire’, but simply ‘There is a fire over there’ or ‘There is a fire in that place’. Later, closer to the building, one can make a more precise statement. In the same way, Gagarin could think ‘There is blue color down below me’ before saying ‘The earth is blue’, while looking out the window of his space capsule. But in these cases the ascription rule cannot be applied without the earlier application of some identification rule, even if only one that identifies a vague spatio-temporal region.
Tugendhat came to his conclusions as a result of purely speculative considerations, without analyzing the structure of these rules and without answering external criticism of the program, like the numerous objections that others have already made against verificationism. But what is extraordinary is that he was arguably right, and I believe I can make his view more plausible. I can do this by investigating in some detail the structure of semantic-cognitive rules: first by analyzing the identification and ascription rules, then by refuting the main arguments against verificationism, and finally by explaining in a more satisfactory way some fundamental attributes related to them, particularly existence and truth.
My methodological approach, as will be seen, also differs from that of the more formally oriented views opposed in this book, which are mostly inherited from the philosophy of ideal language in its positivist developments. My approach is primarily oriented by the communicational and social roles of language, which are seen as our fundamental units of analysis; in this way it is more influenced by the so-called ordinary language tradition than by the ideal language tradition.
Since I consider that a comprehensive understanding of language must situate it in its unavoidable involvement in our overall societal life, I will always begin with common sense and ordinary language, often seeking support in a more careful examination of concrete examples of how our linguistic expressions are really used.
Finally, my approach is intended to be systematic. The chapters are interconnected so that the plausibility of each is better supported when regarded in relation to the arguments developed in the preceding chapters and their often critical appendices. These appendices are placed as a counterpoint to the chapters, aiming either to justify the views expressed in them or to add something complementary.
 English translation under the title Traditional and Analytical Philosophy: Lectures on the Philosophy of Language (2016).
 In this book I will use the word ‘statement’ in most cases as referring to the speech act of making an assertion.
 According to J. L. Austin’s correspondence view, an indexical statement (e.g. ‘This rose is red’) is said to be true when the historical fact correlated with it by its demonstrative convention (linguistically expressed by the demonstrative pronoun ‘this’) is of the type established by the sentence’s descriptive convention (the given red rose type) (Austin 1950, p. 122). This is, to my mind, a first approximation of similar conventionalist strategies later employed by Dummett in his interpretation of Frege (see 1981, pp. 194, 229) and still later more cogently explored by Tugendhat.
 The ideal language tradition (inspired by the logical analysis of language) and the ordinary language tradition (inspired by the workings of natural language) represent opposite views. The first was founded by Frege, Russell and the early Wittgenstein. It was also strongly associated with the philosophers of logical positivism, mainly Rudolf Carnap. With the rise of Nazism, most philosophers associated with logical positivism fled to the USA, deeply influencing American analytical philosophy. The philosophies of W. V. O. Quine, Donald Davidson, and later Kripke, Putnam and David Kaplan, along with the present mainstream philosophy of language with its metaphysics of reference, are in direct or indirect ways a later product of ideal language philosophy. The ordinary language tradition, for its part, was represented after the Second World War by the Oxford School. It was inspired by the analysis of what Austin called ‘the whole speech act in the total speech situation’. Its main theorists were J. L. Austin, Gilbert Ryle and P. F. Strawson, although it had an antecedent in the later philosophy of Wittgenstein and, still much earlier, in G. E. Moore’s commonsensical approach. Ordinary language philosophy also affected American philosophy through relatively isolated figures like Paul Grice and John Searle, whose academic influence has been comparatively smaller. For this history, see J. O. Urmson (1956).