III
PHILOSOPHY AS CONJECTURAL ANTICIPATION OF SCIENCE
Where
there is philosophy, there will be Science.
Robert Nozick
Now I would like to initiate our descriptivist inquiry into the criteria
used to identify philosophical discourse and thought. My proposal is that, even
if we cannot establish a proper object unique to philosophical investigation,
nor anything methodologically distinctive that belongs exclusively to it, we
may nonetheless discern something peculiar to philosophy – provided we direct
our attention to the constitutive elements of its form.
1. THE INEVITABLY CONJECTURAL NATURE OF PHILOSOPHICAL INQUIRY
Even if the descriptivist metaphilosopher does not discern a distinctive
feature of philosophy in the material aspects of his inquiry, he may
nonetheless always identify a salient formal trait common to all philosophical
investigation, namely, its conjectural character: Philosophy is, in essence,
a conjectural or speculative endeavour, in the sense that philosophers are unable
to reach sufficient consensus on their ideas, doctrines, and even their most
fundamental values and conceptions.
There is no philosophy whose
results can be taken as definitive and beyond dispute, as in truly scientific
domains, such as, say, molecular biology. We can see this in Russell's
scientifically oriented, realist remarks on the uncertain nature of philosophy,
collected by Alan Wood:
Science is what we know; philosophy is what we don’t know.
Science is what we can prove to be true; philosophy is what we cannot
prove to be false.
Philosophy is something intermediary between theology and science.
Nine-tenths of philosophy is mystification; the only entirely
definite part is logic, and, being logic, it is not philosophy. [1]
The reason for the inevitably conjectural character of philosophy is not
hard to see. To reach a consensual agreement on the results of our inquiries,
we need at least to share some basic background assumptions and general
presuppositions. But philosophy lacks even a minimal set of such shared
presuppositions at almost every step of its investigations. What is
particularly important here is the absence of foundational assumptions that
could generate consensus about what we might call:
(A)
Justifying
evidence. These general
assumptions enable the formulation of common questions and the
determination of what counts as relevant data. Philosophers, however, rarely
agree on which data are supposed to ground their arguments, nor on the degree
of relevance of those data. They don’t even agree on which questions are worth
asking: what some regard as crucial, others dismiss as irrelevant or
meaningless.
(B)
Justifying
procedures. These require sufficient
prior agreement on criteria and methods for evaluating truth and value,
thereby enabling shared solutions. But here again, what counts as a
convincing argument for some may strike others as implausible or irrelevant.
Without shared assumptions of types (A) and (B), which philosophy lacks but the
particular sciences possess, it seems impossible to expect anything like
agreement on results.
To illustrate, let us return to
Plato’s doctrine of the Forms. This theory was proposed as a solution to what
might be called the problem of generality. Moreover, it was built on the
presupposition that something must be immutable to count as a legitimate object
of knowledge. Since Heraclitus had already made clear that the sensible world is
in constant flux, such a world could not be the proper object of knowledge. So
the proper object of knowledge could only be what Plato called Forms (εἴδη)
or Ideas (ἰδέαι): eternal, unchanging entities existing outside time and
space, in a purely intelligible realm. The consequence is straightforward: it
becomes possible, for example, to predicate justice of a wide variety of
visible things, insofar as they exemplify the abstract Form of justice:
Justice-in-itself.
However, the doctrine also entails
serious difficulties. One of them is this: how can a single abstract idea be
related to the many concrete individuals to which it applies? To deal with this
problem, Plato appealed to the metaphors of participation (μέθεξις) and of
copying (μίμησις). However, both metaphors prove difficult to cash out. By the
metaphor of participation, he was forced to claim that many things can
participate in the same idea without thereby dividing it into parts—a claim
that looks inconsistent. The metaphor of copying seems more promising until we
notice that it is unintelligible how things in the visible, sensible world
could copy an abstract idea belonging to the purely intelligible realm.
The very notion of the Platonic
idea faces difficulties. Critics of the doctrine may be tempted to conclude
that the Platonic concept of idea is intrinsically incoherent, precisely
because it depends on metaphors that cannot be cashed out. Are these objections
justified? To me, it seems they are. Today few philosophers defend Platonism,
though some do; Frege, in his essay “The Thought” (Der Gedanke), offered
perhaps the most refined defense of Platonism by an analytic philosopher.
Beyond that, we lack any alternative that commands universal acceptance. All we
can say, at present, is that Platonism strikes many as an implausible option.
The situation of doubt is not
itself intolerable, but it becomes desperate once we bring into the equation
the historical period in which the doctrine was formulated. In
Plato’s time, there was simply no way to conclude that his doctrine was implausible.
It is therefore understandable that Aristotle found himself entangled in webs
of difficulty when he tried, in his Metaphysics, to refute the existence
of universals conceived as separate from substance.[2]
The identification of the
historical period and the subsequent context in which a philosophical idea
emerges is fundamental to our systematic perspective. To better see this point,
consider the case of Empedocles, regarded by Darwin as a precursor of his
theory of natural evolution. Empedocles maintained that living beings arose
through the random combination of body parts; many of these assemblages were
monstrous and unfit for survival, but those that were well adapted endured. His
view was broadly right; but was Empedocles, in his time, already doing science? Certainly
not, for in his time such notions could only be stated as speculative
hypotheses, impossible to evaluate scientifically. Had he advanced these claims
in Darwin’s era, they would have been treated as scientific hypotheses, since
empirical data would have made their evaluation possible. This means that we
must always attend to the historical context in which a philosophical conjecture
is proposed, in order to determine whether it is genuinely philosophical.
Uncertainty is, in fact, to be
expected, since philosophy is concerned with building theories on shaky
foundations. This is a fallibilist conclusion, somewhat depressing,
which many traditional philosophers tried to deny, but which contemporary
philosophers have long since learned to accept as inevitable. And there are no
exceptions. Even the therapeutic philosophy attempted by the later
Wittgenstein, which was supposed to be purely descriptive, quickly revealed
itself incapable of producing consensus: what he saw as a remedy, others saw as
a placebo, or even as a poison.
This impossibility of consensus
is also the most striking point of contrast between philosophy and science. For
unlike philosophy, in everything we call ‘science’—whether empirical or
formal—there is always a sufficient degree of prior agreement about…
(A) Justifying evidence, that is, general presuppositions which
make possible the formulation of common questions and the selection of relevant
data (sense data/axioms); and
(B) Justifying procedures, that is, a sufficient prior agreement
concerning the criteria and methods for evaluating truth or the intended value,
thereby enabling the achievement of shared solutions (verifications/proofs).
These prior agreements make subsequent consensus on results possible, whether
through verification or refutation in the empirical sciences, or through the
demonstration of theorems in the formal sciences. It is precisely because
scientists have been able to establish such common grounds that, unlike
philosophers, they also can reach agreement on the outcomes of their inquiries
and sustain the expectation of progressive development.
Philosophy, by contrast, is
conjectural by its very nature. Two formal features follow from this: it is typically
argumentative, and it is inevitably aporetic in character, with
few and dubious exceptions. Philosophers are always postulating or suggesting
uncertain principles, and attempting to validate them by tracing their implications.
Since these principles are themselves conjectural, the process requires
constant critical comparison of consequences and ongoing evaluation of the
arguments used to support them. The task has no natural end. It is this speculative
character that grounds the distinctively argumentative, dialogical, and
aporetic practice of philosophy.
Can philosophy be defined solely by its
conjectural or speculative character? Not without qualification, since not all
conjectures are philosophical. We can, for example, formulate hypotheses about
the Earth’s climate conditions over the next hundred years. But such hypotheses
do not constitute philosophical inquiry. They lack a theoretical point: they
amount to plausible scientific projections of empirical events subject to
variation. In mathematics, Goldbach’s conjecture – that every even number
greater than two is the sum of two primes – also does not count as philosophical.
The reason seems to be that, like many other mathematical conjectures, it is at
least believed to be provable. In philosophy, by contrast, we do not even know
whether our conjectures can be demonstrated as true; they may well fall into
the category of so‑called pseudoproblems.
Moreover, conjectural projection, even when theoretical, does not, by
itself, sustain philosophy. Take Noam Chomsky’s hypothesis of an innate
universal grammar: although it has inspired extensive research, it resists
straightforward demonstration, yet it remains scientific rather than
philosophical. This is due not only to its specificity but also to the fact
that evaluation paths could be identified and continue to be pursued.
Similarly, speculative frameworks in contemporary physics, such as string
theory, are in principle testable but remain far from practical verification.
These theories retain, one might say, a speculative or “philosophical” trace,
yet they are regarded as scientific because physicists do not consider them so
speculative as to make it absurd to imagine a way of submitting them to the
tribunal of experience.
The distinction between
scientific speculation and philosophical speculation rests, at least in part,
on the extent to which consensual demonstration is possible. Yet this
difference, it is worth noting, need not be sharply defined.
In conclusion, it seems we
can classify as philosophical all investigative efforts that, in their own time,
are regarded as essentially conjectural, that is, views that, at the moment
of their formulation, lack any conceivable means of evaluation with respect to
their outcomes. This may be taken as the most general criterion for
distinguishing what belongs to philosophy and what does not. Yet, it remains a
rather crude and unilluminating criterion when it comes to a characterization
of the actual nature of philosophy in its central and historically most
significant domains.
2. THE IDEA OF PHILOSOPHY AS A PROTO-SCIENCE
“Why is philosophy a conjectural form of investigation?” A possible
answer to this question could be that the conjectural or speculative character of
philosophy derives, at least in many cases, from its proto-scientific nature. That
is, philosophy is conjectural because it is an enterprise that anticipates the
scientific endeavor. From this perspective, the persistent relevance of many
philosophical formulations would lie in the scientific truths that, in some
way, are prefigured within them.
A considerable amount of
philosophy has historically anticipated science. This is not a hypothetical
claim but a statement of fact, accompanied by changes in the vocabulary. As it
is well known, among the Greeks, when all the basic empirical sciences were
still in the process of formation, the term ‘philosophia’ (φιλοσοφία) was applied
indiscriminately to the entire domain of human inquiry. Only much later,
between the seventeenth and nineteenth centuries, as empirical sciences such as
physics and chemistry emerged, was the phrase ‘natural philosophy’ gradually
replaced by the word ‘science’. William Whewell’s coinage of ‘scientist’ in 1833
signaled the consolidation of the modern distinction. With the emergence of
basic sciences such as physics, chemistry, and biology, the application of the
term ‘philosophy’ gradually became more restricted, though retaining a resilient
central core.
By yielding portions of its domain to science,
the philosophical tradition has revealed itself as the cradle – or better, as Kenny
suggested, the womb – from which the basic sciences were born,[3]
or again, as their “place-holder”. This recognition of philosophy’s role as the
anticipation of science was memorably captured in a well-known metaphor by J.
L. Austin:
Philosophy
is the original sun—central, seminal, and tumultuous—which, from time to time,
sheds a portion of itself that hardens into science: a cold, well-regulated planet,
advancing steadily toward some distant final state. This happened long ago with
the birth of mathematics, and again with the birth of physics. Only in the last
century have we witnessed the same process once more, slow and at first almost
imperceptible, in the emergence of the science of mathematical logic, born from
the joint labors of philosophers and mathematicians.[4]
Austin demonstrated this
thesis in practice by devoting the last ten years of his life to the
development of a kind of grammar of communicative speaking – namely, the theory
of speech acts – which today is studied more extensively in courses on
linguistics than in those on philosophy.[5]
Indeed, insofar as philosophy is conceived
as a speculative inquiry elaborated upon a body of thought that may, at least
potentially, find its place within science, we gain a deeper reason for
understanding its conjectural, argumentative, and aporetic aspects. If
philosophy is that which can be undertaken prior to the possibility of any
scientific investigation, it becomes more intelligible that the most diverse
hypotheses may be formulated, that multiple lines of reasoning may be developed
in their justification, and that the dispute over the correct hypothesis and
the most persuasive argument may persist indefinitely.
As even Wittgenstein unexpectedly observed:
“One may also call ‘philosophy’ that which is possible before all discoveries and
inventions.”[6] This
state of affairs comes to an end only when the path of scientific inquiry is
definitively established, that is, when scholars reach a sufficient degree of
consensus regarding the fundamental presuppositions that sustain a given field of
research. Such a consensus provides a clear delimitation of what counts as
relevant data, which questions are admissible, and which procedures are valid
for assessing their answers. Once this prior agreement is broad enough to make
conceivable the production of consensual results, scholars cease to describe
their object of investigation as “philosophical” and simply redefine it as the
object of science. This gives rise to the popular saying that the tragedy of
the philosopher is that whenever he arrives at a definitive truth, he loses it
to the scientist.
3. ORIGINS AND DIVISIONS OF SCIENCE
Before discussing in detail the possibilities of deriving science from
philosophy, it is advisable to say something about the classification and
emergence of the most fundamental sciences.
The sciences are traditionally divided into
two kinds: formal and empirical. These two kinds have always maintained,
to some extent, a relation of interdependence throughout their development. The
fundamental formal sciences are logic and mathematics, whose beginnings reach
back to antiquity. Elementary arithmetic and geometry separated themselves from
philosophy already among the Greeks, when their respective objects – the number,
in the case of arithmetic, and the point and geometric forms, in the case of
geometry—came to be considered independently of the practical problems they were
originally meant to resolve. A limited form of logic also appeared early in the
Aristotelian syllogistic.
We could, without doubt, speak
of a protological and a protomathematical philosophy. Parmenides’ poem, for instance,
offers an implicit metaphysical formulation of the logical laws of identity,
non-contradiction, and even the excluded middle, in asserting that “being is
and non-being cannot be.” Plato, in turn, already possessed a rudimentary
theory of predication. The Pythagorean philosophers, impressed by the
achievements of abstract mathematics, believed that numbers were the arché
(ἀρχή), the causal principle sustaining all of reality, thereby conflating, in
their own way, the formal with the empirical. By trying to explain our lifeworld
through mathematics, they exemplified reductionism in antiquity. Yet the true
question, still philosophical today, concerning the ontological nature of
numbers remained, at that time, shrouded in obscurity.
Resuming the discussion on the
empirical sciences, I shall adopt here a revised and updated version of Auguste
Comte’s classification of what can be called the basic empirical sciences.
This classification remains quite reasonable when properly interpreted. Moreover,
it is capable of providing us with a framework for understanding the order in
which these sciences emerged as the historically demonstrated trunk of the tree
of knowledge, whose branches, in turn, become exceedingly diverse.
(a) From the greatest to
the least generality in the scope of the phenomena investigated.
(b) From the least to the
greatest complexity of these phenomena, insofar as the exactness of a
science is inversely proportional to the complexity of the objects it studies.
By modifying and updating Comte’s
original classification, we can distinguish five basic empirical sciences: physics,
chemistry, biology, psychology, and sociology.[7] The following scheme summarizes this classification:
PARTICULARITY                                COMPLEXITY

   5. sociology    }
   4. psychology   }  human sciences
         (a)             (b)
   3. biology      }
   2. chemistry    }  natural sciences
   1. physics      }

   (formal sciences: logic and mathematics)

GENERALITY                                   SIMPLICITY
From (1) to (5), we have what we can call the basic empirical
sciences, organized in a hierarchy in which each presupposes the preceding one.
Physics, which depends on the development of mathematics, occupies the foundation
of this structure. It is rightly regarded as the fundamental empirical
science, for its scope encompasses the entirety of empirical reality without
exception: atoms, subatomic particles, and elementary forces, subject to its
laws, are believed to permeate the entire universe. Its principles are also the
simplest, which allows for the broadest range of applications. This does not
mean that a theory in physics, together with its applications, cannot be
complex. Still, physics rests on principles or laws simple enough to apply, as
far as we know, to the whole universe. To illustrate this point, consider
general relativity: it involves daunting, complex mathematics, yet it relies on
simple principles, such as the principle of equivalence, which states that
gravitational force and acceleration are indistinguishable. Chemistry, in turn,
has a more restricted scope, focused on phenomena arising from the combination
of atomic elements. It is divided into two major areas: inorganic chemistry,
concerned with non-carbon-based compounds, and organic chemistry, consisting of
carbon-based compounds that can be much more complex. The result is that
inorganic chemistry applies to planets and stars, whereas organic chemistry
applies only to the biochemistry of living beings, organic materials such as
crude oil, and
synthetic materials such as plastics. With an even narrower scope, biology is
devoted to the study of living beings, whether plants or animals, constituted by
organic material. Psychology is limited to a small subset of living beings:
those that exhibit mental phenomena from which consciousness emerges. Finally,
sociology has the most restricted scope, focusing exclusively on the study of
human societies in both their static and dynamic forms.
An increase in the complexity
of the principles involved compensates for the progressive loss of generality
in the phenomena investigated. This occurs because more complex phenomena can
only emerge in more specific and delimited contexts, such as those of the higher
basic sciences. Restricting ourselves to the natural sciences: chemistry is
more complex than physics, organic chemistry more complex than inorganic
chemistry, and life a still more complex phenomenon. Turning to the human and
social sciences, we
see phenomena that seem even more complex. Consider, for example, the vast
number of variables that would need to be computed to predict the fate of a human
being or future socio-political events, and compare it with the mathematics
necessary to predict a lunar eclipse.
It must be emphasized that the
human and social sciences distinguish themselves from the natural sciences by
incorporating an interpretative dimension referred to in psychology as empathy
and in sociology as comprehension (Verstehen, in Max Weber’s
formulation), or as the sociological imagination (C. Wright Mills). In
other words, to apprehend psychological and social phenomena, one must turn to
the mind itself as a mirror of what is to be understood, placing oneself in the
mind of others, or of others in groups of others, in order to discern how they
feel or respond in given situations. Admittedly, the addition of this
interpretive element renders the attainment of consensus in these sciences more
difficult. Yet this does not render such results impossible, for interpretation
is not, at least in principle, incapable of scientific clarification.
The relations between
generality and complexity also shed light on the order of our cognitive
apprehension of the basic sciences, as well as on the very sequence of their
historical development. Indeed, to learn physics, it is, in principle,
unnecessary to possess any prior knowledge of chemistry. Chemistry, however,
presupposes sufficient understanding of its physical foundations. Likewise, to
grasp the phenomena of life more fully, one must turn to organic chemistry in
the form of biochemistry, for it is through this discipline that the pillars of
genetics and molecular biology are established. The study of psychology, in
turn, requires sufficient knowledge of biology. Finally, the comprehension of
sociology demands some familiarity with psychology, including its
interpretative dimension, and thus tends, to a certain extent, to presuppose
the preceding sciences.
These interdependencies help us
to understand why the development of the more narrowly scoped and complex
sciences generally depends upon the progress of the more general and
simpler ones. Such dependence is not confined to theoretical foundations but
also encompasses the technological and instrumental advances achieved by the
more general sciences. How, for instance, could biology have advanced without
the invention of the microscope, whose construction rests upon the principles
of optics, themselves directly derived from physics? Thus, the progress of the
higher basic sciences is conditioned not only by the accumulated knowledge of
the earlier sciences but also by their practical applications, which enable new
techniques of investigation and comprehension.
These considerations allow us to justify the order
in which the basic sciences came into being. The first to emerge was physics,
during the Renaissance. Although rudimentary elements of this science were
already present in Antiquity (for instance, in Archimedes’ work on specific
gravity), it was only after Galileo that experimental physics consolidated
itself as a unified body of scientific ideas. Chemistry, in turn, arose
as a distinct science only between the eighteenth and nineteenth centuries.
Psychology gradually evolved into scientific experimental psychology at the
turn of the twentieth century, though its legitimacy as a science as a whole remains
debated, particularly from the standpoint of “depth psychology,” as advanced by
Freudian psychoanalysis. Sociology, meanwhile, was structured as an independent
complex theoretical body with scientific aspirations only through the
contributions of thinkers such as Karl Marx, Max Weber, and Émile Durkheim.
Both psychology and sociology became independent of philosophy only partially,
in a gradual, staged, and often conflict-ridden process.[8]
These dependencies help to explain why the process of establishing
psychology and sociology as sciences has been far slower, more laborious, and
more incremental. We observe a leap, a genuine rupture that could be called an
epistemic turning point, between science and what preceded its
emergence,[9]
with the birth of physics as a body of scientific knowledge through Galileo and
Newton in the seventeenth century; with the birth of chemistry through
Lavoisier, Cavendish, and others at the end of the eighteenth century; and even
with the much more gradual organization of biology as a scientific body of
knowledge throughout the nineteenth century, by figures such as Louis Pasteur,
Claude Bernard, Gregor Mendel, and Charles Darwin.
These ruptures occurred when,
in addition to the accumulation of knowledge, appropriate methods of
investigation were discovered – methods capable of generating consensus
regarding the predictive and explanatory power of theories coalesced into a
unified body. Yet the more complex and dependent domains of inquiry become, the
lower the likelihood of abrupt leaps or ruptures. This is precisely what we
encounter in the more complex fields of psychology and the social sciences,
where no single historical moment of epistemic rupture can be clearly
identified.
Thus, the more gradual constitution of the human sciences is closely
related to the hierarchy of the sciences. It involves a far greater complexity
and diversity of phenomena to be investigated, with intervening variables that tend
to multiply exponentially. In addition, evaluative procedures in these fields
require much broader foundational assumptions, often developed through the basic
sciences and their applications.
Nevertheless, the principal
reason for the difficulty in rendering the human sciences fully scientific lies
in their irreducibly interpretative element (Verstehen, empathy,
social imagination), which depends upon constant reflexive examination and
plays a central role in psychology and the social sciences. This interpretive
dimension encompasses aspects not accessible to direct interpersonal
observation, and these therefore cannot so readily be treated in an objective manner.
Yet it should not be regarded, as behaviorists such as J. B. Watson in
psychology and social positivists such as Émile Durkheim attempted, as
hopelessly subjective. For, as John Searle has noted, what is ontologically
subjective need not, by that very fact, also be epistemically subjective.[10]
In summary, the full
development of the human sciences depends both upon the maturation of the basic
sciences and upon the advancement of the technical applications made possible
by them. We may ask, for instance, to what extent psychology might present itself
as more genuinely scientific in the future, insofar as it becomes integrated
into more fully developed neuroscientific foundations. Beyond this, their
progress also relies on an expansion of the epistemic possibilities inherent in
the empathetic comprehension of human beings, both at the individual and social
levels.
There is a reason why the
sciences considered within the scheme I derive from Comte deserve to be called
“basic.” The other empirical sciences are, in general, specialized subdivisions
of these fundamental sciences, such as linguistics and economics, which fall
within the domain of the social sciences, or else result from the combination
of their principles, applied locally to specific regions or objects. Examples
of this second type include history, which draws upon psychology and sociology,
among other disciplines, to understand the temporal transformations of human
societies; ethnology, which applies psychological and sociological concepts to
the study of culturally distinct groups; geology, which employs the foundations
of physics and chemistry to investigate the structure and dynamics of the
Earth; and neurophysiology, which relies upon biochemistry and biophysics to
explore
cerebral functioning.
There are also “open” sciences,
whose evolution depends upon future events, such as history and economics.
Although political economy has produced schools of thought that have made
significant contributions since Adam Smith’s founding work, it remains marked by
uncertainty, given the complexity and constant transformation of its object of
study. Other sciences stand out for their intrinsic complexity, such as
neuroscience, which investigates the brain from multiple disciplinary
perspectives. The number of possible subdivisions and local combinations
appears virtually unlimited. Yet our aim here is not to propose an exhaustive
or precise classification of the sciences, but rather to classify the basic
sciences to provide a minimal conceptual framework for investigating the
relations between philosophy and science.
It is important to emphasize
that the emergence of the basic sciences has consistently displaced purely
philosophical speculation within the domains to which they pertain. The
consolidation of physics as an experimental science, for instance, brought an
end to the reign of Aristotelian speculative physics – at least insofar as it
did not overlap with metaphysics, which to this day has not been superseded by
any science. A similar fate befell the doctrine of the four elements, first
proposed by Empedocles in the fifth century BCE and later adopted by Aristotle.
This doctrine prevailed in Western thought for more than two millennia, only to
be seriously challenged in the seventeenth century by Robert Boyle. The same
occurred with vitalism, the doctrine that vital phenomena were governed by
immaterial impulses distinct from physical forces, a view ultimately undermined
by the development of molecular biology. It should be noted, however, that even
in the twentieth century a philosophical reformulation of vitalism was defended
by Henri Bergson in his theory of the élan vital.
In this and the following
chapters, I will adopt Comte's modified classification of the basic sciences,
as I consider it, broadly speaking, to be the true trunk of the family tree of
the sciences. What I aim to do here is establish a bare foundation that will
help us understand the relationship between philosophy and science.
4. SOME EXAMPLES OF PROTOSCIENTIFIC PHILOSOPHICAL INSIGHTS
In this section, I shall examine several instances in which
philosophical ideas anticipated concepts later developed within the sciences –
specifically in mathematics, physics, chemistry, biology, and psychology.
These examples may prove misleading, as we shall
see, insofar as they pertain only to anticipations within the central trunk of
well‑established basic sciences. They do not extend to the derivative sciences,
which are less familiar or even still undiscovered, and which may differ considerably.
This limitation can foster the false impression that our present philosophical
inquiries ought to relate to the sciences of the future in the same way that philosophical
insights from a more or less remote past have been related to our empirical
basic sciences. Such an assumption may well underlie the persistence of a
stubborn positivist scientism, still prevalent today, which tends to reduce
philosophy of science to its relation with the most firmly established
sciences, such as physics, thereby obstructing the very development of science
itself. (For this tendency, Comte himself employed terms such as “usurpation,”
“hypertrophy,” and “annexation.”) If we remain cautious enough in considering
this point, however, the examples that follow will be instructive.
My initial examples concern
logic and mathematics. As noted in the preceding chapter, Parmenides, through
his doctrine that “being necessarily is, whereas non‑being cannot be,”[11]
may be seen as anticipating the three so‑called “laws of thought,” namely: (i) the
principle of identity, according to which “being is,” formally expressed as “A
= A” or “A → A,” already identified by Plato; (ii) the principle of
non‑contradiction, which in Aristotle’s formulation asserts that “it is
impossible that the same thing should at the same time both belong and not
belong to the same subject in the same respect,”[12]
formally represented as “¬(A ∧ ¬A)”; and (iii) the principle of the excluded
middle, according to which a given predicate must either belong or not belong
to any subject, there being no third possibility, formally
expressed as “A ∨ ¬A.”
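The three principles can be checked mechanically by enumerating truth values. The sketch below (function names are my own, purely illustrative) verifies that each formula is a tautology of two-valued propositional logic:

```python
# Truth-table check of the three classical "laws of thought".
# Each function encodes one formula over a single propositional variable.

def identity(a):           # A -> A, rendered as (not A) or A
    return (not a) or a

def non_contradiction(a):  # not (A and not A)
    return not (a and not a)

def excluded_middle(a):    # A or not A
    return a or (not a)

# A one-variable formula is a tautology iff it holds for both truth values.
for law in (identity, non_contradiction, excluded_middle):
    assert all(law(a) for a in (True, False))

print("identity, non-contradiction, excluded middle: all tautologies")
```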
Let us now consider an example
of anticipation within mathematics. It may be found in Aristotle’s response to
Zeno’s celebrated paradox of motion, according to which Achilles would be
unable to overtake a tortoise in a race if the latter were granted a head
start. For whenever Achilles reached the point where the tortoise had been, the
animal would already have advanced somewhat further.
Aristotle replied by observing that the time
required for Achilles to traverse each spatial interval is proportional to the
size of that interval. Since these intervals become progressively smaller, the
time needed to cross them likewise diminishes without bound in proportion.
Thus, although there are infinitely many points to be reached, the total time
required to reach them is finite, so that Achilles soon overtakes the tortoise.[13]
This reasoning strikingly foreshadows the modern notion of the limit –
though the concept itself would only be rigorously formalized many centuries
later, with the development of infinitesimal calculus by Leibniz and Newton.
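Aristotle’s point can be put numerically. The following is a minimal sketch under assumed illustrative values (Achilles at 10 m/s, the tortoise at 1 m/s, a 100 m head start): the stage times shrink geometrically, so their sum converges to the finite closed-form catch-up time head_start / (v_a − v_t).

```python
# Numerical sketch of Aristotle's reply to Zeno (speeds are illustrative).
# At each stage Achilles crosses the gap left by the previous stage;
# stage times form a geometric series whose sum is finite.

v_a, v_t, head_start = 10.0, 1.0, 100.0   # assumed values, m/s and m

gap, total_time = head_start, 0.0
for _ in range(50):          # 50 stages are more than enough for convergence
    t = gap / v_a            # time for Achilles to cross the current gap
    total_time += t
    gap = v_t * t            # meanwhile the tortoise advances this far

# Closed form: head_start / (v_a - v_t) = 100 / 9 seconds
print(total_time)            # converges to 100/9 ≈ 11.111 s
```

Infinitely many stages, yet a finite total time: exactly the structure later made rigorous by the concept of the limit.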
Considering empirical examples,
we encounter the notion advanced by Anaximander (c. 610–546 BCE), according to
which the Earth is not supported by anything, but remains suspended because it
is equally distant from all things, thus making it impossible for it to move
simultaneously in opposite directions.[14]
Karl Popper argued that this
was one of the boldest ideas in the entire history of human thought, for it
paved the way for the theories of Aristarchus, Copernicus, and others. To
conceive of the Earth as freely situated in the midst of space, and to assert
that “it remains motionless because of equidistance and equilibrium,” is, in
some measure, to anticipate the idea of immaterial and invisible gravitational
forces that would be formally articulated by Isaac Newton many centuries later.[15]
Although
anticipatory of physics, Anaximander’s hypothesis cannot properly be regarded
as scientific, since at the time it was formulated, there existed no procedure
for assessing truth capable of leading to consensus. By contrast, the ideas of
Copernicus and Newton could be subjected to tests and validations, thereby
attaining consensus with mathematical precision regarding their truth – a condition
of scientificity that was already possible in their respective epochs.
A well-known example
of anticipation is the atomistic theory of Democritus and Leucippus (5th
century BCE), according to which visible matter is composed of invisible,
physically indivisible atoms that possess innumerable distinct forms. This theory
constitutes a speculative anticipation of what we might call the conceptual
framework of an atomic theory of matter, though not of its specific content.
Similarly, the theory of the four elements (earth, water, air, and fire), first
proposed by Empedocles, anticipated, though only in a loose and ultimately
illusory way, the conceptual structure of Mendeleev’s periodic table with its
ordering of the fundamental chemical elements.
In the
field of cosmology, the Presocratics offered anticipations of both the
contemporary Big Bang theory and the hypothesis of a pulsating universe. The
anticipation of the Big Bang theory, according to Anthony Kenny, was suggested
by Anaxagoras. I shall be content here to quote Kenny’s exposition:
All things were together, infinite in number, infinite in smallness; for
the small was also infinite. Since all things were together, none was recognizable
because of its smallness. (…) That primeval little stone began to spin, casting
out the surrounding ether and air, thus forming the stars and the sun and the
moon… But the separation was never complete, for even today in each thing there
remains a portion of everything else. (…) The expansion of the universe
continues to this day and will continue into the future. Perhaps it has
generated other worlds beyond our own, with animals, people, cities, and
products of the earth, just as happens with us, and also with sun and moon,
just as in our case.[16]
Anaxagoras (c. 450 BCE) was not only the first
to suggest a theory resembling the Big Bang, but also the first to propose the
existence of other planets in the universe, inhabited by civilizations as
advanced as our own.
As
for the theory of the pulsating universe, it was anticipated by Empedocles (c.
450 BCE), who conceived the cosmos as governed by two alternating forces: Love
(Φιλία) and Strife (Νεῖκος). When Love prevails, the universe merges into a unified
whole; when Strife dominates, the universe fragments into multiplicity, in an
eternal cycle of union and separation.
Regarding
the contemporary pulsating or oscillating universe, its possibility was
mathematically formulated by Richard Tolman. According to this hypothesis,
after the expansion caused by the Big Bang, gravity would eventually overcome
the expansive force, leading the universe to contract in a collapse known as
the Big Crunch. From that moment, the process would begin anew cyclically until,
with the constant and inevitable increase of entropy, the universe would reach
its final death.[17]
Another example of anticipation
of science was Anaximander's hypothesis, seemingly pointing toward biological
evolution.[18]
He asserted that life originated in water, that living creatures could be
spontaneously generated from moisture, and that human beings evolved from lower
species, since in their earliest years they would have perished had they been
as defenseless at birth as they are today. It is true that Anaximander’s ideas
(6th century BCE), when taken in a strict sense, were deeply mistaken, for he
believed in spontaneous generation and that human beings were initially
gestated within fish, emerging fully formed rather than developing gradually.
Empedocles, however, went
further in the right direction. He believed that living beings were born from the
combination of the elements, specifically, two parts of water, two of earth,
and four of fire. From this, parts of animals were formed. Certain monstrosities
appeared, such as oxen with human heads and, conversely, human heads upon oxen,
as well as androgynous creatures, fragile and sterile. Only the fittest
survived, giving rise to present-day animals and human beings. Charles Darwin
hailed Empedocles as the first person to foresee natural evolution.[19]
One might object here that
sentences such as “The earth is suspended in empty space” and “Man developed
from lower forms of life,” which can be extracted from the writings of the Pre-Socratic
philosophers, are today scientific truths. Were they, then, philosophical
truths that later became scientific? In a certain sense, yes. The ideas expressed
in those sentences have come to be regarded as scientific by us. Nevertheless,
this does not imply that they were not philosophical for other men in other
times, for they only become self-evident when tied to the contemporary context
of their utterance – that is, at least after Copernicus and Darwin.
Precisely because we are
examining the ideas of thinkers from the past, it is essential to consider them
within the context in which they emerged. In that setting, given the absence of
evidential support, such ideas could only be addressed in a speculative manner.
Thus, the predicate ‘…is philosophical’ acquires an appropriate meaning only
when set against the historical context in which philosophical ideation was born.
When we situate these statements within the works of the Pre-Socratic
philosophers – at a time when there was virtually no evidential foundation – we
are compelled to regard them as philosophical speculations. Otherwise, we would
be obliged to treat them as scientific generalizations, which would be anachronistic.
The final example pertains to
psychology, a field of inquiry that has not yet fully consolidated as science.
Here we refer to Plato’s doctrine of the tripartition of the soul or psyche (ψυχή).
According to this doctrine, the soul is formed by three distinct parts:[20]
(1) The first part is the most primitive, constituted by bodily appetites,
desires, and needs.
(2) The second part is the spirited element, formed by emotional impulses such
as courage, anger, ambition, pride, friendship, honor, loyalty, and so forth.
(3) The third part of the soul is constituted by reason, which functions as an
inhibitory principle commanding the others.
In the dialogue Phaedrus, Plato compared reason to the charioteer
of a winged chariot to which a pair of horses is harnessed: one, noble,
representing the spirited element and striving to ascend toward the realm of Ideas;
the other, ignoble, symbolizing the lower appetites and attempting to drag the
chariot back to the earthly world, thereby imposing great difficulty upon its
driver.[21]
Now, Plato’s doctrine of the tripartition of
the soul has, to some extent, been corroborated by neuroscience. According to
the renowned neurophysiologist Paul MacLean[22],
author of the triune brain theory, the brain comprises three interrelated,
evolutionarily derived “computers”: the reptilian brain, the limbic
system, and the neocortex. The reptilian brain corresponds to the
medulla oblongata and the basal ganglia. It is responsible for the organism’s
instinctive dispositions, such as respiration, heartbeat, hunger, and sexual
drive. The limbic system governs emotional memory, mood, and motivation.
Finally, the neocortex, which in humans occupies approximately 78% of the
encephalic mass, is responsible for rational thought, language, decision-making,
and consciousness.
Although the triune brain theory is now by
many regarded as only a roughly acceptable oversimplification, a “didactic
device” about the way the brain works, there are reasons to see this objection
as a symptom of reductionist scientism[23]. Anyway,
its resemblance to Plato’s conception of the soul, composed of desire (the reptilian
system), emotion (the limbic system), and reason (the neocortex), is remarkable.
From the perspective of psychology, Plato’s
theory of the tripartition of the mind may also be regarded as a precursor to
Sigmund Freud’s structural theory of the mind.[24] According to the latter, the mind is likewise
divided into three instances:
1) The Id (Es), entirely unconscious, represents
instinctual impulses and basic drives.
2) The Superego (Über-Ich), generally unconscious,
corresponds to the introjected paternal figure and functions as the moral
instance, demanding the realization of ideals.
3) The Ego (Ich), largely unconscious, is directly connected
to perception, conscious will, and motor control.
The dynamics among these instances, according to Freud, are governed by
the Ego, which seeks to balance the demands of the Superego with the impulses
of the Id.
The theories of Plato and Freud exhibit only partial
correspondences. The Freudian Id largely corresponds to the bodily appetites
described by Plato, but it also encompasses volitional elements such as anger,
which the philosopher attributed to the spirited part of the soul. The
Superego, in turn, bears a certain resemblance to Plato’s inhibitory element,
symbolized by the noble horse in the allegory of the winged chariot. The Ego
appears to correspond to the Platonic rational principle, the charioteer
charged with reconciling the opposing demands of the Id and the Superego. Freud
(like Nietzsche) would regard Plato as an escapist who, unconsciously, downplayed
the significance of the hedonistic dimension of the human psyche. As Freud declared
in an interview, the life of the ordinary man is reduced to two great driving
forces: “sex and money.” Isn’t here someting be missing? Freud considered Marx
psychologically naïve, but his view of human nature was as somber as a painting
by Hieronymus Bosch.
When
we confront these theories, we encounter a difficulty similar to that faced
when comparing philosophical doctrines. Freudian psychoanalysis, though in many
ways an unsurpassed work, exhibits its own shortcomings and does not fully
satisfy the criteria of scientific inquiry, particularly if such inquiry requires
consensus among specialists regarding its results. Indeed, its practitioners,
however qualified, never achieved such agreement, which contributed to the
fragmentation of psychoanalysis into various competing schools, each guided by
its own “intellectual mentors.” Even so, whereas Plato’s proposal was based
essentially on his personal experience and general observations of human
behavior, Freud’s theory derived its conclusions from a systematic method of
free association applied to numerous patients in a controlled medium. Moreover,
it introduced a particularly significant theoretical element – the unconscious –
which was investigated by him in a less metaphorical and far more detailed
manner. Within this context, Freud’s structural theory of mind seeks to provide
a more comprehensive understanding, and indeed appears to do so. Although
uncertain and open to questioning, it offers a conceptual framework more
suitable for evaluation, at least with respect to the categories of
contemporary clinical psychology.
Is it
possible to identify, throughout this trajectory, a linearly clear evolution?
Unfortunately, no. Not everything Plato wrote about the tripartition of the
soul was assimilated by psychoanalysis, and even less so by theories like
that of the triune brain. Consider, for instance, the association Plato
established between the three parts of the soul and the four cardinal virtues
of Hellas: the rational part corresponds to wisdom; the volitional part,
to courage; and the appetitive part, when subjected to the control of
the will, to temperance. Finally, it is from the harmony among these
three dimensions of the soul, integrated into a whole, that the virtue of justice
emerges. None of this can be found in Freud.
I wish
to conclude this section by distinguishing between good and bad
anticipations. Most of the examples considered may be seen as good
anticipations: Anaximander’s ideas about the shape and location of the Earth,
Empedocles’ idea of biological selection… these show, in an obviously very
rudimentary way, the direction to be followed by science. And Plato’s theory of
the tripartition of the soul roughly anticipates the structure both of a supposedly
scientific theory (the triune brain) and of a theory close to science (psychoanalysis).
Nevertheless, some philosophical
endeavors may be regarded as “misguided anticipations”, insofar as they pointed
toward erroneous paths. The theory of the four elements, proposed by Empedocles,
stands as a clear example. It took more than two millennia for Robert Boyle, in
the seventeenth century, to demonstrate its inconsistency. Another notorious
case emerged in the eighteenth century with the phlogiston hypothesis, which
posited the existence of a substance released by fire and responsible for
combustion. This notion proved entirely mistaken and delayed the advancement of
chemistry for nearly a century. The most emblematic instance of misguided
anticipation, however, was Aristotle’s aprioristic physics.[25]
Accepted by the Church as dogma, it
significantly hindered the development of experimental physics throughout the
Middle Ages, until Galileo’s experiments rendered it untenable.
5. FISSION
Anthony Kenny, reflecting on the way in which philosophical thought gives
way to science, observed that this process occurs through a kind of parturition,
which he called “fission.”[26]
To illustrate this concept, Kenny turned to an example related to one of the
central problems of seventeenth‑century philosophy: the question of innate
ideas.
Initially, the problem was
formulated in the following way: which of our ideas are innate and which are
acquired? After Kant, this question, originally obscure, was divided into two
distinct inquiries: on the one hand, the investigation into the respective roles
of heredity and environment in the formation of our ideas; on the other, the
inquiry into how much of our knowledge can genuinely be regarded as a priori.
According to Kenny, the first question was transferred to psychology, whereas
the second, concerning the justification of knowledge, remained within
philosophy. Subsequently, the residual issue of a priori knowledge underwent a
further division, giving rise to both philosophical and non‑philosophical
problems. Among its developments emerged the distinction between analytic and
synthetic propositions. For Kenny, the notion of analyticity found a precise
formulation in the works of Frege and Russell by means of mathematical logic.
The question “Is arithmetic analytic?”, he wrote, received a rigorous mathematical
answer in Kurt Gödel’s incompleteness theorem. Despite these advances, residual
questions concerning the nature and justification of mathematical truth remained
unresolved, constituting the final points of philosophical contention. The
following scheme summarizes Kenny’s account of this process:
Philosophical problem of innate ideas
    │ fission
    ├── psychological question on the role of heredity and of the
    │   environment in the constitution of our ideas
    └── philosophical problem of how much of our knowledge is a priori
            │ fission
            ├── logico-mathematical questions on the extension of
            │   apriority in mathematics
            └── philosophical questions on the remaining nature and
                extension of our a priori knowledge in general
It does not matter whether one fully agrees with the example. What
matters is that the developmental model suggested here is coherent. It
is a model in which broad, initially ambiguous philosophical problems gradually
decompose into distinct parts. Some crystallize into scientific questions,
susceptible to consensual answers, while others remain within the philosophical
domain. The same process tends to repeat itself with the remaining
philosophical questions, perhaps even leading to their complete disappearance,
if that should be the case.
When we consider this process
of fission, the most important point to emphasize is that the loss of part of
philosophy to science produces transformations that may affect the entire
organization of the remaining field of philosophical inquiry. As the
example illustrates, after fission, the portion of the problem that remains
philosophical must be reformulated, a process that is bound to generate new
conjectures. Yet these transformations do not remain confined. Other related
problems belonging to the same domain of philosophical investigation may also
need to be accommodated to the new scenario, together with their
speculative responses. This adjustment occurs through a more or less profound reformulation
of the problems and their responses, as well as through a repositioning of their
relations to other problems and responses within philosophy.
This final point can be clarified by an
example: Kant’s reformulation of the lingering philosophical problem of innate
ideas, articulated in his doctrine of knowledge and a priori concepts,[27]
ultimately led to subsequent reconfigurations of questions concerning the
concepts of world, soul, and God. At least within his theoretical
philosophy, Kant ceased to conceive these concepts as designating real objects,
instead treating them as ideas of reason, that is, directive concepts
that we might paraphrase as “as if” (als ob, in Hans Vaihinger’s
metaphor). Such ideas, generated by the very structure of reason, are a priori;
however, their function is not to represent objects but rather to orient our
inferential processes “as if” such objects could be designated.
Thus, we must proceed
intellectually “as if” the external world were a closed causal totality, in
order to continue pursuing our knowledge of causal chains; we must proceed “as
if” there were a simple permanent object (the soul), so as to be able to pursue
a unified understanding of our psychic phenomena; and we must proceed “as if”
there existed an intelligent creator (God) of all nature, both external and
internal, conceived as an intelligible system, so as to deepen our knowledge of
the external and internal world as a totality.
As a consequence of this reformulation of the
concepts of nature, soul, and God as directive a priori ideas, their functions
were relocated within the conceptual framework of Kant’s philosophy. In this
new context, the concept of God, for example, no longer needed nor could be
regarded as that of an existing entity, fulfilling the same functions that,
say, the omnipotent and truthful God had in Descartes’ pre-critical philosophy,
or the role that Kant made Him assume once again in the Critique of
Practical Reason as a supposedly real entity grounding morality.[28]
6. THE RESISTANT NUCLEUS OF RESIDUAL PHILOSOPHICAL PROBLEMS
As a result of the development of the basic sciences, we can find a kind
of fission within philosophy. On the one hand, there remains a core of
resilient philosophical inquiries that forms the center of gravity of traditional
philosophy, like metaphysics, epistemology, and ethics. On the other hand, we
have the emergence of the philosophies of the basic sciences as second-order
investigations, taking these sciences themselves as their objects, arising only
after those sciences had developed. To see the differences: philosophy of
physics asks about space, time, quantum theory, and physical laws; philosophy
of chemistry examines the nature of substances and whether emergent chemical
properties are reducible to physics; philosophy of biology considers what
defines life and species; philosophy of psychology addresses the ontology of
mental states and their representational role; and philosophy of the social
sciences explores the nature of society, social ontology, and social laws...
Philosophy of science in general,
by contrast, should remain a meta-level reflection on science as a whole, while
the philosophies of particular sciences are domain-specific. Although there has
historically been a strong reductionist tendency that privileges the philosophy
of physics, it is advisable to avoid it. In this chapter, I will thematize the
most general level, asking what science in general is, independently of its
many branches, since this seems to provide the right contrast between
philosophy in general and the anticipation of science in general.
Let us return to the most resilient
set of philosophical inquiries, its historical center of gravity, the one that
most demands our attention and that fits better with our working hypothesis of
a triadic dimension of philosophy. This center resides in the disciplines
traditionally regarded as the most central, significant, and difficult. They
can be divided (if one wishes) into theoretical and practical
domains with a variety of disciplines. The theoretical disciplines concern the input
of the world into our minds, while the practical disciplines address the output
of our minds into the world. The most general theoretical disciplines are metaphysics,
concerning the most general kinds of things and their internal relations
(properties, particulars, existence, number, causality, space, time, identity,
part and whole...) and epistemology (concerning the concepts of knowledge,
truth, belief, and justification, along with their internal relations and the forms
of knowledge). Finally, there are practical philosophies such as the philosophy
of action, ethics, philosophy of art, of culture, of history, of politics, that
is, those domains related to the mind's output upon the world. Historically,
ethics, from Aristotle to Kant and Derek Parfit, has been the most discussed
and difficult branch of practical philosophy, due to its complexity and aporeticity.
Perhaps equally relevant is political philosophy, which ranges from Plato and
Hobbes to Marx and John Rawls, yet still falls short of definitive conclusions.
Summarizing: the central domains of philosophy are, at least from a historical
viewpoint, metaphysics, epistemology, ethics, and political philosophy. These
core domains have thus far resisted assimilation into science, and it is
crucial to recognize their peculiarity. They occupy neither the same
theoretical level of basic sciences (or those derived from them) nor that of
the philosophies of science.
What is most striking about
disciplines such as metaphysics and epistemology is their maximal range of
applications. They involve many, if not all, objects of experience, both
external and internal, thereby traversing the objects of inquiry of all
the basic sciences. Consider the objects of metaphysics such as properties, space
and time, existence, causality, number... All objects of physics, chemistry,
biology, psychology, and sociology also possess properties, exist in space and
time, follow causal laws, can be enumerated...
In the case of epistemology, its
range of application is likewise remarkable, for its questions do not concern
this or that specific form of knowledge, as we can see in the philosophies of
the sciences, but rather knowledge in general, including our modest (Moorean)
common sense[29],
such as my knowledge that I am now seated and that I am writing. The concept of
knowledge is closely associated with that of truth, as well as with belief,
justification, and reason.
Given the difficulty and
significance of these domains of inquiry, the question of what constitutes the
nature of philosophy may, at this point, be replaced by another, no less important:
what is the proper nature of philosophy’s central disciplines?
The
most serious issue concerning the idea of philosophy as a precursor to science does
not lie in the indisputable fact that science emerged from philosophy, but
rather in the scope of that derivation. It is possible that the remaining set
of philosophical inquiries, or at least part of it, belongs essentially to philosophy,
resisting its transformation into science. Or is it the case that everything
that is centrally philosophical may, in principle, eventually become science?
Philosophers diverge on this matter. Some,
such as Keith Lehrer, have advanced the progressive hypothesis that philosophy
is “merely the collective name for the pot of problems not yet touched by
science.”[30] For him, the fact that some philosophical questions
must wait more than two millennia before receiving a scientific answer does not
imply that such an answer will never be found.
Others, however, adopt a more reserved stance.
Anthony Kenny, for example, argued in his book on Aquinas’s philosophy of mind
for a conservative hypothesis. According to him, even though philosophy may
have, in its past, handed over parts of itself to science, those parts were not
genuinely philosophical. Only the remaining and clearly central philosophical domains
are genuinely philosophical. For Kenny, these domains include epistemology, metaphysics,
ethics, and the theory of meaning. Such domains, he contended, will remain
philosophical forever.[31]
In attempting to justify this claim, Kenny, drawing on Wittgenstein’s
notion of panoramic representation, suggested that philosophy, unlike
the particular sciences, concerns itself with our knowledge as a whole. Accordingly,
its aim is to organize what we already know to provide a synopsis, that is, a
vision of our knowledge in its entirety. This purpose endows philosophy with a
kind of comprehensiveness not found in any particular science. Such comprehensiveness,
Kenny argued, is the reason why Aquinas’s philosophy of mind remains in many
respects relevant:
Philosophy is so
comprehensive in its object of investigation, so expansive in its field of
operation, that the construction of a systematic philosophical synopsis of
human knowledge is so difficult that only a genius could accomplish it. So vast
is philosophy that only a truly exceptional mind can discern the consequences
even of the simplest philosophical arguments and conclusions.[32]
According
to Kenny, the comprehensiveness of the philosophical task calls for the figure
of the “philosophical genius”, a figure not only difficult to identify but also
prone to mystification. If we apply our assumptions about the nature of
philosophy to this case, the philosopher should combine the intellect of a
scientist, the sensitivity of an artist, and the visionary insight of a
prophet, a pattern indeed visible from Plato to Hegel. In practice, however,
this genius seems to consist less in any isolated skill than in what a Kantian
might describe as the harmonious integration of the faculties, since philosophy
lacks a specific domain of its own.
Consider Kant: his work exemplifies a kind
of thought that depends above all on prolonged and largely unconscious
ruminative labor, where the philosopher critically selects, from among
countless inadequate ideas, those few that prove fruitful when articulated
within broader domains of knowledge. Philosophy, in this sense, is a long,
independent, and generally unconscious process.
Nietzsche recognized this dynamic. He
described the so‑called inspiration of genius as the sudden release of
an unconscious accumulation of ideas, which unexpectedly find a way to connect
and flow forth, as though the floodgates of an intellectual reservoir had been
opened.[33] A
striking example lies outside philosophy proper: Einstein’s breakthrough in
1905, when in conversation with a friend, he realized that time need not be
conceived as “absolute, flowing uniformly and independently of any external factors,”
as Newton had proposed. What appeared as inspiration was in fact the
culmination of years of submerged reflection, suddenly crystallized into
insight.[34]
Obviously, minimally favorable external and
internal conditions must be present for such epiphanies to occur. I recall a
commercial that displayed a photograph of Einstein accompanied by the question:
“What did he have that we do not? Answer: the program!” Yet it is worth noting
that, armed with his program, Einstein made no further major discoveries during
the last forty years of his life. This observation serves as a complementary
consideration: inspiration and genius may ignite breakthroughs, but they do not
guarantee a sustained sequence of discoveries. The unconscious accumulation of
ideas, once crystallized into insight, requires not only favorable conditions
but also a continual openness to new problems and perspectives. Without this,
even the greatest intellect risks becoming captive to its own program.
But I am digressing. Returning to our
central question of how far philosophy can give way to science, I shall argue
in favor of the progressive hypothesis that philosophy can be seen as an anticipation
of science, at least concerning the central problems. At the same time, I
consider this position compatible with the idea of panoramic representation, even
with the notion of a scientific panoramic representation, though not in the
usual reductionist way of understanding what science is.
7. OUR GENERAL IDEA OF SCIENCE
My suggestion that our central philosophical questions may ultimately be
absorbed by science can be rendered plausible insofar as the reasons advanced
by philosophers for rejecting it can be removed.
There are two complementary
reasons why philosophers like Kenny have come to reject the idea that philosophy’s
central domains anticipate science.[35]
The first is that, when they think of science, they have in mind primarily the
well‑established experimental sciences of nature. In this context, they
consider not only the methodological limitations of disciplines such as
physics, but also their far more direct empirical character. To accept the
progressive thesis concerning the nature of philosophy seems to commit us to an
impoverished and reductive conception of the core of the remaining
philosophical problems – a conception that appears to deprive philosophy of
much of its breadth and relevance by leveling its problems with those of the
natural sciences. To agree with the progressive hypothesis thus seems to leave
us with nothing but a pedestrian form of scientism, intrinsically narrow and
hostile to the breadth and abstraction to which genuine philosophizing most
properly belongs.
The second reason for
disregarding the progressive hypothesis lies in the implicit adoption of conceptions
of the nature of science that profoundly shaped the twentieth century, such as Logical
Positivism and its cultural influence. Philosophers of science were only able
to construct interesting and detailed theories insofar as they took the most
developed sciences as their point of reference. Yet, since not all scientific
domains are at advanced stages, and some have not even emerged, it became
common for these philosophers to select the natural sciences, especially mathematical
physics, as exemplary models.
This procedure may be fruitful
when applied to those consolidated sciences considered in themselves.
Nevertheless, when the results are interpreted as representative of science in
general, or as yielding a general criterion for demarcating what belongs to
science, valid for all past and future candidates, the consequence is a narrow
and restrictive conception of the boundaries of science. This is evident even
in domains of basic natural science, such as biology, as illustrated by Popper’s
criterion of scientificity, grounded in the falsifiability of our theories
through decisive experiments.[36]
That criterion may reasonably apply to physics, Popper's model of science: the
measurement of the deflection of starlight by the curvature of spacetime during
a solar eclipse, the crucial experiment that confirmed the theory of general
relativity, was an example he often recalled. However, when applied
to other areas of science, the same criterion proves excessively exclusionary.
It does not apply to psychological or socio‑historical theories. It even
excludes the biological theory of Evolution – a theory whose scientific status
no one today would dare to deny. After all, what kind of experiment could
falsify a theory that explains a myriad of processes extending over millions of
years in the past? And even if it can be tested indirectly, failure to pass
such a test would hardly be interpreted as a decisive refutation.[37]
Karl Popper was right to
emphasize that his methodology was not meant as a description of what people, including
scientists, actually recognize as science, but rather as a proposal: a
rationally grounded suggestion. Yet, when applied broadly to all forms of inquiry,
it can seem overly narrow and somewhat artificial. The most natural way to distinguish
philosophy from science lies in the contrast between conjectural thought,
proper to philosophy, where no consensus on results is possible, and the
non‑conjectural enterprise of science, where truth or falsity can be
established and progress achieved. Moreover, the conception of science as a
non‑conjectural pursuit that produces truth aligns closely with what scientists
and educated individuals typically mean by the word science.
Indeed, when judging whether a
theory belongs to the domain of science, we do not ask, in the first place,
whether it can be subjected to empirical confirmation or disconfirmation
(although this aspect, as we shall see, also has its relevance). What we ask
first is whether the scientific community is, in principle, capable of reaching
interpersonal agreement on what it considers to be the truth or falsity of its
results, even if such agreement may often not arise from a form of verification
(or resistance to falsification) through empirical tests. The possibility of
obtaining consensual results among scientists is a more general and decisive
criterion for deserving to be called science than are the specific methods by
which such agreements may in fact be achieved.[38]
The consequence of adopting such a restrictive model of scientificity is that
the philosopher can no longer admit that philosophy functions as an anticipation
of science. After all, it is evident that the central nuclei of philosophical
inquiry, by their very nature, will never become capable of accommodating the
demands imposed by models of this kind.
Nevertheless, the two already
mentioned reasons for rejecting a generalization of the hypothesis that
philosophy, even in its central domains, might anticipate science as a kind of
proto‑science do not apply here. For in affirming that philosophy performs an
anticipatory role in relation to science, we are not bound to restrict the
meaning of the word ‘science’ to the already established particular sciences.
Nor are we compelled to adopt the prescriptions accepted by the heirs of
logical positivism regarding how science ought to be understood.
The notion that the scientific
enterprise might be defined on the basis of its capacity to generate consensus
struck me as too plausible to have gone unnoticed. After all, original ideas in
philosophy are generally either false or have already been conceived at some
point. Upon consulting the literature, I found support for a similar perspective
in John Ziman's work, a physicist and sociologist of science. As early as the
1960s, Ziman emphasized the centrality of this idea, arguing that the unifying
principle of science, in all its aspects, rests “on the recognition that
scientific knowledge must be public and capable of achieving consensus.”[39]
As he wrote:
The aim of science is not merely to acquire information or to state
indisputable postulates; its goal is to reach a consensus of rational
opinion that covers the widest possible field....[40]
This idea may be understood as the most general identificatory criterion
of science, namely:
TRULY CONSENSUALIZABLE PUBLIC KNOWLEDGE
It is a form of public knowledge that is, at least in principle, capable
of achieving agreement among peers regarding its results—something that, as we
will see, does not in fact occur in pseudoscience or in philosophy.
One advantage of admitting such
a criterion is that it frees us from a strict commitment to specific models of
scientificity directly derived from some well‑established basic science or from
any already existing science. By adopting an open concept of the nature of
science as a counterpoint to philosophical conjecture, we avoid the risk of
interpreting it through the lens of positivist scientism.
In what follows, I shall deepen the general
conception of science preliminarily outlined by Ziman. Unlike philosophers such
as Karl Popper, Imre Lakatos, and others, who devoted themselves to the problem
of demarcating science from non‑science, I will not advance a normative
proposal: my approach will be entirely descriptivist. My aim is to recover the
generality of the technical, academic, and cultivated sense of the word ‘science’
by making explicit the principal criteria by which scientifically educated individuals
recognize it. This is, therefore, a procedure parallel to that adopted by the
descriptivist in metaphilosophy (Chapter I).
Indeed, if a descriptivist
approach leads us to the idea that philosophy may be regarded as a
proto‑science in the sense of being unable to generate consensus, then, by
parity of reasoning, the “science” of which philosophy would be “proto‑” must
likewise be treated within a descriptivist framework. This approach accords
with the premise that philosophy, by contrast, constitutes an inquiry that, in
principle, is incapable of achieving genuine consensus regarding its results at
the time they are produced.
In fact, not only have the central domains of
philosophy, such as metaphysics, epistemology, and ethics, historically fallen
far short of the possibility of reaching consensus. Non‑central areas, like the
philosophies of science, and peripheral areas, such as the philosophy of medicine,
of computing, of cinema, and of sport, are designated as philosophical
precisely because of the absence of agreement among their factions.
What is thereby suggested is
that a descriptivist account of science provides the most coherent way of
conceiving the contrast between philosophy and science within a
metaphilosophical approach that is itself descriptivist. Only after we have
explored this conception of science in greater depth will we be able to see whether
the characterization of philosophy as an anticipation of science has any
restrictive implications.
8. FOR A NON-RESTRICTIVE CONCEPTION OF SCIENCE
My aim here will not be to develop a fully descriptivist
characterization of science in general, based on an analysis of the demarcation
criteria actually employed by scientists, but rather to render its foundations
accessible. The intention is to make sufficiently explicit, for the purpose of
contrasting science and philosophy, a conception of the nature of science that
may be termed consensual‑objectivist‑progressivist. According to this
conception, the unifying principle of all science is that it consists in an
evaluative inquiry into objective truths, enabling progress through the
attainment of authentic consensual agreements among members of the scientific
community regarding the results of such evaluations. To explain this idea
in greater depth and to explore its implications, we may identify three
conditions of scientificity, namely:
(i) PROGRESSIVITY,
(ii) CONSENSUALIZABILITY, and
(iii) OBJECTIVITY,
so that, as we shall see, condition (i) presupposes (ii), which
presupposes (iii). These conditions are so comprehensive that they can be
considered applicable to all sciences, both empirical and formal.
With regard to condition (i), that
of progressivity, it stipulates that, during its period of development, a
science must behave as a progressive enterprise. This means that its theories,
once proposed, should prove capable of being refined or replaced by others with
greater explanatory power, or else reinforced by new ideas and theories that,
in some way, enhance the explanatory capacity of the whole. Moreover, this
condition implies that, in the course of its development, a science must be
cumulative in its knowledge, in the sense of enabling the community of ideas to
recognize the truth of an increasing number of propositions. This condition of progressivity
may be formulated as follows:
C1: Science is an epistemic endeavour capable of revealing itself as progressive
in the sense of enlarging the body of truths established by its theoretical approaches.
The condition applies primarily to science as a whole, conceived as a structured
and interconnected ensemble of particular sciences. But it can also be applied
to any particular science, empirical (natural and human) or formal (logical and
mathematical), which themselves can be constituted by subfields and more or
less interrelated clusters of theories. In the empirical sciences, we expect
progressive development even amid paradigm shifts, and in the formal sciences,
we have an increasing number of theorems proven.
Condition (ii) is central and often undervalued. It concerns the possible
consensualizability noted by Ziman. It should be considered that condition
C1 presupposes the satisfaction of C2. The latter, in turn,
applies primarily to theories, hypotheses, and systems of hypotheses that
aspire to scientific status insofar as they are, at least in principle,
susceptible to consensual verification. Derivatively, this condition also applies
to whole bodies of scientific knowledge that build any particular science. The
condition of consensualizability may thus be formulated as follows:
C2: Science is an epistemic endeavour through which, at least in
principle, it is possible to reach a legitimate consensual agreement on the
truth or falsity of its theories; an agreement to be rationally reached by the
critical community of ideas that proposes them.
A proper analysis of the concept of a critical community of ideas
introduced in C2 is required. This concept enables us to determine who
is legitimately entitled to evaluate purportedly scientific ideas and how
such evaluation is possible. There are compelling reasons to include this concept,
since science is inevitably a corporate enterprise and scientific research is a
social activity.
So, for instance, if there are individuals who do not believe that the
theory of natural evolution has received sufficient confirmation, this does not
invalidate the belief that a scientific consensus regarding the truth of this
theory may exist, given that such a consensus does, in fact, exist. Likewise,
if a totalitarian government labels a spurious ideology as science and imposes
a compulsory consensus on the scientific community (as occurred in the Soviet
Union with Lysenkoist genetics), we would not conclude that the ideology is
genuinely scientific. Nor do we believe that a community of ideas grounding its
truths in the authority of sacred scriptures or in the visions of crystal gazers
is operating as a scientific community. Even if agreement exists among its
members, such agreement would be regarded as arbitrary and not rationally
grounded.
The
concept of a critical community of ideas is fundamental to justifying such
conclusions; without it, the consensualizability of results, and with it the
scientific enterprise itself, would inevitably be compromised. The requirement that consensus be
established by a critical community of ideas must serve to ensure the legitimacy
or authenticity of consensus, since spurious consensuses are also possible
outside the scientific domain—for example, among astrologers eager for
approval. Conditions of this kind were anticipated by sociologists of science, such as
R. K. Merton, and most notably by the philosopher Jürgen Habermas. I wish first
to consider them.
For Merton[41],
science cannot exist without social collaboration. Accordingly, it must adhere
to four fundamental principles that constitute its ethos. Science must
be: (1) universalist, in the sense of being open to all who can
contribute to its development: “race, nationality, religion, class, and
personal qualities are irrelevant. Objectivity excludes any form of
particularism.”[42]
Science must be (2) communist, in the sense of being the common property
of society, with its results not restricted to individuals or groups. It must
be (3) disinterested, in the sense of being pursued by individuals who
seek to contribute to the common good rather than personal gain. Finally, it
must exhibit (4) organized skepticism, in the sense that all scientific
claims must be critically examined in a neutral manner, even at the cost of
limiting the scope of scientific activity.
The
conditions established by Merton aimed merely to inventory the social ethos of
science. Nevertheless, as we shall see, they also contribute to justifying the
legitimacy of scientific consensus.
An
analysis explicitly intended to confer legitimacy upon consensus was advanced
by Jürgen Habermas in his consensual theory of truth.[43] His
proposal, which strikes at the heart of the matter, was that the determination
of what counts as truth must rest upon a discourse (Diskurs) conducted
under the presupposition of an ideal speech situation (ideale
Sprachsituation). To the preceding conditions, I now add Habermas’s requirements,
setting aside possible overlaps:
(5) Unrestricted Access to Discourse:
All participants must have the right to take part in dialogue. No one may be
arbitrarily excluded.
(6) Equality of Opportunities for
Expression: Everyone must have the same chance to present claims, ask
questions, raise objections, and articulate needs or desires.
(7) Freedom of Expression: Participants
must be able to express themselves without external coercion, that is, without
fear of punishment, manipulation, social pressure, or rhetorical artifices.
(8) Truthfulness: Interlocutors must be
sincere in their intentions, guided by truth-oriented purposes—that is, by
intentions aimed at seeking the truth. Lies or manipulations undermine the
validity of discourse.[44]
(9) Comprehensibility: The language employed
must be clear and understandable to all those involved.
(10) Rational
Justifiability: Assertions must be capable of rational justification and
remain permanently open to critique.
Above all, for Habermas, what must prevail is what
he called “the unforced force of the better argument,” rather than any
argument from authority. Although this set of conditions may not be sufficient
to guarantee truth, it is, nonetheless, to a sufficient degree, necessary:
truth can only emerge from a consensus achieved through a discourse free of
coercion, in which participants seek mutual understanding on the basis of the
force of the better argument, and not through the imposition of power.
Habermas’s
theory was not conceived to test the requirements of science, but rather to
evaluate the claim to truth in general. Nevertheless, when we restrict
ourselves to the scientific domain, two further conditions may still be invoked:
(11) Competence: all participants
should be equally well trained and informed about the topics to be discussed.
(12) Transparency:
all participants should have the right to receive all available information.
What I have called a critical community of
ideas is nothing more than a society of ideas that sufficiently satisfies
all twelve conditions. I say “sufficiently” because, when we consider the
concrete practice of science, we observe that it invariably fails to fulfill
them in their entirety. Nevertheless, herein lies the danger: if these
conditions are not met to a sufficient degree, it is certain that science, as a
collective enterprise, will become profoundly flawed, if not altogether
impossible.
Our
question, then, is whether these twelve conditions (all of them quite
reasonable) are sufficient to guarantee the legitimacy of scientific consensus.
Consider the dialectical pseudoscience practiced by Trofim Lysenko in Stalin’s
Russia. Lysenko was a charlatan who rejected classical genetics and advocated
the inheritance of acquired characteristics in plants, along with useless methods
such as subjecting seeds to cold in order to force growth. Stalin believed blindly
in Lysenko, and his government persecuted anyone who dared to disagree. The
results were repeated failures, always justified by factors extraneous to his pseudoscience.
We may
assert that, in Stalin’s Russia, the conditions for genuine consensus were
absent, since the prerequisites identified by Habermas – (5) unrestricted
access to discourse, (6) equality of opportunities for expression, (7) freedom
of expression (above all), and (8) truthfulness—were not fulfilled. Likewise,
the more general conditions outlined by Merton were, in part, disregarded. The
requirements of (1) universalism, (3) disinterestedness, and (4) organized skepticism
were lacking. Even condition (11), the sufficient competence of participants,
was clearly unmet, as a direct consequence of the failure to satisfy the
preceding conditions.
A
very similar phenomenon occurred with the so‑called “Aryan physics” promoted
under Nazi‑fascist totalitarianism, which rejected the contributions of Jewish
scientists such as Einstein and Niels Bohr. Its proponents sought to replace
“Jewish physics” with “Aryan physics,” dismissing both relativity theory and
quantum mechanics. Here, above all, Merton’s condition (1), universalism, was
violated, since collaboration from scientists of Jewish origin was
categorically excluded, together with Habermas’ conditions (5) and (6).
By
comparison, practices such as card reading, crystal ball gazing, or astrology
likewise fail to satisfy several of the aforementioned conditions. It is
virtually impossible for them to meet condition (4), organized skepticism, or
condition (10), openness to criticism. This is easily demonstrated. Let us
consider astrology alone. From the standpoint of physics, astrology is absurd.
Carl Sagan observed that the gravitational force exerted by the obstetrician’s
abdomen on a newborn at the moment of birth is greater than that of the Moon at
the same instant.
On
the methodological plane, Karl Popper highlighted a recurrent stratagem in
astrology: the reliance on vagueness. If predictions are sufficiently imprecise,
even apparent failures can be reinterpreted by the astrologer, rendering them
unfalsifiable.
James
Randi, the professional magician who dedicated himself to exposing pseudoscientific
frauds and who offered a one‑million‑dollar prize to anyone able to demonstrate
the existence of paranormal forces or similar phenomena, was never able to
award the prize to any claimant. According to Randi, while some individuals
were indeed charlatans, most genuinely believed in their alleged paranormal
powers. In a well‑known experiment, Randi distributed sheets of paper to a
class of students containing astrological predictions based on their date and
time of birth. The vast majority judged the predictions to be sufficiently accurate.
Yet when he asked them to exchange their sheets with those of the classmates
behind them, the surprise was immediate: all the predictions were identical.
This
experiment not only exposes the futility of astrology but also illustrates the
power of suggestion and self‑deception in the human mind.
It
thus appears that, by bringing together the twelve conditions considered thus
far, we are able to establish a sufficiently robust distinction between
legitimate and illegitimate consensus. As has already been noted, it is
important to emphasize that these conditions constitute an ideal
constellation that no scientific community ever fully satisfies. Nevertheless,
they must be met to a sufficient degree, since no scientific community can
achieve reliability without at least minimal compliance.
Indeed,
when we accept a scientific discovery as true, for instance, a breakthrough in
medicine, we must all presuppose that such criteria are being adequately
fulfilled: that scientists are honest, that they are not under pressure to
manipulate data, among other requirements. Hence, the importance of
experimental replication by independent laboratories. This was precisely the
case with Dolly the sheep, the first mammal successfully cloned from an adult
cell. At first, other laboratories were unable to reproduce the demanding
cloning experiment. It took two years for this practical difficulty to be fully
resolved.
Moreover, the scientist engaged in research
must conduct their work under the constant assumption that, at some point, the
results will be evaluated by a critical community of ideas, capable of applying
criteria that ensure their consensual legitimacy. This assumption should guide
a continuous process of self‑evaluation of what is being produced, even if such
external evaluation comes only belatedly – as in the case of Gregor Mendel – or
never occurs at all, given that the outcome of sound research may be lost like
a flower blooming in the desert, never to be seen. Conceived in this way,
condition C2, of legitimate consensual agreement regarding results, becomes the
central requirement for accepting a theory as belonging to the domain of
science.
Agreement on the truth or falsity of
theories within a critical community of ideas requires a third condition of scientificity
– one that acknowledges a debt to more traditional philosophy of science. As
noted earlier, consensual agreement on truth among members of such a community
is possible only if there is prior agreement on the assumptions underlying the
criteria and methods for evaluating scientific truth. Thus, the fulfillment of
condition C2 presupposes the satisfaction of condition C3: a material
requirement that the critical community must meet in order to be considered
scientific. This is what can be called the condition of objectivity,
which may be formulated as follows.
C3: The critical community of ideas responsible for scientific inquiry
must be grounded in a prior consensual agreement regarding what counts as foundational
assumptions and the methodologies that enable the intersubjective
evaluation of the theories developed within it. The existence of a previously legitimized
consensus on these assumptions confers objectivity upon scientific discourse.
Agreement on the truth or falsity of
theories requires, within a critical community of ideas, a prior consensual
agreement on foundational assumptions that confer objectivity on
scientific discourse. Without attempting an exhaustive clarification, and
taking epistemic domain to mean the set of entities regarded as foundational
within a given field of scientific knowledge, we propose that a critical community
must reach a prior agreement regarding the following foundational assumptions
in order to confer scientific objectivity on any epistemic domain:
(i) Elementary data or axioms: Assumptions about what counts as elementary data
within the epistemic domain (e.g., sensory data in empirical sciences, or axioms
in formal systems).
(ii) Methodological procedures: Assumptions concerning valid methods for
evaluating the truth of a theory, including explanatory and predictive power,
which should imply some form of correspondence with reality.
(iii) Properly formulated questions: Assumptions about what qualifies as
legitimate questions or problems within the epistemic domain, ensuring that
theories address relevant and meaningful issues.
(iv) Properly constructed theories: Assumptions regarding the criteria of
internal consistency of theories, as well as their external alignment with
established knowledge.
These
assumptions must be conceived as encompassing the broadest possible spectrum,
though their specific content will inevitably vary according to the epistemic
domain to which they belong.
Assumption (i) is associated with the issue
of generality of scientific theories; assumption (ii) with explanatory and/or predictive
power of scientific theories; assumption (iii) with the adequacy of the
questions formulated; and assumption (iv) with the coherence and sound entrenchment
of scientific views.
The
admission of such foundations of scientific objectivity makes it possible to
establish a bridge between two conceptions of science: on the one hand, science
as a form of knowledge subject to legitimate public consensus achieved by a
critical community of ideas that we have considered beforehand; on the other,
the traditional conception of the scientific method in the empirical sciences,
understood as inductive–deductive or hypothetico–deductive, which abstracts the
social character of science.
Are such
associations inevitable? Could there be a legitimate consensual agreement
without such conditions of objectivity being satisfied, for instance, by the
supposedly critical community of astrologers, crystal-ball seers, or tea-leaf
readers? I think not. It is indispensable that the foundational assumptions
constitutive of the condition of objectivity be fulfilled in order for a
critical community to achieve legitimate consensus. It is necessary, for
example, that a theory possess a confirmed predictive (or demonstrative) power,
which is encompassed by assumption (ii).
But the skeptic will ask: what guarantees
that it must be so? The answer is that this question appears problematic only
to the skeptic, who expects an a priori solution, a logical or necessary
guarantee, which, in fact, does not exist. What is at stake here is an
empirical and experiential matter. Experience has shown us, again and again,
that legitimate consensus can only be formed when the conditions of objectivity
are satisfied.
The necessity of admitting conditions of
objectivity, and of demonstrating their applicability, is an inescapable
experiential truth—one that critical communities of ideas have been compelled
to learn in order to constitute themselves. Human beings have simply observed,
perhaps reluctantly, that legitimate consensus can only be achieved when such
conditions are satisfied.
A definition of science that fails to
recognize these experiential conditions of objectivity, which in their contents
will materially vary from one scientific domain to another, from astrophysics
to social history, would be destined to fall into dogmatism.
One could object that the discovery of data
and methodology within an epistemic domain is already the result of theory‑laden
conventional agreement; as a consequence, we would have circularity: condition (C2)
of possible consensual agreement demands the satisfaction of condition (C3),
which in turn demands the satisfaction of (C2). The answer is that there is no
circularity, since the conventional agreements delimiting our search for grounding
data are not at the same level as the consensual results they make possible. Moreover, there can be a dynamic
interplay between C3 and C2: objectivity evolves with consensus, and consensus
evolves with objectivity. We can even
undergo epistemic shifts that change what counts as data, procedures, and theoretical
results. But this does not matter, provided that continuity of inquiry is
preserved, even if the boundaries of the field are redefined.
What I have just presented may be termed a
progressivist–consensualist–objectivist definition of the scientific enterprise
in general. Understood in this way, the conditions of progressivity,
consensuality, and objectivity constitute a sufficiently reliable descriptivist
criterion for distinguishing between science (whether empirical or even formal)
and non-science, as well as for identifying what cannot be considered
scientific, regardless of its nature. Summarizing, in its broadest sense,
science is a collective pursuit of truth, potentially progressive in its
discoveries, consensualizable in its judgements, and objective in its foundations.
In light of this view of science in its broadest sense, let us now examine what occurs when we compare this general definition of science with our characterization of the philosophical enterprise.
9. WHY CONCEIVE OF PHILOSOPHY AS A PROTOSCIENTIFIC ENDEAVOUR?
The point to emphasize is that the
consensualist conception of science just outlined places it in direct contrast
to philosophy. Unlike science, philosophy is neither progressivist, nor
consensualist, nor objectivist. Nevertheless, both share a common feature: both
need the appeal to a critical community of ideas, though this requires some
qualifications.
In
both philosophy and science, a critical community of ideas must be presupposed –
even if only counterfactually. Hegel, for instance, likely secured his position
by presenting the Prussian state as the embodiment of reason. Schopenhauer, by
contrast, remained largely ignored until the age of sixty-three, when the
popular success of Parerga und Paralipomena finally brought him
recognition. Nietzsche, unlike both, never achieved acceptance during his lifetime,
yet he consistently wrote with the expectation of a future community of readers
capable of grasping the scope of his thought. In each case, the philosopher
presupposed – counterfactually – a community of ideas that could evaluate their
work. Importantly, what they sought was not validation of their philosophy as true
in the scientific sense, but rather acknowledgment that their writings were worthwhile
as philosophical contributions.
One condition that the philosophical community of ideas demands is that philosophers possess competence for their activities. Since this competence cannot be the same as that of scientists, it deserves some comment. One requirement may be familiarity with the development of science, at least in its principles and in proportion to its bearing on the area of philosophy under investigation, insofar as such a bearing exists. A philosophical view cannot be admitted when it contradicts what its time regards as well‑established scientific truth. Beyond this,
philosophical competence resides in mastery of a tradition of critical
discussion. This mastery may be limited: Hume, for example, was practically
confined to the English tradition in which he was situated, as he knew little
of the Greek and medieval tradition. Wittgenstein knew only what he learned
alongside Russell and what he heard in Vienna and Cambridge, and he responded
to it critically with extraordinary originality. But in some cases, the mastery
of the tradition was considerable: Kant, for instance, was deeply familiar with
Descartes, Locke, Leibniz, Hume, and Spinoza, and he also engaged with the
Greek tradition (Plato and Aristotle) and some scholastic medieval philosophy.
His Critique of Pure Reason is often read as a synthesis and response to these
traditions. Ideally, mastery of the tradition should be as broad as possible,
at least in what pertains to the domain or subdomain under consideration, as
was the case with Aristotle and Kant.[45]
Since Plato, to do philosophy has meant aligning oneself, even if critically,
with a tradition.
A characteristic of the critical community of
ideas in academic philosophy is that, even though it cannot directly access truth as science does, it is at least able to identify which views have been rendered improbable or shown to be clearly false. In the long run, the exclusion of
the most implausible views is one of the few achievements of which the
philosophical community can boast. Moreover, philosophers are presumed to seek
truth and are willing (even if reluctantly) to submit their philosophical theories
to the free critical scrutiny of other thinkers, equally or more competent, in
an effort to satisfy conditions (i) to (iv) from C3, which gives their theories
a minimum of objectivity. Finally, it is expected that the philosophical
community will at least satisfy enough of the twelve conditions of consensual
legitimacy (C2) listed above, even if it remains incapable of achieving sufficient
consensualizability (C1) on any matter.
As
has already been observed, the critical community of ideas may, in science, and
certainly also, to some extent, in philosophy, suffer from limitations,
distortions, and pathologies. A classic example in philosophy was the religious
coercion of the medieval period: condition (7), concerning freedom, was not
satisfied by anything that might in any way conflict with religious dogmas. At
present, Anglophone analytic philosophy faces limitations, such as scholasticism,
scientism, hermeticism, fragmentation, and hyper-specialization
– features Susan Haack identifies as symptoms of a dysfunctional academic
community.
Scientism
is a worldview that regards science as the sole path to truth. Scientism leads
to scientificism, the overconfidence in formalist theorizations or in empiricist
research.[46]
The former is too often found among contemporary analytic philosophers. Symbolic logic, which in the time of Frege and Russell was used to sharpen our understanding of the world, is now often used beyond its limits, blurring our views instead. As Kevin Mulligan, Peter Simons, and Barry Smith noted in a well‑known article, even if there can be important formalist philosophical works, it is all too common for contemporary analytic philosophers to take refuge in hermetic formalism rather than engage with the confusing and complex nature of the real world.
As they wrote:
F(a)ntological philosophy triumphs, because
elegantly structured possible worlds are so much more pleasant places to
explore than the flesh and blood reality which surrounds us here on Earth... But
a philosophical tradition that
suffers from the vice of horror mundi in an endemic way is condemned to
futility.[47]
Scientificism is closely linked to
fragmentation and hyper‑specialization in philosophy, for in order to mimic,
philosophically, the procedures of a scientific domain, one must hypostatize
it, excluding anything that might call it into question. Scientificism is also close
to reductionism.
As
for overspecialization, in a world where knowledge expands far beyond our
capacity to assimilate it, specialization becomes a matter of intellectual
survival. The motto is “divide and conquer”, and in this regard reductionism
becomes the guiding principle. However, such a dysfunction risks obscuring the
conditions for the legitimacy of consensus from (1) to (12), which are more
appropriate to philosophical practice.[48]
It is important to note that, although we are
dealing with a limited critical community of ideas – one grounded in a
tradition of specialists in the field and in adjacent areas – the reflections of philosophers have failed to meet any of the three conditions of scientificity considered here: linear progress, consensus, and objectivity. This allows us to characterize philosophy in purely negative
terms, as a truth-seeking enterprise undertaken on the assumption of a critical
community of ideas in which such conditions remain unmet. The negative
conditions include, first:
NC1: Philosophy fails to satisfy the condition of progressivity C1, since it is not a progressive enterprise capable of a reasonably linear growth of knowledge.
Timothy Williamson rightly
defended an incremental view of philosophy, according to which it advances
through increasing argumentative rigor, gradual refinement, and the
accumulation of insights.[49] This is fairly evident. Beyond this, however, one can discern modest but substantive
progress: ideas once regarded as plausible have come to seem unpalatable or
archaic, while, conversely, notions previously dismissed as uninteresting or
implausible may gain renewed significance. The Timaeus, a theological-speculative
work composed in Plato’s later years, was the most influential dialogue in
Antiquity and the Middle Ages, for obvious mystical reasons. After the
Renaissance, however, the Republic was rediscovered as the most
important dialogue, owing to its rational and dialectical argumentation
concerning the central doctrines of the Platonic system.
Another example concerns Kant’s so‑called
Copernican revolution. In formulating it, he assumed that Euclidean geometry
and Newtonian physics represented absolute truths. On this basis, he believed
that by appealing to the synthetic a priori judgments that underpinned them, we
became legislators of the universe. In other words, the structure of reality
must conform to the conditions of our sensible intuition and understanding, as
if by divine miracle. Less than a century later, however, this vision began to
unravel. New geometries, such as the hyperbolic and the elliptic, were
developed, challenging the exclusivity of Euclidean geometry. Worse still, in
1915, Einstein reformulated the concept of gravitation with his general theory
of relativity, showing that in the vicinity of massive bodies, space‑time
follows a Riemannian elliptic geometry rather than a Euclidean one. Neither
Newton’s laws nor Euclidean geometry proved capable of accounting for the real
world with sufficient precision. As a result, much of the impetus behind the Copernican
revolution lost its force: we are no longer legislators of the universe, but
interpreters of a reality that often exceeds the natural frameworks of our understanding.
Advancement in philosophy differs profoundly
from the progress observed in fields such as biology. Rather than unfolding
through linear development, philosophical progress occurs by narrowing
possibilities and accumulating alternatives. Yet this advancement is almost
imperceptible: partial gains are often offset by setbacks, and what remains is
a long history of hypotheses, some occasionally correct, without certainty as
to which are valid or to what extent. At best, philosophy has succeeded in
eliminating overly implausible ideas. For once a hypothesis achieves certainty,
it ceases to be philosophy and becomes science. Bertrand Russell likened
philosophers to the “Pilgrim Fathers,” who continually moved westward, fleeing
civilization (here understood as science), which, once established, ends
philosophical labor by subjecting imagination to reason. Unlike the scientist,
the philosopher seeks to preserve a space for the free exercise of imagination,
resisting the closure imposed by certainty.
In philosophy, what accumulates positively
is a hypothetical content, in the sense that our philosophical conjectures
can be rendered more complex, increasing both in number and, at times, in
plausibility. Philosophy thus amasses an ever‑growing set of possible truths,
which tends to narrow the mesh of the speculative network across its various
domains.
The accumulation of hypotheses, though not necessarily of knowledge, that is characteristic of philosophy becomes readily apparent when we compare different philosophical theories of the past. Consider, for
instance, the systems of Kant and Hegel. Kant was a transcendental idealist and
empirical realist, concerned primarily with epistemological questions regarding
our cognitive structure and its limits. Hegel, by contrast, was an absolute
idealist, interested in a process philosophy centered on the historical
evolution of humanity and of moral, aesthetic, and religious cultures. Each
system appears to illuminate distinct speculative domains; each must contain
some truth, and together they are likely to contain more truths than in
isolation.
The
difficulty, however, is that we are not in a position to determine with
sufficient certainty where those truths lie, to what extent they hold, nor to
dismiss skeptical doubts about them, and even less to compare the systems in
any conclusive way. If we attempt to compare, for example, the philosophy of
Democritus with that of Parmenides, or Spinoza with Leibniz, we find ourselves
approaching the domain of incommensurability.
The reasons for
incommensurability are easily explained: one philosopher begins from the set of
premises (A) in order to arrive at (M) by the procedure (P); another
philosopher begins from the set of premises (B) in order to arrive at (N) by
the procedure (Q). Yet no one is in a position to compare either the value of
(A) and (B), or the value of the procedures (P) and (Q) by which the results (M)
and (N) are obtained. At least until the end of the nineteenth century, this description remained entirely appropriate.
Philosophy distinguishes itself
from science by its inability to satisfy conditions C1, C2, and C3. Condition
C1, that of being a progressive enterprise, has not been met by philosophy,
since it fails to satisfy its precondition, namely, consensualizability. Hence,
with respect to C2, the following applies to philosophy:
NC2: Philosophy fails to satisfy the condition of consensualizability
C2, since no agreement regarding the truth or falsity of its hypotheses can be
reached within its critical community of ideas.
The best that can occur is the acceptance of new philosophical views
for discussion. This is so because, in one way or another, the condition of
objectivity is not minimally satisfied:
NC3: Philosophy fails to satisfy the conditions of objectivity C3, since
the philosopher is unable, before the critical community of ideas, to establish
foundational presuppositions upon which consensus can be reached.
In fact, philosophers are unable to satisfy any of the four fundamental assumptions of scientific objectivity. They are unable:
(i) to reach consensus regarding what may be counted as elementary data within the epistemic domains of philosophy;
(ii) to reach consensus regarding what can be qualified as the right methodological procedures for evaluating the explanatory and/or predictive power of theories;
(iii) to reach consensus regarding properly formulated questions and problems within the epistemic domain;
(iv) to reach consensus regarding the proper construction of a theory in terms of its internal and external consistency.
Since, in terms of satisfaction, C1 depends on C2, C2 on C3, and C3 on
the presuppositions (i)–(iv), it becomes evident that, ultimately, philosophy
does not configure itself as science, for it is unable to sufficiently meet the
required conditions of objectivity.
Regarding the anticipation of the sciences, this means that philosophical views, at the time of their elaboration, were not intrinsically capable of satisfying the conditions imposed by scientific methods. After all, it is the conditions of
progressivity, consensuality, and objectivity that enable science to expand its
scientific horizon far beyond what previously seemed possible.
We therefore conclude that these three
conditions – progressivity, consensuality, and objectivity
– correspond exemplarily to the criteria we intuitively employ in
distinguishing what belongs to the domain of science from what remains confined
to the field of philosophy. The former satisfies them; the latter does not.
10. SOME CONSEQUENCES OF WHAT WAS PROPOSED
When philosophy is regarded as an enterprise that anticipates science,
the adoption of the general conception of science just outlined yields
noteworthy consequences.
First, since the proposed
criteria for defining what may be considered science leave open the concrete
ways in which an inquiry might come to be recognized as scientific, the very
identity of the investigation that will emerge from philosophical activity remains
open. In other words, the suggested criteria do not anticipate the specific
profile of any scientific field yet to arise. More importantly, they do not require
that future sciences, those destined to occupy the space presently dominated by
philosophy, bear any resemblance to the sciences already well established.
This, in itself, imposes a significant barrier to the grandest aspirations of
scientistic reductionism.
Even broad speculative theories,
such as Comte’s law of the three stages, Max Weber’s thesis on the
disenchantment of the world, Freud’s metapsychology, or Herbert Marcuse’s
thesis of repressive desublimation, may ultimately fall under this expanded
conception of science, insofar as our knowledge grows. For this to occur, it
would suffice that they be reinforced and even corrected by subsequent
discoveries, forming a body of information and methods that render them capable
of achieving consensus within a critical community of ideas.
At this point, it is worth
considering the concept of consilience (from con = together, saliens
= leap, meaning “joint leap”). This concept was introduced by William Whewell
in 1840 to designate the convergence of inductions drawn from diverse classes
of facts. It was revived in the twentieth century by E. O. Wilson, who
understood it as the synthesis of facts and theories from different
disciplines, aimed at producing a unified understanding of reality.[50]
In his book, Wilson demonstrates how the natural and human sciences are
interconnected in ways that mutually reinforce one another. Finally, Susan
Haack applied the concept of consilience to philosophy. According to her:
What I mean is that there is a real world, a “pluralistic universe”, to
borrow James's phrase, and that all the truths about this complex and varied
world somehow combine.[51]
We can hold the presupposition of a unity of reality, functioning here as a normative ideal: if we admit that reality is in some sense unified, then closely related scientific theories should be able to complement and mutually reinforce one another in their relation to truth, in a way that resembles the interlocking entries of a crossword puzzle. A good example, among many others, is the relationship between
molecular genetics, Mendelian genetics, the theory of natural evolution, and
paleontological and geological data. These theories and data mutually
complement one another, reinforcing one another’s validity.
As we said, Haack’s innovation consisted of
applying the idea of consilience to philosophical theories. If different subfields
of philosophy contain elements of truth and are interconnected, then, by the
principle of consilience, these elements should mutually reinforce one another.
Applying the idea of consilience to the supposition that philosophy is
protoscience means that ideas belonging to areas of knowledge complementary to
a given domain of philosophy – whether philosophical or not – should be capable
of reinforcing the true ideas belonging to that same domain and, by contrast,
weakening the false ones.
This assumption leads us to a provocative
conclusion: the overlapping of truths coming from multiple directions can
tighten the knots of the web of knowledge, gradually bringing the interrelated
results of philosophical speculation closer to a legitimate consensus regarding
their truth, that is, to science, insofar as science is understood as an
objective form of knowledge genuinely capable of consensual validation. If we
accept this idea, much of philosophical thought, whether speculative or not,
may in principle contain elements of truth which, once reconstructed, refined,
and further developed, could allow for a legitimate consensual agreement about
their truth, an agreement that cannot be realized if philosophy is fragmented
into scientistic theses that force it to be what it is not.
Even a philosophical conception of the
nature of philosophy, such as the one being developed in the present book,
could cease to be merely philosophical and become scientific if, when applied
to itself, it proves capable of achieving legitimate consensus regarding its
results. Suppose, for instance, that the conception of philosophy as, in large
part, a protoscience anticipating science, consistent with the
progressivist-consensualist-objectivist conception outlined here, were to
withstand criticism and be further developed in a more adequate and complete way.
Suppose further that, in the future, this conception were confirmed by the emergence of new scientific fields and data that come to replace our current conjectures. One consequence of this would be that a critical community of ideas
would eventually accept, by legitimate consensus, the truth of the claim that (i)
the most general characteristic of philosophy is that it is not capable of
achieving legitimate and objective consensus with regard to its results; and
(ii) at least in its more traditional centers of gravity, philosophy presents
itself as a proto-science in the sense of being capable of transforming into a
field susceptible to authentic consensual agreements, thereby becoming
scientifically unobjectionable. In this case, the view of philosophy as a
proto-science would satisfy the general condition of scientificity that it
itself established.
As has already been noted, a relevant
consequence of our conception of science, insofar as it concerns philosophy, is
that it justifies an alternative to fragmentary and reductionist scientistic
maneuvers. It justifies, in many cases, that we need not eliminate the breadth
of our philosophical visions by admitting them as replaceable by a diversity of
scientific theories. Something different may be expected. In reflecting on the
interdependence of the most central philosophical problems (such as those of
metaphysics, epistemology, philosophy of mind, theory of action, ethics,
philosophy of culture, etc.), I recall the observation attributed to Wittgenstein,
according to whom the difficulty of
philosophy lies in the fact that its problems are so interconnected that a
single problem can only be fully resolved when all the others are resolved as
well.
Although Wittgenstein’s remark is an obvious
hyperbole, it highlights a way in which central philosophical problems may give
rise to science: not by constructing theories directly demonstrable through the
empirical facts they seek to explain, but through consilience, namely, through the
mutual support theories provide one another, their cooperative explanatory
power, the stronger entrenchment of whatever truth they contain, and,
ultimately, their indirect yet genuine agreement with the facts.
There are, finally, some
conclusions to be drawn from the recognition that, in much of philosophical
inquiry, the intertheoretical support derived from consilience can prevail as a
coherential means of evaluating truth.
The first conclusion is that there are few reasons to abandon the optimistic belief that in central domains of the philosophical tradition, sooner or later, we will be able to find a path toward legitimate consensual agreement, a transformation that can occur through complete reconstruction, transformation, rejection, or the dissolution of problems by means of the critique of language. The existence of only five basic sciences seems to reinforce this expectation. On the other hand, there are
cases such as process philosophies (including political philosophies), philosophies
of life, of technology etc., whose truth depends on an unpredictable human
history, which tends to make them resistant to the possibility of generalized
consensus.
A second conclusion is that, in
light of the principle of consilience, there is no reason to expect that the
central problems of philosophy will disperse into a multitude of mini-theories
without any prospect of consensus. On the contrary, it is to be expected that
they will be addressed by theories that are more or less comprehensive and
interconnected with one another through consilience. In this scenario, only the
conjectural form of the problems will tend to disappear, and not their scope.
A third conclusion, indicated
by the reinforcing interdependence of the truth-claims of theories, is that we
cannot disqualify philosophical attempts in areas such as epistemology, metaphysics,
and ethics merely by analogy with what happened to many philosophical
conjectures that anticipated sciences such as physics, chemistry, or biology,
which ultimately proved to be simply too rudimentary or erroneous, retaining
only a residual historical value.
In the natural sciences, beginning
with physics, profound epistemic ruptures occurred, separating the emergence of
these scientific bodies from the pre-scientific philosophical inquiry that
preceded them, generally false and incapable of achieving consensus. Consider,
for example, what is happening with the Aristotelian concept of material substance, constituted by matter and a defining principle that Aristotle calls form. This explanatory key has been effectively explored in Kathrin Koslicki's[52] hylomorphic account of the construction of material objects, an account implicitly grounded in consilience, which brings it closer to the truth. This
suggests that the transition from philosophy’s central domains to science
occurs more gradually, as it involves refinements and corrections of
interrelated ideas rather than an abrupt leap into something entirely new.
This implies that philosophical
speculation in its central domains—such as Aristotle’s theory of substance, his
ethics, Descartes’ cogito, Leibniz’s relational theory of space, Locke’s
distinction between primary and secondary qualities, and Kant’s theory of concepts—may,
as has long been suspected, continue to hold significant relevance for the
present day.
Even though we do not yet know
precisely how to evaluate such truths, it is plausible that they accumulate
over time until sufficiently robust consensus allows for the correction of
errors, the elimination of confusion, and the promotion of convincing
refinements in a more urbane and discreet manner. Recognizing this phenomenon
is important for understanding the value of the fundamental philosophical disciplines
in their historical context, a dimension often neglected by positivist scientism.
11. ANALYTICAL PHILOSOPHY: DECLINE AND FALL
Several
authors have noted the decline of Anglophone analytic philosophy, a tradition that, over the past four decades, has conquered the world while continental philosophies, both German and French, have nearly faded into obscurity.[53] As
already noted, the symptoms of this decline may be described as scholasticism,[54]
scientism,[55]
hermeticism,[56] fragmentation,[57]
hyper-specialization,[58]
and superficiality.[59]
The blemish of scholasticism is stagnation.
Theoretical assumptions inherited from the past are accepted dogmatically, while
debates circle around minute, abstract, and technical distinctions—designed
less to illuminate than to generate artificial complexities that never dare to
challenge the so‑called “received wisdom.” What is missing are disruptive
innovations. The last philosopher I recall who truly produced them is
Jürgen Habermas.
Yet scholasticism is an effect, not the
cause. The root of the problem lies in scientism. We live in a society where
science, and even more so, technology, occupies an ever‑expanding space. Scholars
believe in science as the ancients believed in the gods. The problem arises
when the scientistic mindset spills over into philosophy. Although philosophy may,
and indeed must, draw upon scientific advances, it cannot be absorbed into
contemporary science without losing its very shape, precisely because of the
otherness of its scientific potential. As early as the 1930s, after a year (1929) of close dialogue with the logical positivists (physicists, logicians, economists…), thinkers who sought to reduce philosophy to something akin to the successful hard sciences they already knew, Wittgenstein summed up his critique of scientism in the following words:
Philosophers constantly see the method of
science before their eyes and are irresistibly tempted to ask and answer
questions in the same way that science does. This tendency is the real source
of metaphysics.[60]
The attempt to elevate
philosophy by grounding its ideas in resources borrowed from new
technical‑scientific domains—whether formal or empirical—betrays a reductionist
stance. In this process, philosophy tends to exclude whatever proves
incompatible, treating it as irrelevant. Such insularity, this “placing in
parentheses” that abstracts away anything that could lead to contradiction,
allows scientistic theory to become autonomous: self‑referential in its
evaluations, detached from its relation to the wider body of knowledge, and
thereby estranged from knowledge itself.
As I noted in Chapter I, reductionism can be
productive, as in Kripke’s profound reflections on reference and existence. Yet
its limits soon appear: once exclusion has taken place, it becomes all too easy
to subdivide the theoretical domain into new sub‑specialties whose plausibility
can only be challenged externally, that is, from what has already been
excluded. In this way, a fertile ground for fragmentation is opened. Without
consilience, each scientistic‑reductionist field evolves in isolation, lacking
dialogue with others, since they are no longer allowed to sustain one another
under the presupposition of deep interconnection.
Here, the path to hyper‑specialization opens:
the proliferation of sub‑theories increasingly remote from any plausible
outcome, counterproductive constructions of Castalia’s world: abstract,
self‑contained, and devoid of concrete relevance, incapable of leading us
forward. As Susan Haack aptly summed it up:
Hyper‑specialization hinders
progress rather than enabling it, for it means that time and energy are
inevitably wasted on a niche of problems that will not survive the half‑baked
theories from which they originated.[61]
To explain
how philosophical hyper‑specialization develops, Haack coined the expression
“premature specialization” to designate the most harmful form of scientistic
fragmentation within the field of knowledge. As she observed, specialization is
welcome in the sciences, whose solid foundations allow for further advances. In
philosophy, however, premature specialization occurs upon foundations which,
though dogmatically accepted by their practitioners, lack solidity, especially
since other competing groups choose equally precarious foundations.
The result
is that the “funny hypotheses” these philosophers invent, together with the
mini‑theories that follow, lead nowhere, serving only to occupy their adherents
for a good number of years. Haack ironically describes them as forming
self‑promoting cliques (“little gangs, niches, cartels, and fiefdoms”),
citation cartels among peers, and producers of niche literature whose
hermeticism makes it accessible only to their accomplices. In the end, she writes,
boredom sets in and the “funny hypothesis” is replaced by a new conjecture equally
sterile[62] –
without any problem ever being resolved.
Worse still is when these mini‑theories
persist, subdivide, and multiply without end, giving rise to a proliferation of
mini-sub‑theories. A striking example is the metalinguistic account of
proper‑name reference in the philosophy of language. According to this
decades‑old proposal, a proper name refers by means of a description such as
“the bearer of ‘N’,” with N being the name itself. This idea is manifestly
inadequate: it fails to distinguish one proper name from another, since all are
intended to designate a possible bearer. To be told that the bearer of the name
‘Aristotle’ is the one to whom the name ‘Aristotle’ refers is merely to learn
that the name designates a certain individual, without shedding any light on
how the name is actually used to refer to Aristotle.
Nevertheless, even today, dozens of
theoretical variations continue to proliferate from this initially implausible
reductionist conjecture, sustaining a specialized discourse that appears futile
to all but those few specialists who have invested years of effort in it.
Similar patterns can be observed across other domains of philosophy, including
metaphysics, epistemology, and philosophy of mind.
The problem with these procedures is that
they are far from innocuous. Beyond serving as intellectual exercises that
enable specialists to debate, publish, convene at conferences, and secure
grants, they readily obstruct the emergence of disruptive innovations –
innovations capable of reshaping entire fields by reworking their foundations –
precisely because such breakthroughs would undermine the very industry of
philosophical trivialities on which these practices depend.[63]
Before
proceeding, I should note that my discussion here is restricted to disruptive
innovations in central areas. Adaptive or incremental innovations, including
new interpretations and reconstructions, are generally welcomed by the philosophical
status quo, as they are readily assessable and do not threaten the work
of specialists or the entrenched intellectual hierarchy. This is true, for
example, of the excellent introductions published by Routledge[64], or
by Oxford University Press[65],
or by Kathrin Koslicki’s reconstruction of Aristotelian hylomorphism.[66]
Nor do I object to targeted studies related to an emerging scientific field, where
philosophical questions naturally arise. Likewise, para-philosophical and historical
studies can be of great value, as much as applied philosophy, as the
extraordinary Stanford Encyclopedia of Philosophy demonstrates. But these
are not my concern here.
Returning to the point: although one might
concede that the proliferation of mini‑theories born of positivist
fragmentation can serve to “keep the conversation going” (as Richard Rorty puts
it), offering at least some motivational value for scientists and educated
audiences, in practice they have increasingly functioned as obstacles rather
than stimuli to the emergence of truly disruptive internal developments, which,
ultimately, are the only innovations that can be considered genuinely indispensable.
I have a personal experience that illustrates
why it is so difficult to produce disruptive work today. I refer to my book How
Do Proper Names Really Work?, published in 2023. It represents the
culmination of a research project begun around 2007, from which several other
publications also emerged. I believe this book offers a concrete example of how
the ever‑expanding array of hypotheses and theories, born of premature
specialization in theories of reference, can be dismantled through a careful reconfiguration
of the theoretical foundations long treated as untouchable, including much of
the legacy of sacralized figures such as Saul Kripke and Hilary Putnam.
Interestingly, the complex theory that emerged from this investigation bears no
resemblance to the fragmentary, highly formal, and abstract approaches to which
we have grown accustomed. Nor does it fit into familiar molds; rather, it
approximates science, not through scientistic mimicry of method, but through
plausibility, explanatory richness, internal coherence, and freedom from
reductionist contrivances.
It is impossible to explain this theory in any
detail here, but I can offer a glimpse. It is grounded in rule‑based schemes
for identifying proper names, which replace the traditional bundles of
descriptions and prove extremely flexible in application. When properly
associated with names, these schemes are completed by definite descriptions
that transform them into rigid designators. This, in turn, dissolves the
metaphysical contrast central to Kripke: the sharp distinction between proper
names as rigid designators (referring to the same object in any possible
world in which it exists) and definite descriptions as accidental or flaccid
designators (which may refer to different objects in different possible
worlds). For example, “the husband of Pythias” could, in some worlds, designate
another person or no one at all, even if Aristotle himself existed there, unlike
the proper name “Aristotle,” which consistently refers to that individual.[67]
To give a concrete example, the name
‘Aristotle’ (or its equivalent) comes to be summarized by the following complex
definite description, which serves as the expression of its identification
rule:
THE person who satisfies (i) sufficiently
and (ii) more than any other candidate, (iii) the locating condition
of having been born in Stagira in 384 B.C., the son of the court physician,
having traveled to Athens at the age of 17, and having studied with Plato for
the next 20 years, etc., and/or the characterizing condition of having
authored the Aristotelian corpus, etc.
This
description-rule (presented here in condensed form) is itself a complex definite
description, since it begins with ‘the’. Yet it is sufficiently flexible to
identify Aristotle in any possible world in which he can be definitively given
as existing, which makes the name “Aristotle” a rigid designator. Definite
descriptions, by contrast, function as accidental designators only when tied to
the identification rule of a proper name. This explains why definite
descriptions that are not associated with any proper name can themselves become
rigid designators. For example: “the rafflesia discovered by Dr. Joseph Arnold
on May 20, 1818” is a definite description unlinked to any proper name;
therefore, it applies to the same flower in every possible world in which it
was discovered under those circumstances, making the description rigid. Other
descriptions, such as “the tutor of Alexander” or “the founder of the Lyceum”, are
merely auxiliary. They are generally useful only insofar as most
speakers, who lack sufficient knowledge to master the identification rule, can
nonetheless insert the name correctly into discourse and, in this extended
sense, “refer” to him through what P. F. Strawson called “borrowing of
reference.”
What distinguishes this theory in terms of
scientific character is its operationality: if the identification rule for a
proper name were implemented as a computer program, together with the data
concerning the conditions of its application, one could reasonably expect the
system to be capable of recognizing the bearer of the name. This would be
impracticable for earlier theories, which relied on still precarious
foundations derived from two opposing camps, led respectively by John Searle
(descriptivist internalism, with an empiricist tendency) and Saul Kripke (causal-historical
externalism, with a formalist tendency).
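The operationality claim above can be conveyed by a toy sketch. This is entirely my own illustration, not the author’s formalism: the candidate conditions, weights, and the 0.5 threshold are invented for the example. The idea it models is only the one stated in the text: the bearer of a name is the candidate who satisfies the locating and/or characterizing conditions (i) sufficiently and (ii) more than any other candidate.

```python
# Toy sketch of a rule-based identification scheme for a proper name.
# The conditions, data, and threshold below are illustrative assumptions,
# not part of the theory's actual formulation.

def make_identification_rule(locating, characterizing, threshold=0.5):
    """Return a function that picks the bearer of a name from candidates.

    A candidate counts as the bearer iff it satisfies the conditions
    (i) sufficiently (score >= threshold) and
    (ii) more than any other candidate (uniquely best score)."""
    conditions = locating + characterizing

    def score(candidate):
        # Fraction of the conditions the candidate satisfies.
        return sum(c(candidate) for c in conditions) / len(conditions)

    def identify(candidates):
        scored = [(score(c), c) for c in candidates]
        best_score, best = max(scored, key=lambda pair: pair[0])
        uniquely_best = sum(1 for s, _ in scored if s == best_score) == 1
        if best_score >= threshold and uniquely_best:
            return best
        return None  # no sufficient or no unique best satisfier

    return identify

# Hypothetical, highly condensed conditions for the name 'Aristotle':
aristotle_rule = make_identification_rule(
    locating=[lambda c: c.get("born") == ("Stagira", -384),
              lambda c: c.get("studied_with") == "Plato"],
    characterizing=[lambda c: c.get("authored") == "Aristotelian corpus"],
)

candidates = [
    {"name": "candidate-1", "born": ("Stagira", -384),
     "studied_with": "Plato", "authored": "Aristotelian corpus"},
    {"name": "candidate-2", "born": ("Athens", -427)},
]
bearer = aristotle_rule(candidates)
```

Note that nothing in the sketch hangs on these particular conditions; the point is only that such a rule is mechanically applicable, which is what distinguishes it from the precarious foundations mentioned above.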
Finally, the difference between formally
oriented theories such as Kripke’s and my own may be compared to that between
the digital computer, which operates with discrete elements, and the analog
computer, which works with continuous quantities. Our brain is an analog
computer, and so too, it seems, are its mechanisms of reference. Hence the need
to build into the theory those elements of indeterminacy that are inevitable in
the referential act.
As far as I know, my 2023 book has received
no attention from specialists in the field, who are almost all externalists. I
suppose this is because it did not (and, I guess, could not) emerge from the
top-down hierarchy, which has long since become infertile and unhealthy. The De
Gruyter editor, Christopher Shields, wrote to me that the American journal Notre
Dame Philosophical Reviews (the most influential review journal) rarely reviews
works published by the German press De Gruyter. I complained in a letter
to the journal’s editor, sending him the original. He apologized and promised
to forward the book to the editorial board. Naturally, nothing came of it,
leaving me adrift – rather like Kafka’s character in The Castle, a
situation not unfamiliar to me. Distrust? Corporatism? The maintenance of the
status quo of the Anglo-American mainstream, formally oriented and politically
constrained? Philosophical communities are exclusivist. One need only consult
the bibliography of a history of philosophy written in Italian to see this:
most quoted works are by Italian historians. Then I find myself reflecting on
the repression of originality in culturally colonized countries such as
Brazil, which, at their best, merely import and mimic what comes from abroad...
But that is beside the point. The fact remains that the Anglo-American critical
society of ideas, like others, does not satisfy certain conditions of
consensual legitimacy such as those of Merton and Habermas beyond its borders and,
unfortunately, not even within them! Moreover, this case illustrates an effect
that Susan Haack denounced: fragmentary philosophy, composed of theoretical
conjectures that accumulate and multiply in increasingly scholastic
discussions, becomes a barrier to the evaluation and acceptance of robust
philosophical theories.
Against the claim
that the core of analytic philosophy has fallen into stagnation, someone has
objected to me that there are novelties, such as the logic of grounding,
the knowledge-first approach, and enactivism, which are, after
all, very significant acquisitions! Yet this is but another illusion, reminding
us of Wittgenstein’s remark that a small age tends to see the world from its
own tiny perspective. Let us see…
It is true that the logic of
grounding (ontological foundation) provides a more precise and refined
instrument for clarifying themes already present in Plato, such as the grounding
of sensible reality in Ideas or Forms. Yet, while its contribution to logical
inquiry is undeniable, it remains essentially a technical advance: a new tool
available to philosophers rather than a philosophically disruptive innovation.
The
Knowledge-First approach originates in Williamson’s Knowledge and Its Limits,
whose central thesis is that knowledge is a primitive, non-analyzable mental
state: it entails belief, though belief does not entail knowledge. Moreover,
knowledge constitutes evidence and is inherently “world-involving.” This means
that, although it is a mental state, it is externalist in character, since it
is necessarily connected to truth and thereby to the facts of the world.
The Knowledge-First approach is motivated by
the perceived failure of attempts to resolve Gettier’s challenge to the
traditional definition of knowledge as justified true belief. In my view,
however, this represents a profound confusion, better understood as an
ingenious intellectual exercise than as genuine philosophical progress.
Williamson’s sophistication is undeniable; yet his work often repackages
familiar points (as in the anti-luminosity argument) or leans on long-standing
confusions, such as the many strained efforts to rescue the traditional
definition of knowledge from Gettierian counterexamples.
In my view, Gettier’s celebrated argument
against the traditional tripartite definition of knowledge as justified true
belief is far less damaging than we are told. It appears to refute the
traditional definition by presenting cases in which the believed proposition is
true, there is a reasonable justification, but there is no knowledge. For
example, I look at my watch and see that it reads noon.[68] The bells of the nearby
church strike twelve. But then I recall that yesterday afternoon, my watch was
running slow. I look again more carefully and realize that the watch is, in fact,
stopped. It must have stopped at midnight the previous day… By mere coincidence,
it shows the correct time. The traditional conditions of knowledge are
satisfied: It is true that it is noon, I believed in it, and I had a reasonable
justification for that claim, but now I see that they were insufficient. Another
example is the barn case. Mary is driving in a rural area full of perfect fake barns
posted there as scenery for a movie. Mary does not know that. She looks at the
first barn and says, “What a beautiful red barn I am seeing there!” By chance,
this is the only real barn in the area. What she says is true, she believes
it, and she has a good justification, since what she sees looks precisely like a barn.
However, since she identified the right barn only by chance, she does not
really know that what she sees is a real barn. But the condition of justified true belief is
satisfied.
I am convinced that Gettier’s problem was
already solved by Robert Fogelin, and that his solution was strengthened by the
author of the present text.[69] The genuine solution emerges
from a more complex and refined dialogical reformulation of the traditional
tripartite account. The key point is this: the justification offered by the speaker
a for his knowledge of proposition p must be accepted as
sufficient to render the proposition true by an evaluator b who
possesses more complete information at the moment of his evaluation. In the barn
case, the evaluator b can be a person who knows the region well and heard Mary’s
utterance; in the watch case, it is myself a few seconds after my first judgement. The
more complete information reveals that the initially offered justification was insufficient
to make p true. The solution, therefore, is to require that the justification
given by a, for evaluator b of a’s knowledge-claim at the time
of evaluation t, must be considered by b as sufficient to make p
true. Formally, instead of the traditional tripartite definition

aKp = p & aCp & aJCp,

where K = knowledge, C = belief, and J = justification, what I proposed was:

aKp = (J & (J ~> p)) & aCp & (aJCp & J ∈ J*t),

where ‘t’ is the time of evaluation, J*t is the set {J1, …, Jn} of
justifications individually accepted or acceptable by evaluator b at
time t as sufficient for the truth of p, and ‘~>’ indicates that the
justification on the left guarantees the truth of p with a probability
equal to 1 (in the case of formal knowledge) or with a probability sufficiently
close to 1 to warrant the claim of knowledge (in the case of empirical knowledge).
This solution does not lead to relativism, insofar
as we assume fairness in the dialogical situation, which can be achieved by
satisfying conditions like (1) to (12) for an ideal speech situation.[70] This solution is closer
to a pragmatic, Peircean approach: knowledge is what survives fair, informed
evaluation over time. This solution arguably dissolves Gettier’s problem
instead of solving it. Nonetheless, people continue discussing it as if the
Gettier problem were a genuine hindrance. (For a less abridged exposition of my
view, see Appendix I.)
This isn’t the only case in which we find philosophical
problems with straightforward solutions that remain unacknowledged by the
philosophical community. The reason is simple: accepting them would mean the
conversation could no longer be “kept going.”
Another
striking example is Wittgenstein’s sceptical puzzle about rule-following.[71] Suppose one wishes to
teach a student the rule “add 2” in the sequence of natural numbers. At first,
the student seems to have learned it correctly: when the teacher says 23, the
student replies 25. But later, when the teacher says 1,000, the student answers
1,004; when the teacher says 1,003, the student answers 1,007. The student has
misunderstood the rule, believing that after 1,000, one should add 4; after
10,000, add 8; and so on. Wittgenstein’s point is that, as it seems, there are
infinitely many ways to misinterpret a rule.
Kripke sharpened this paradox with his
famous “quus” example: a student interprets the rule “plus” as a deviant
operation that works like addition as long as both arguments are below 57,
after which the answer is always 5.[72] Kripke’s book on this
problem is notoriously intricate, replete with unsuccessful attempts to solve
it, culminating in an obscure proposed solution.
Yet the
solution, which I attribute to Craig DeLancey (2004), is surprisingly easy.[73] Humans are biologically
predisposed to interpret rules in the simplest possible way. By ‘simplicity’,
DeLancey means computational economy: a Turing machine would require a longer,
more complex program to generate the deviant interpretations imagined by
Wittgenstein and Kripke than to generate the standard one. Even the normative
force of the rules, the “oughtiness” to follow them, comes from our biological predispositions,
insofar as they are socially reinforced by communal dispositions.
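DeLancey’s point about computational economy can be crudely illustrated in code. This toy comparison is mine, not his: it does not measure Turing-machine program length, but it makes the structural point visible. Any program computing the deviant “quus” must contain the whole of ordinary addition as a special case, plus an extra clause; the standard interpretation is therefore always the more economical hypothesis.

```python
# Toy contrast between the standard rule 'plus' and Kripke's deviant
# 'quus' (x quus y = x + y when both arguments are below 57, else 5).
# Note that quus must embed all of plus and add a further conditional:
# the deviant hypothesis is never the simpler one.

def plus(x, y):
    return x + y

def quus(x, y):
    if x < 57 and y < 57:
        return x + y   # agrees with plus on all small arguments
    return 5           # the extra, deviant clause

# Both rules agree on every case a learner could so far have encountered...
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))
# ...and diverge only beyond the threshold.
assert plus(68, 57) == 125 and quus(68, 57) == 5
```

The asymmetry generalizes: every deviant reinterpretation Wittgenstein or Kripke imagines adds at least one clause to the standard rule, which is why a creature biased toward computational economy converges on the standard reading.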
Nevertheless, academic philosophers continue
debating Kripke’s spider web, less out of genuine perplexity than out of
deference to authority. After all, his treatment of Wittgenstein’s riddle provides
endless material for discussion.[74] As Susan Haack wrote, quoting
C. S. Peirce: “whom any discovery that brought quietus to a vexed question would
evidently vex because it would end the fun of arguing around it and about it
and over it”.[75]
The resistance to such a constructive solution arises not because it is weak, but
because most philosophers prefer endlessly revisiting puzzles to accepting
closure. In that sense, DeLancey’s view is less a matter of “fixing” the
definition than of returning to the epistemological game already played
by philosophers from Plato to Hegel.
Now, a word about epistemic externalism: in
my view, it is either wrong or a misnomer. Closer scrutiny shows
that externalism, defined as the view according to which justification depends
on factors external to the subject’s awareness, does not merely point to the
external world that causes our knowledge, an obvious point
common to all views, but simply to more nuanced forms of internalism.
Consider, for example, reliabilism: the influential and doubtless important
thesis that a belief qualifies as knowledge when it is produced by a process
reliably conducive to truth. On this view, a third party, B, may ascertain
that A knows that p insofar as A’s belief arises from a reliable
cognitive mechanism, even if A has forgotten or never possessed reflective
access to that mechanism. It is said that this differs from internalism because
the latter requires that the subject be conscious of the justification. Examples
include the process of seeing something under normal conditions, which is
reliable, and the ordinary memory of recent facts, which is likewise reliable.
Nevertheless, it suffices to adopt a
more refined and comprehensive conception of epistemic internalism, one in
which the reliable process itself is internalized as a mental justificatory
condition, even if completely unconscious. On this account, the justification
that A possesses for knowing that B perceives an approaching car is that the
event in question must generate A’s true belief that B visually apprehends the
car’s approach. Such justification is internal, just as B’s reason for
refraining from crossing the street, namely, his perceptual awareness of the
oncoming vehicle, constitutes a reliable internal justification. The same analysis
applies to memory: insofar as memory is a reliable cognitive process, the
appeal to its past veridicality provides an internal justification, even though
it is, as any knowledge of the external world, obviously grounded in external
facts. Ordinary cases, in which the subject has either forgotten the
justificatory basis or never explicitly possessed it, likewise demand
internalist interpretation. A person may know how to compute a square root
without recalling when or how she acquired the procedure; she knows that she
knows because she remembers having successfully employed the method on previous
occasions, thereby rendering it reliable and internally justified. A well-known
illustration of externalism is the case of chick sexers who are able to determine
the sex of newly hatched chicks merely by looking at and handling them. Suppose
they cannot even explain how they succeed in identifying the sex, though the
procedure proves correct in the vast majority of cases.[76] Here we have a well-known example of a practice
justified by its reliability. Properly considered, chick sexers would, in the
example, have a sound justification for claiming that they know the probable
sex of a chick, even if they cannot articulate the basis of their judgment.
The justification is straightforward: the procedure generally works; hence, it
is reliable; hence, they have a justification. However, such justification is
obviously internal, even if it derives from the experience of an external regularity.
Descending now to the animal realm, one may
attribute knowledge to a dog who anticipates its owner’s arrival at dusk and
runs to the door. Although the animal cannot articulate a justification, we ascribe
knowledge on the basis of its repeated experiential correlation between dusk
and the owner’s return. In this sense, we justify the attribution by
recognizing that the dog possesses an internal justification of which it lacks any
possible conscious awareness. The animal has no cognitive means to extract a
reliable justification for its Pavlovian passive conditioning, but we say the
dog “knows” his owner is coming. Would we say the same about a case of active
conditioning like that of a pigeon that learned to peck a green button in order
to gain a corn kernel? Surely, the pigeon “knows”, and it knows because a non-conscious
reliable process occurs in the pigeon’s mind, small as it may be. The real
difference is that this is clearly a case of learned knowing how and not
of any propositional knowing that.
However it is construed, properly understood,
reliabilism always collapses into one or another form of internalism, which
means that the term ‘external’ is gratuitous, serving only to create the
impression of a discovery that never in fact occurred. The dispute turns out to
be merely verbal. If the reliabilist agrees with me on this point, then all
that remains is to dispense with a terminology that suggests the discovery of
something essentially different from internal justification.[77] Moreover, from the fact
that the roots of knowledge are external, it does not follow that knowledge, as
such, must possess any external, non-mental component. This would be a genetic
fallacy, which applies even more forcefully to semantic externalism.[78]
Consider now the case of enactivism. According
to this view, cognition is not computation, but embodied action.[79] The mind is not merely the
passive reception of information, but the “bringing forth” (the enacting) of a world
through the bodily interaction with the environment. Because of this, the mind
cannot be explained by a computational model, since computers do not have
bodies. Thus, for example, seeing isn’t merely the processing of visual inputs
but an active skill involving movements such as turning the head, adjusting
focus, and interacting with the environment. The role of mental representation,
on the other hand, is downplayed. In its most radical form, enactivism permits
cognition without representation.[80] For instance, the amoeba
moves towards its nutrients, and the newborn closes its hands when something touches
its palms.
In my view, this calls for a Wittgensteinian
therapy, since the originality of the enactivist account rests on improper
extensions of words such as ‘cognition’ and ‘mind’ far beyond their established
usage. It is obvious that computational models alone cannot explain the
workings of the mind; it is equally obvious that the mind requires a living
body to interact with the external environment, and that this interaction is
essential. But none of this entails that the body is an extension of the mind.
To correct the examples: seeing is a process
internal to the brain and the mind. It depends on sensory-motor actions—turning
the head, adjusting focus—that belong to the body, not to the brain or the
mind. An amoeba moves toward nutrients without representation, cognition, or
mind. The apprehension reflex of the newborn occurs without representation, but
also without cognition.
The dream
of a practicing Buddhist such as Francisco Varela – the person who introduced
the notion of enactivism – would be that the mind extends to the body and
beyond to the world (as nirvana approaches). Unfortunately, reality is harsher
than we would like to admit. As T. S. Eliot wrote: “human kind cannot bear very
much reality”.[81]
In contrast, Jean Piaget, an old-fashioned
serious researcher, investigated the sensorimotor stage in children from birth
to two years of age without compromise. He did not embrace the option of
diminishing or rejecting the role of symbolic representation. On the contrary,
his research traced the progressive development of cognitive reasoning in
children, culminating in the formal operational stage at age eleven and beyond.
Escapism, understood as the tendency to avoid reality through
argumentative fantasy, is a common trend in contemporary philosophy.[82] A prominent example of
this strategy, akin to enactivism, is the “extended mind,” which many enactivists
embrace warmly. According to this view, the mind is not confined to what occurs
within the brain.
Consider the following case:
Person A remembers the date of a concert. Person B, who has a poor memory,
writes the same date in a notebook. On the basis of this information, both A
and B attend the concert. The extended‑mind theorist concludes (1) that what is
written in B’s notebook constitutes part of B’s belief, in the same way that
A’s memory does. From this, they derive the corollary (2): the notebook
containing the date is a part of B’s mind, albeit located outside B’s body.
The problem is that recognizing the mind’s capacity to make use of external
resources – from the calculator to artificial intelligence – which can assist
it and even exponentially expand its possibilities, is not the same as claiming
that these external resources constitute part of the mind. However, the concept
of mind was always understood as the seat of thought, awareness, and feeling, and
was subsequently extended to include memory, attention, and intellectual
capacity, while excluding notebooks, calculators, and AI. The impression of a
“discovery” here arises from a misuse of language: a primitive anthropomorphic
projection that recalls the Paleolithic belief that plants contained spirits.
To yield to such a temptation in our own time, however, is an intellectually immature
attitude.[83]
None of this compels me to disagree with Haack’s message in its essence.
The current academic philosophy – fragmented[84] and hand-to-mouth – far from
making people more critical, instead restricts and stupefies them.
The final diagnosis is one of decline
or, as Haack prefers, of a genuine “intellectual disaster” whose roots lie in
the disappearance of profound philosophical innovation within universities that
hold hegemony over scientific and cultural production. Yet it remains
legitimate to hope that philosophy – by its very nature, almost inevitably
disruptive – may rise again, like the phoenix from its ashes. (Without forgetting,
of course, that for this to occur, the phoenix must first be consumed by its
own fire.)
Haack identified well the proximate causes
of this decline – the more remote causes, I believe, are of another order. She
observed that, prior to the Second World War, there was ample space in
philosophical journals for publication. An ethic prevailed according to which
one should publish only when having something important to say. The ideology of
publish or perish, now multiplied by the Internet, radically altered this
landscape, virtually paralyzing the possibility of the unexpected – something
that transcends the almost automated evaluation of editors constrained by the
pressure for innovation and their own hyperspecialized reviewers. (What editor
today would accept to publish a book in the style of the Tractatus
Logico-Philosophicus? Who would have the intellect of Gilbert Ryle as the editor
of Mind?) Alongside this, Haack observed symptoms of intellectual
corruption, such as the publication of “salami articles” (papers sliced into minimal
units and often co‑authored by multiple writers), and the emergence of perverse
incentives, exemplified by the absurd “philosophy olympiads.” She also noted
that the contemporary American university is increasingly managed by CEOs, who
must demonstrate results and compel universal engagement with research. To this
I would add the recycling of ideas: arguments presented thirty or more years
ago are presented again, in somewhat different language and context, probably even
without the author’s awareness. In this atmosphere, philosophy has come to be
treated as if it were or should be a progressive science – or worse, as if it
were a semi-technical investigation in constant development: everyone is
expected to be a philosopher and to produce innovations, forgetting that the
learning of philosophy is a cumulative process that can demand decades
to reach maturity.[85] Plato
already knew that, as he demanded that philosopher candidates should first
learn sciences and arts, and even live the life of common people, until they
were about fifty years old, when they could already be good philosophers (Republic
521c-540a). He also criticized young people doing philosophy, since they lack
maturity and stability of character (Republic 485a-487a).
Yet where everyone is required
to be a philosopher, no one can truly be a philosopher. Philosophy, in
its highest sense, demands intellectual commitment, time for creative leisure,
a broad and diversified scientific and humanistic culture, the long acquisition
of knowledge through projects that may require many years of reflection, and, in
addition, some kind of talent. A system that demands constant productivity in
shared projects renders this ideal unattainable. The result is a minimalist
parody of genuine philosophical labor, something that anyone can perform without
significant preparation.
It is as if all who study music
were obliged to compose, or as if all who learn painting were required to be
painters. Such demands are possible, of course, but only at the expense of quality.
Beethoven and Michelangelo were not merely individuals of exceptional talent (though
many may possess the same potential) but, above all, they were consciously and
wholly committed to an ideal of aesthetic greatness and perfection,[86]
sustained by an environment conducive to their flourishing. In some measure,
the same holds true for Plato and Aristotle, or Kant and Hegel – figures whose
philosophical achievement was inseparable from such commitment and context.
Increasingly, this dimension is lacking in our excessively technocratic world.
It may be worth recalling what
befell philosophy and also Austrian and German science after the Second World War.
Although Paris was the center of the arts in the first half of the twentieth
century, Vienna was the heart of science and culture more broadly, particularly
through the University of Vienna. With the rise of Nazism, its best scientists –
many of them Jewish – were forced into exile. Freud, a native of Vienna, sought
refuge in England; Karl Popper in New Zealand; and most members of the Vienna
Circle in the United States. Kurt Gödel, the mathematical genius who formulated
the incompleteness theorems, though not himself Jewish, was closely connected
to Jewish intellectuals and, in 1939, was assaulted by young Nazis in the
center of Vienna. His wife, Adele, saved him with remarkable courage, armed only
with an umbrella, and it is not unlikely that it was thanks to her that Gödel,
who was not very clever about the ways of the world, managed to reach Princeton
in time, where he joined Albert Einstein. Thus the University of Vienna was
emptied of its talents.
Curious, however, is what
happened afterward. Following the war, none of them were invited to return to
Vienna. Those who replaced them, out of a mixture of vanity, envy, and fear of
exposing their own mediocrity, did not wish to live once more in the shadow of
far more talented figures. The few who dared to return, such as the great physicist
Erwin Schrödinger, were received coldly by the academic community. The result
was devastating: deprived of its leading minds, the University of Vienna never
regained its former level. A similar decline occurred across German‑speaking
universities. It is also striking that the two best German philosophers of the
second half of the twentieth century, Habermas and Tugendhat, were born and spent
their childhoods before the war, in a cultural climate profoundly different from
what followed.
These examples reveal that the
culture which shapes human consciousness is fragile – difficult to bring forth
and even more difficult to preserve. If the university’s hierarchical structure
does not renew itself creatively, it ceases to be a living source of cultural
innovation.[87]
And if this holds true for the sciences, it applies all the more to so suspect
an activity as philosophy, whose task is the cultivation of critical intellect
and which often must assume the role of questioning the unquestionable.
Since criticism of present philosophy is a
delicate matter that does not fall within the central aim of the present text,
I shall not dwell on it. I merely recall Harry Frankfurt’s brief study of the
phenomenon he termed bullshit, understood as intellectual constructions
that are at times extraordinarily complex and sophisticated, yet produced
without any commitment to truth. This phenomenon, he writes, is a collateral
effect of the broadening of access to culture, which has given rise to an ever-growing
number of individuals who, though cultivated, have nothing of real relevance to
say.[88]
Obviously,
I am not suggesting that the present stagnation of Anglophone analytic
philosophy – virtually the only surviving strand – stems from bad faith or from
the frivolity of mere manufacturers of bullshit. The explanation is more profound.
It is more accurate to think that a social group distorted by a system is thereby
led into unconsciousness of its own distortions, which may result in
individuals doing unproductive work without the slightest awareness of it.
Freud conducted a study on the phenomenon of
religious belief, which he regarded as a collective repetition-neurosis.[89]
He noted that silly rites and implausible stories, when shared by many, acquire
strength as though by collective hypnosis. In truth, any mass movement is
subject to a kind of collective blindness. As Hannah Arendt observed, Adolf
Eichmann bore no personal animosity toward the Jews. Yet he was part of a system
and saw it as his duty, as a public official, to carry out with maximum
efficiency the order to organize the deportation of Jews to the extermination
camps.[90]
Consider, for example, the contemporary
philosophical community's tendency to give priority to complexity over plausibility. Even Kant
had already observed that this priority is misguided. Yet no one seems to notice.
Moreover, this is a story of decline that began in the citadel's most distinguished
quarters, for decadence always begins at the top. So, no one of comparable
stature has taken the place of Searle, Kripke, Dummett, or Habermas. But here
we encounter another problem: what, precisely, is the decadence of a culture?
How does it occur? And why?
A case of philosophical cultural decline was
described in an unparalleled way by Edward Gibbon in his history of the decline
and fall of the Roman Empire:
The authority of
Plato and Aristotle, of Zeno and Epicurus, still reigned in the schools; and
their systems, transmitted with blind deference from one generation of disciples
to the next, thwarted any generous attempt to exercise the powers or expand the
boundaries of the human mind. The masterpieces of poets and orators, instead of
igniting inspiration on their own, gave rise only to cold and servile imitations.
The very name of “poet” had been almost forgotten; that of “orator” was usurped
by the sophists. A cloud of critics, compilers, and commentators darkened the
face of knowledge, and soon, the decline of genius was followed by the corruption
of taste.[91]
To
describe the phases of decline, I will appeal to the vague concept of ‘level’,
which designates breadth, depth, and abstraction, often capable of creating new theoretical frameworks. (In this
sense, A. J. Ayer considered himself a second-order philosopher compared
with Russell and Wittgenstein.) Here is how I believe its mechanism can be described
in terms of a successive lowering of philosophical standards:
Phase 1: At the outset, there emerged philosophers of
the highest order, the founding fathers of analytic philosophy: Frege and Wittgenstein,
the only two geniuses at the summit, followed by Russell – people who were sufficiently
intelligent to engage in dialogue with Leibniz. They belonged to a world apart,
one still profoundly hierarchical and elitist (in both its most admirable and
its most problematic aspects). It was the European world, marked by deep sociocultural
fissures both internal and external, which, through its colonial rivalries, was
driven into the two World Wars. These very contradictions provided the ferment
for the great cultural achievements that defined the first half of the twentieth
century.
Phase 2: In the subsequent stage, philosophers of the second rank emerged. Doubtless,
they were extraordinary theorists such as J. L. Austin and P. F. Strawson at
Oxford, alongside the Vienna Circle positivists, notably Rudolf Carnap, and, in
England, A. J. Ayer. Their contributions were significant, yet they also tended
to dismiss the intellectual legacy of the founding fathers. One example is Strawson’s
not fully convincing criticism of Russell’s theory of definite descriptions.
However, in my view, the most formidable misstep lay in their rejection of Wittgenstein’s
semantic principle of verification, a critique directed at what they ultimately
failed to grasp in its entirety. Nevertheless, this rejection has been transmitted
to the present as “inherited wisdom” (see Chapter VII, Section 4).
Phase 3: Afterward came the turn of the American analytical philosophers,
influenced by the logical positivists who had emigrated to the United States, fleeing
Nazism, such as Carnap himself. Their intentions were less noble and more
“pragmatic.” They felt compelled to challenge their European benefactors,
devising intelligent and imaginative strategies to confront the “inherited wisdom”
and to establish new theories, as brilliant as they were intuitively implausible. To this
end, they inevitably adopted the strategy of divide and conquer, relying
largely on creative challenges in the style of Hume, albeit of far lesser
quality.[92]
Figures such as W. V. O. Quine and Donald Davidson appeared, later followed by
the restrictive genius of Saul Kripke, by Hilary Putnam, David Kaplan, and many
others – all of them scientistic philosophers of the second or third rank, not
due to any lack of capacity or imagination, but rather because of the inevitably
reductionist character of the formalist orientation that constitutes their
argumentative procedures (see Chapter III). The danger inherent in this “Humean”
procedure of challenging inherited wisdom through new and more questionable
doctrines lies in the loss of consilience; in doing so, the philosopher risks
sawing off the very branch upon which he sits.
Although
they offered relevant insights, such as Kripke’s distinction between rigid and non-rigid
designators, the pathognomonic mark of error in this context lay in the
profoundly counterintuitive character of their challenges, whose justification
could not withstand, in its entirety, a proper critique of language. Hence
their fixation on syntax and on a simplified semantics, with the near exclusion
of pragmatics (Ch. II, sec. 3). A philosopher of pragmatic orientation, such as
the later Wittgenstein, with his “linguistic therapy,” was therefore anathematized
– and continues to be so even today. Indeed, the dreamed-of triumph was to analyze
ordinary concepts as though they belonged to quantum physics. This is not to
say that they lacked perspicacity. After all, even John Searle and Ernst
Tugendhat, isolated defenders of older ideas, were not prepared to effectively refute
Kripke and Putnam (a task that fell solely to me).
With
the institutional acceptance of the ideas of the Phase 3 philosophers, a new
“inherited wisdom” of somewhat inferior level was established, encouraging a rather
gratuitous inventiveness. I refer not only to the rejection of verificationism,
but also to theses such as the indeterminacy of translation and the
inscrutability of reference (Quine), the rejection of the analytic–synthetic
distinction (Quine), inventions such as the necessary a posteriori and the contingent
a priori (Kripke), the referential function of the external causal-historical
chain (Kripke), semantic externalism (Putnam), an externalist challenge to Frege’s
view of indexicals (Perry), the elimination of the tripartite definition of knowledge
on the basis of the Gettier problem (Lehrer, Nozick), together with the epistemic
externalism (Goldman), and, in philosophy of science, the thesis that electrons
and quarks are non-real constructions of our minds (Bas van Fraassen). There
are important half-truths within these errors, which contribute to the persistence
of confusion.
Phase 4: In the attempt to astonish by overturning not only common sense but
also sound reason, the process was carried forward by the heirs of the Phase 3
group, accumulating new “inherited wisdoms” until reaching the level of the indefensible
– that is, preposterous confusions such as knowledge-first (Williamson), childlike
suggestions such as the extended mind argument (Chalmers), or ignoble arguments
such as the claim that, since meaning lies outside the head, as Putnam
demonstrated, and since the locus of meaning is the mind, then the mind itself
must also lie outside the head (McDowell),[93]
culminating in absurdities such as dialetheism, according to which, if certain
propositions fail to be either true or false, it is because they are both true
and false at the same time (Priest).[94]
By casting forth these challenges from the lofty
heights of their podiums, with no critics left to confront them, these
latter-day philosophers eliminate most of the foundations upon which any deeper
reflection might have been constructed, thereby undermining the integrative
force that only consilience could engender. In so doing, they opened an immense
space for generally unfounded, superficial, and unrestrained niche speculation,
often taking refuge in hermetic formalistic elaborations ever more remote from
anything that might rightly be called philosophically significant.
Phase 5: Assuming the process continues, it is easy to foresee the future fate
of Anglophone philosophy. Since the philosophers have already severed all the main
branches upon which they once sat, their cascading fall from the tree of wisdom
will be inevitable. Philosophers of the third, fourth, and indeed of any
conceivable rank will be bound to proliferate in multitudes. Within this apocryphal
milieu, any invention, no matter how shallow, will be readily accepted, provided
it conforms to the standards established by the accumulation of more and more
deceptive “inherited wisdoms,” fitting neatly into one of the many niches of
technically admissible “funny hypotheses.” After all, just as nothing useful
can be produced upon false foundations, so too from them everything may follow (ex
falso quodlibet). In this way, philosophy will finally become democratic:
“one philosopher, one vote” – a glass-bead game, accessible to any sufficiently
unconscious and uncommitted inhabitant of a wasteland of self-indulgence.[95]
The irrelevance
of the game, though unnoticed internally, will be readily perceived from the outside
by any minimally lucid observer. Those who possess sufficient discernment and preserved
intellectual integrity not to be deceived will keep their distance. Thus, if
the process continues, what will remain at the forefront of creativity will be those
impervious in their disconnection from reality, albeit endowed with imagination
and computational capacity, an attribute that must not be confused with intelligence,
here understood as the “capacity to apprehend truth.”[96]
In this way, analytic philosophy will be transformed into a Tower of Babel of innumerable
tongues: a headless turkey spinning around aimlessly.
Let us now cast a glance at the more distant
causes of decline. In the search for explanation, we may recall what Max Weber
foresaw as the possible outcome of the disenchantment of the world (Entzauberung
der Welt). In his account, the world was once perceived as infused with magic
and largely governed by religious institutions. Over time, however, it was progressively
demystified, especially with the rise of capitalism, which either changed the
role of those institutions to foster it through the Protestant ethic or eroded
that role, often through nihilism. Disenchantment goes hand in hand with the bureaucratization,
rationalization, and desacralization of human life. Although
Weber acknowledged both the inevitability and advantages of this process, he also
regarded it negatively, as a loss of communal bonds. Human beings, trapped in
the “iron cage” of bureaucratization, become mere cogs in a vast machinery: small
cogs whose greatest ambition is to become larger ones. At the end of the
process, those who are intellectual cogs turn into “specialists without spirit,
sensualists without heart, nullities that imagine themselves to have attained a
level of civilization never before achieved.”[97]
In
this light, analytic philosophy may be seen as imprisoned within callous
bureaucratic institutions whose hierarchical rigidity inhibits upheaval and fosters
harmless fragmentation into countless micro‑philosophical theories. Such
proliferation obscures the very face of knowledge, for great philosophical problems
cannot be resolved through small solutions.
Yet disenchantment alone may not suffice to
explain the confinement of culture and philosophy within an iron cage. Philosophers
influenced by Marxism, such as Theodor Adorno and Herbert Marcuse[98], would add to Weber’s account the phenomenon
of the culture industry of late capitalism. The system was designed to
keep individuals distracted and passive, alienating them from their human
essence so they might better serve the machinery of late capitalism.[99] Moreover, they would easily argue that the
culture industry has penetrated universities, estranging researchers from modes
of thought with genuine critical potential, modes that could challenge the
narrow techno‑scientific utility now demanded. Natural scientists are exposed
to scientistic philosophizing that suggests nothing exists beyond their
restricted universe, while scholars in the human sciences, philosophers
included, are encouraged to amuse themselves with “funny hypotheses” of a
micro‑philosophical nature, leaving them alienated from any form of totalizing,
potentially critical thought. Furthermore, the conditions for authentic consensus are
constrained by a closed intellectual hierarchy that excludes potentially
dissonant voices.
The historian Jacques Barzun suggested that
we are living at the end of a civilizational arc that began with the Renaissance.[100] It
seems plausible. Furthermore, this end is coinciding with the end of the
Pax Americana that began after the Second World War, which is also the end of the
illusion of liberal democratic capitalism, for the better, though what comes
next is, for now, terra incognita.[101]
Must
the future of our philosophy culminate in a kind of High Middle Ages, as did
the Roman decline described by Gibbon? Or in “the polar night of icy darkness”[102]
described by Max Weber? Not necessarily. History shows that
dusk is followed by night, but night is followed by dawn. Weber himself
believed that society ultimately holds the key to opening the iron cage of
rationalization and bureaucratization imposed within capitalist society, perhaps,
according to him, through “a great rebirth of old ideas and ideals.”[103] Consequently,
the present lack of disruptive innovations in philosophy need not be regarded
as an irreversible fate.
In the second part of her article, Susan Haack proposed an alternative
path that I have, in fact, been attempting to follow. She emphasized the
importance of a sufficiently comprehensive treatment of problems and of a
procedure by successive approximations[104]
guided by the assumption of consilience. Instead of dividing in order to conquer,
one should try to conquer a reasonably broad issue so as not to need to divide in
arbitrary ways. As Wittgenstein once observed to himself:
Do not get involved in partial problems, but always take flight toward
where there is a free view of the whole, of the great single problem, even if
that vision is not yet clear.[105]
We
can compare this procedure to the art of painting: it begins with the
conception as a whole, a vague display of shapes, colors, light, and shadows...
Gradually, the forms are outlined with greater precision, errors are detected
and corrected, details and shades are added, and what at first seemed like incomprehensible
smudges is transformed into clear, convincing, and truly beautiful images. They
may be oil paintings, collages, frescoes... Habermas’s work, for example, seems
to me comparable to a series of large panels with some moments of great
density, such as that of his universal pragmatics.
Yet to work with philosophy in this way
requires the right atmosphere, and that atmosphere is not given to us. As
Jacques Barzun argued, we live in a period of cultural decline and exhaustion,
in which complex problems are treated as if they were simple. This tendency
extends easily to our present hand-to-mouth philosophy, which, as a rule, lacks
the patience and depth that genuine thought demands.
A common explanation for
philosophy’s contemporary difficulties is the claim that the exponential growth
of knowledge has rendered its traditional trajectory impossible. Yet this can
be doubted. What has advanced exponentially is not so much science itself as
applied science and technology. And who can say whether a social crisis,
profound enough to be salutary, combined with AI and other technological innovations,
might not restore high culture from its current disgrace? After all, Hegel, writing
in an age of intense political and religious conflict, once observed: “The
necessity of philosophy can only arise in times of crisis, when the power of
unification has vanished from human life and oppositions, having lost their
living resemblance and reciprocal reaction, become independent.”[106]
[1] From the “Warning Note” that appeared after Bertrand
Russell’s intellectual autobiography entitled My Philosophical Development.
[2] “Now when we consider the whole, such and such
a form realized in this flesh and these bones, so that this is Callias or Socrates—they
differ by virtue of their matter (for matter is different in different individuals),
but they are the same in form; for the form is indivisible.” Metaphysics
1034a 5–8 (my italics). Now, if the form is indivisible, it must be capable of
participating in a diversity of substances, which seems unreasonable for Aristotle.
For an exposition of Aristotle’s Platonic relapses, see W. K. C. Guthrie, A
History of Greek Philosophy, vol. V, chap. XIII. See also A. E. Taylor, Aristotle.
[3] A. Kenny, Aquinas on Mind, chap. 1, p.
4.
[4] J. L. Austin, Philosophical Papers,
p. 232.
[5] The book entitled How to Do Things with Words was posthumously published in 1962.
[6] Philosophische Untersuchungen, I, sec. 126.
[7] See Auguste Comte, Cours de Philosophie
Positive, Oeuvres, vol. I. I do not follow his classification in detail,
since he committed at least two obvious errors: the inclusion of astronomy (an
applied science) among the sciences I call basic, and the exclusion of
psychology, which was still practically nonexistent as a science in his time.
The principles of classification, however, remain valid.
[8] It should be noted that ‘sociology’ is better understood
in a broad sense, as conceived by Durkheim, for whom it encompassed political
economy, demography, the history of law, the history of religions, and related fields.
[9] I use this expression instead of ‘epistemic
rupture’ since the latter is usually understood as a rupture occurring inside
and not at the beginning of a science.
[10] J. R. Searle noted that it is a mistake to believe
that, because objects of internal experience have an ontologically subjective
mode of existence, they must also be epistemically subjective, preventing their
access by science. Examples: pain, pleasure, visual experiences, beliefs, intentions...
are ontologically subjective phenomena, but epistemically objective. See his Mind,
Language and Society: Philosophy in the Real World, pp. 43-45.
[11] Fragment 2, Diels-Kranz 28 B2.
[12] Aristotle, Metaphysics 1005b 19 ss.
[13] Aristotle, Physics, Book
VI, 2.
[14] G. S. Kirk, J. E. Raven & M. Schofield, The Presocratic Philosophers, pp.
133-134. See discussion in W. K. C. Guthrie, A History of Greek Philosophy,
vol. I, p. 103.
[15] Karl Popper, “Back to the Pre-Socratics”, in
his book, Conjectures and Refutations, p. 138.
[16] Anthony Kenny, A New History of Western
Philosophy, vol. I, p. 25.
[17] The first to develop this hypothesis, now
discredited among cosmologists, was R. C. Tolman in his classic Relativity,
Thermodynamics, and Cosmology, sec. 174, p. 439 (1934). Tolman’s suggestion
is now questioned, and other, even more ambitious and equally hypothetical
ideas have emerged, such as the Big Bang caused by the collision between three-dimensional
membranes.
[18] G. S. Kirk, J. E. Raven & M. Schofield
(eds.), The Presocratic Philosophers, pp. 140-142.
[19] See Anthony Kenny, A New History of Western
Philosophy, vol. I, pp. 22-23. In reference to Darwin’s salutation, Kenny directs
the reader to the appendix of the sixth edition of The Origin of Species.
[20] Plato, Republic, IV, 446a ss.
[21] Plato, Phaedrus 246a ss.
[22] Paul D. MacLean, The Triune Brain in Evolution: Role in Paleocerebral
Functions.
[23] Consider, for instance, Jeremy Genovese’s influential
critical article, “Snakes and Ladders: A Reappraisal of the Triune Brain
Hypothesis.” According to him, brain evolution should be seen as adaptive,
mosaic, and context-dependent, and not layered, as in MacLean’s theory. But why
not a dual framework? One about ranking, the other about process? If not, we
should ask whether Genovese's anti-anthropomorphic argument isn’t, to a certain
extent, conflating ecological success with neural sophistication. For if he is right,
the insect nervous system should be more developed than that of Homo
neanderthalensis, since it has demonstrated greater adaptive success.
[24] S. Freud, The Ego and the Id.
[25] That is, in what in his Physics
was not metaphysics, and also in On the Heavens.
[26] Aquinas on Mind, pp. 4-5.
[27] For Kant, a priori knowledge is that which
is independent of sensory experience, as well as necessary and universal. Kritik
der reinen Vernunft, Einleitung, B 1-3.
[28] As is well known, the existence of God, the Soul, and freedom
was, for Kant, postulated by practical reason, even though it could not be postulated by pure
reason. For him, morality depended on the acceptance of these postulates. (See the second
part of the Critique of Practical Reason.) Regarding this change, Russell sarcastically noted
that, although Kant had been awakened from his dogmatic slumber by David Hume, he soon
found a soporific that allowed him to sleep again, adding that most
people never manage to free themselves from the truths imbibed in their mother’s womb. Cf.
Bertrand Russell, A History of Western Philosophy.
[29] This is a reference, not to the ambitious common sense
that contradicts science (such as “The sun revolves around the earth” or “time
is always the same for any observer”), but to the everyday common sense, which
is presupposed even for our learning of science, such as “my body exists,” “There
are other human beings,” or “The earth has existed for a long time.” It is continuous
with science, which would be impossible if we rejected its presuppositions. D.
M. Armstrong called it “Moorean common sense”, in deference to G. E. Moore’s article,
“A Defense of Common Sense.” See Claudio Costa, Philosophical Semantics:
Reintegrating Theoretical Philosophy, chap. II. See also Susan Haack, “The Long
Arm of Common Sense”.
[30] Keith Lehrer, Theory of Knowledge.
See also William James, Some Problems of Philosophy, p. 23.
[31] Aquinas on Mind, p. 5.
[32] Aquinas on Mind, p. 9. I agree with
Kenny’s motivation, but not with his conclusion. My aim is to show that the
belief that the progressist thesis endangers the scope of philosophy confuses
the nature of scientific answers (i.e., answers consensually attainable) that
may eventually replace the central problems of philosophy – which are questions
whose ultimate nature we do not know – with the undertakings of already
existing particular sciences, such as physics, whose nature we already know.
[33] Friedrich Nietzsche set forth demystifying
insights on this question in Human, All Too Human, chap. IV, sec. 165.
[34] Walter Isaacson, Einstein, His Life and Universe,
p. 122.
[35] See J. Passmore, “Philosophy”, in Paul Edwards,
The Encyclopedia of Philosophy, vol. VI, pp. 219-20.
[36] The Logic of Scientific Discovery, Part I, Chap. I, 6.
[37] See K. R. Popper, Conjectures and
Refutations, pp. 339–340. The standard example of decisive falsification
employed by Popper was the deflection of starlight observed during the 1919
eclipse. Ironically, precisely this test would later be considered too unreliable
to be probative. (Cf.
Martin Gardner, Relativity Explained, Appendix, pp. 96-7).
[38] See K. R. Popper, The
Logic of Scientific Discovery, chap. II.
[39] “What is Science?”,
p. 42 (my italics). Science, as a corpus of knowledge, as what
scientists do, and as an institution, wrote Ziman, “cannot be treated separately,
any more than a solid can be reconstructed from its projection onto different
Cartesian planes.”
(ibid. p. 42).
[40] John Ziman, Public Knowledge, p. 24.
[41] The Sociology of Science, chap. 13, p.
267 ss.
[42] The Sociology of Science, p. 270.
[43] “Wahrheitstheorien” (1972). See also
Truth and Justification.
[44] Some philosophers of science downplay the role of truth
in science. Bas van Fraassen (as an anti-realist), for instance, replaces the truth
of a theory with its empirical adequacy, understood as the acceptance of truth regarding
observables rather than ultimate truth. But one can believe one knows the truth
about unobservables (as a realist) without commitment to any ultimate truth, as if
empirical adequacy were nothing more than the acceptance of truth by scientists
involved in a research program. See his The Scientific Image, p. 12.
[45] There are explainable exceptions,
such as Nietzsche. Perhaps the most curious was Wittgenstein, who knew almost nothing
of the history of philosophy but had excellent ears and practically ran the weekly
lectures at Cambridge, where the best of analytical philosophy gathered. With
one foot in the university and the other in the world of life, which he experienced
in depth, he easily perceived the academic longing to go far beyond the limits
of natural language and the idleness of such attempts, hence inventing his
"therapeutic philosophy”. However, although Nietzsche and Wittgenstein
did not master a tradition, they founded new philosophical ways of seeing
tradition.
[46] Scientism as the result of overconfidence
in formalisms can be found, for instance, in Scott Soames’ book, Reference
and Description: The Case Against Two-Dimensionalism. Overconfidence in empirical
science leading to reductionism can be exemplified in Sam Harris’s popular book
Free Will, which explicitly defends the view that Libet’s findings show
our sense of conscious choice is illusory.
[47] Kevin Mulligan, Peter
Simons, and Barry Smith, “What is Wrong with Contemporary Philosophy?”, p. 4.
See also Dennett, “Higher-Order Truths about Chmess”.
[48] See Susan Haack, “The Fragmentation of Philosophy: The Road
to its Reintegration.”
[49] “Afterword: Must Do Better”, in The Philosophy
of Philosophy, pp. 249-280.
[50] Consilience: The Unity of Knowledge.
[51] Susan Haack, “The Fragmentation of Philosophy: The Road
to its Reintegration”, in The Fragmentation of Philosophy, p. 15. In her
use of the concept of consilience, Haack was influenced by the work of the biologist
Edward Wilson.
[52] Form, Matter, Substance.
[53] Continental philosophy,
or what remains of it, has likewise been in decline. Consider the case of three
of its current exponents, such as Slavoj Žižek, Markus Gabriel, and Quentin Meillassoux.
Žižek, influenced by Hegel, Marx, and Jacques Lacan, has advanced an imaginative
and socially relevant critique. Yet his theoretical approach becomes “Lacanian”
in the sense that it remains caught in expressive conceptual entanglements
without overcoming them. (As one critic observed, “intelligent enough to formulate
incisive critiques, but not sufficiently so to construct a consistent theory”.)
Markus Gabriel, in turn, draws upon a wide range of historical and contemporary
texts, which he remasters in order to produce “pseudo-thaumas” – effects
of wonder – in a juvenile public, more impressionable than demanding.
Meillassoux, finally, elaborates elegant intellectual fantasies that, at bottom,
continue the postmodernist tradition. His intricate provocations, when closely
examined, appear far removed from the profound originality of Hume, the philosopher
upon whom he relies. Originality is truly explosive only when combined with relevance.
[54] Jenny Teichman, “Don’t be Cruel or Reasonable”, in Polemical
Papers, p. 134. D. W. Hamlyn, A History of Western Philosophy, p.
398. (The English original was published in 1987.)
[55] Susan Haack, “Scientistic Philosophy: No; Scientific
Philosophy: Yes.”
[56] Susan Haack, “Fragmentation of Philosophy:
The Road to Reintegration”, in Reintegrating Philosophy, chap. 1, p. 9.
[57] Susan Haack, “Fragmentation of Philosophy:
The Road to Reintegration”, in Reintegration of Philosophy, chap. 1, 1.3.
See also Kevin Mulligan, Peter Simons, and Barry Smith, “What is Wrong with Contemporary
Philosophy”. A defense of fragmentation as inevitable was presented by Scott Soames
in The Analytic Tradition in Philosophy, vol. 3, Appendix.
[58] Susan Haack, “Fragmentation of Philosophy:
The Road to Reintegration”.
[59] As Jenny Teichman noted, “British and American philosophy
has recently become extraordinarily scholastic, obsessed with questions about how
many philosophers can sit on a niggle”, p. 134.
[60] The Blue Book, p. 18 (1933-1934).
[61] Susan Haack, “The Fragmentation of Philosophy:
the Road to its Reintegration”, p. 20.
[62] Susan Haack, “Fragmentation of Philosophy: The Road to
Reintegration”, p. 21. It is curious to note that metaphilosophers belonging to
the current mainstream, such as Timothy Williamson (2022), or the authors of An
Introduction to Metaphilosophy (2013), do not cite the well-researched and
controversial metaphilosophical texts of Susan Haack. I assume they lack good counterarguments.
[63] See Susan Haack, “Fragmentation of Philosophy: The Road to
Reintegration”, in Reintegration of Philosophy, pp. 5-14.
[64] For instance, the book Marx, by James
Edwards and Brian Leiter (2025), and Leibniz, by Nicholas Jolley (2005), which
I have had the pleasure of reading.
[65] Claude Panaccio, Ockham’s Nominalism (2023).
[66] See Form, Matter, Substance (2023).
[67] There are possible worlds in which someone
else married Pythias, but there cannot be a possible world in which Aristotle
is not the referent of ‘Aristotle’.
[68] This example was
first presented by Bertrand Russell, who saw in it only a bad justification, and
not a challenge to the tripartite definition.
[69] I perceived the obvious solution as soon
as I learned of the problem. But I considered it too obvious not to have been noticed
before, so I went on to investigate the historical responses to the problem.
The most elaborate one I found is in Fogelin’s book. What remained for me was
to refine and formalize his solution. After my article was published in Ratio
(2010), I sent it to Fogelin, who believed that this version had strengthened
what he himself thought… See Chapter V of my book Lines of Thought.
[70] Here one could object that my conception
of knowledge is contextualist: what warrants knowledge is what we know now and
best. Indeed, there is no absolute knowledge or truth, but we can compare claims
to knowledge and truth.
[71] Philosophical Investigations I, sec. 201.
[72] Saul Kripke, Wittgenstein on Rules and Private
Language, p. 9.
[73] “Simplicity and the Skeptical Challenge to Meaning”.
[74] I discussed the case in my book Philosophical
Semantics, pp. 346-349.
[75] Susan Haack, Manifesto of a Passionate
Moderate, p. 188.
[76] In fact, they
usually do know! It’s about finding a small protrusion that males generally
have in the cloaca, as well as slight differences in the colour and length of
the feathers.
[77] If reliabilism is understood as the possession
of information that renders an internal justification reliable, whether in the
first or even in the third person, then it should be regarded as a significant
addition to the traditional definition, which requires justification for the
proposition that is known.
[78] For an explanation of why Putnam’s semantic externalism
does not work, see Claudio Ferreira-Costa, How Do Proper Names Really Work?,
pp. 228-236.
[79] Francisco Varela, Evan Thompson, Eleanor
Rosch, The Embodied Mind, p. 213. This is the grounding book on enactivism,
published in 1991.
[80] Daniel Hutto and
Erik Myin, Radicalizing Enactivism: Basic Minds Without Content
(2013).
[81] T. S. Eliot, “Four Quartets”, p. 172.
Sigmund Freud would speak of a lack of the reality principle.
[82] The idea of the extended mind was formally
introduced in the article “The Extended Mind” (1998) by Andy Clark and David
Chalmers.
[83] The strategy of
reasoning can be extended to the laughable: “if the intestine has the function
of digesting food, then preparing food also belongs to the intestine.” “If
the dialysis machine cleans the blood, then it is a real kidney.”
[84] As Kevin Mulligan, Peter Simons, and Barry
Smith have already observed, the principal divisions of philosophy are: analytic
philosophy, continental philosophy, and the history of philosophy. The
difficulty is that these domains do not communicate with one another, which
proves limiting for innovative philosophical work – work that ought to draw
upon the best that each of these traditions has to offer. See “What is Wrong with Contemporary
Philosophy?”
[85] Kant
published his Critique at 57; Habermas published his Theory of Communicative
Action at 52; Searle's best book, Intentionality, was published when he was 51. There
are, however, exceptions: Berkeley, Hume, Schelling, and Kripke published their
best work while still young. Curiously, they did not later publish works of
the same caliber as those of their youth.
[86] Nietzsche, who considered the issue in depth, observed
that genius can be "mediocre," recalling the immense difficulty Beethoven
had in composing, which required him to rewrite passages countless times until they
became incomparable. See his Menschliches, Allzumenschliches, chap. IV.
[87] This occurs not only in philosophy. Physicists
like Carlo Rovelli have criticized the lack of disruptive breakthroughs in
theoretical physics in the last 60 years. See “Is Bad Philosophy Holding Back
Physics?”
[88] Harry Frankfurt, On Bullshit. Curiously, this is also
not a book cited by current metaphilosophers.
[89] See the text by Sigmund Freud entitled The
Future of an Illusion (Die Zukunft einer Illusion).
[90] See Hannah Arendt, Eichmann in Jerusalem.
[91] The History of the Decline and Fall of the
Roman Empire, chap. 2 (on genius).
[92] John Searle described the procedure as follows: If
your argument leads to an absurd conclusion, don’t blame the premises: declare it
a discovery!
[93] “Putnam on Mind and Meaning”.
[94] I know this because I have refuted them.
My refutation of anti-verificationism rests upon the verificationism proposed by
Wittgenstein, which was distorted by the Vienna Circle. The latter constructed
a rigid straw man, which was then, quite rightly, rejected (see my Philosophical
Semantics, chap. V). The defense of a modified version of the traditional
definition of knowledge – capable of dissolving the Gettier problem without remainder
– can be found in Lines of Thought, chap. V. My defense of a Fregean view
of indexical utterances against John Perry’s essential indexicals is presented in
Lines of Thought, chap. IV. My critique of Kripke, together with the critique of the necessary a
posteriori, supplemented by the development of a more consistent and refined
theory of reference, is presented in How Do Proper Names Really Work? As
for Hilary Putnam’s semantic externalism, I have carefully dismantled it in
chapter 8 of Cognitivismo semântico: filosofia da linguagem sob nova chave.
(According to Searle, Putnam himself, at the end of his life, confessed to him that
he no longer believed in his argument for externalism.) As for dialetheism, the
errors were so pervasive that I lacked the patience to refute it in writing;
the best that can be said in its favour is that it shows how far an error can lead.
[95] “Everybody shall produce written research
in order to live, and it shall be decreed a knowledge explosion” (Jacques
Barzun), cited by Susan Haack, in her Manifesto of a Passionate Moderate, pp.
188, 192.
[96] In this original sense, intelligence (from
intus legere, “to read within”) must not be confused with the means it
employs – skills such as those measured by IQ.
[97] Max Weber, The Protestant Ethic
and the Spirit of Capitalism, p. 182.
[98] See Theodor Adorno and Max Horkheimer, Dialectic of Enlightenment.
See also Herbert Marcuse, One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society.
[99] According to Herbert Marcuse,
technological society resists high culture because such culture encourages
critical reflection and does not contribute to productivity. In his view, late
capitalism produces repressive desublimation: instinctual drives are
released, but only in ways that reinforce consumerism and efficiency. Thus,
sexual pleasure displayed in a luxury car is celebrated, since it sustains the
machinery of production and consumption, while the romantic love of Tristan und
Isolde is dismissed as laughable, because it generates neither profit nor
efficiency and carries a potentially critical force. See One-Dimensional Man:
Studies in the Ideology of Advanced Industrial Society.
[100] Jacques Barzun, From
Dawn to Decadence: 500 Years of Western Cultural Life, 1500 to the Present.
[101] We have reasons to
believe that the present Chinese system is a legitimate alternative to our populist
democracies, since its meritocratic hierarchy recalls Plato’s Republic.
That this system can be refined and may, in the end, bring about something like
a confederation of nations justly satisfying free human aspirations is our most
optimistic expectation.
[102] Max Weber, Political Writings, p. xvi.
[103] Max Weber, The Protestant Ethic and the Spirit
of Capitalism, p. 236.
[104] Susan Haack,
“Scientistic Philosophy, No; Scientific Philosophy, Yes”,
p. 30.
[105] Personal notebooks, 1931.
[106] G. W. F. Hegel, Differenz des Fichteschen und Schellingschen Systems
der Philosophie, p. 9.
