Philosopher’s Carnival No. 146

Hello new friends, philosophers, and like-minded internet creatures. This month TPM is hosting the Philosopher’s Carnival.

Something feels wrong with the state of philosophy today. From whence hast this sense of ill-boding come?

For this month’s Carnival, we shall survey a selection of recent posts that are loosely arranged around the theme of existential threats to contemporary philosophy. I focus on four. Pre-theoretic intuitions seem a little less credible as sources of evidence. Talk about possible worlds seems just a bit less scientific. The very idea of rationality looks as though it is being taken over by cognate disciplines, like cognitive science and psychology. And some of the most talented philosophers of the last generation have taken up arms against a scientific theory that enjoys a strong consensus. Some of these threats are disturbing, while others are eminently solvable. All of them deserve wider attention.

1. Philosophical intuitions

Over at Psychology Today, Paul Thagard argued that armchair philosophy is dogmatic. He lists eleven unwritten rules that he believes are part of the culture of analytic philosophy. Accompanying each of these dogmas he proposes a remedy, ostensibly from the point of view of the sciences. [Full disclosure: Paul and I know each other well, and often work together.]

Paul’s list is successful in capturing some of the worries that are sometimes expressed about contemporary analytic philosophy. It acts as a bellwether, a succinct statement of defiance. Unfortunately, I do not believe that most of the items on the list hit their target. But I do think that two points in particular cut close to the bone:

3. [Analytic philosophers believe that] People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don’t trust your intuitions.

4. [Analytic philosophers believe that] Thought experiments are a good way of generating intuitive evidence. Natural alternative: use thought experiments only as a way of generating hypotheses, and evaluate hypotheses objectively by considering evidence derived from systematic observations and controlled experiments.

From what I understand, Paul is not arguing against the classics in analytic philosophy. (Carnap, e.g., was not an intuition-monger.) He’s also obviously not arguing against the influential strain of analytic philosophers who are descendants of Quine — indeed, he is one of those philosophers. Rather, I think Paul is worried that contemporary analytic philosophers have gotten a bit too comfortable in trusting their pre-theoretic intuitions when they are prompted to respond to cases for the purpose of delineating concepts.

As Catarina Dutilh Novaes points out, some recent commentators have argued that no prominent philosophers have ever treated pre-theoretic intuitions as a source of evidence. If that’s true, then it would turn out that Paul is entirely off base about the role of intuition in philosophy.

Unfortunately, there is persuasive evidence that some influential philosophers have treated some pre-theoretic intuitions as a source of evidence about the structure of concepts. For example, Saul Kripke (in Naming and Necessity, 1972, p. 42) explained that intuitiveness is the reason why there is a distinction between necessity and contingency in the first place: “Some philosophers think that something’s having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of it, myself. I really don’t know, in a way, what more conclusive evidence one can have about anything, ultimately speaking”.

2. Philosophical necessity

Let’s consider another item from Paul’s list of dogmas:

8. There are necessary truths that apply to all possible worlds. Natural alternative: recognize that it is hard enough to figure out what is true in this world, and there is no reliable way of establishing what is true in all possible worlds, so abandon the concept of necessity.

In this passage Paul makes a radical claim. He argues that we should do away with the very idea of necessity. What might he be worried about?

To make a claim about the necessity of something is to make a claim about its truth across all possible worlds. Granted, our talk about possible worlds sounds kind of spooky, but [arguably] it is really just a pragmatic intellectual device, a harmless way of speaking. If you like, you could replace the idea of a ‘possible world’ with a ‘state-space’. When computer scientists at Waterloo learn modal logic, they replace one idiom with another — seemingly without incident.
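The Waterloo observation can be made concrete. Below is a minimal sketch of Kripke-style possible-worlds semantics rendered as a plain state-space, in Python; the worlds, the accessibility relation, and the proposition ‘raining’ are all invented here for illustration, not drawn from any particular text:

```python
# A "possible world" is just a state in a state-space, and an
# accessibility relation says which states are reachable from which.
# Necessity ("box") is truth at every accessible state; possibility
# ("diamond") is truth at some accessible state.

worlds = {"w1", "w2", "w3"}

# Accessibility relation: from each world, which worlds count as possible?
access = {
    "w1": {"w1", "w2"},
    "w2": {"w2"},
    "w3": {"w1", "w3"},
}

# Valuation: at which worlds is each atomic proposition true?
valuation = {
    "raining": {"w1", "w2"},
}

def holds(prop, world):
    """True iff the atomic proposition is true at this world."""
    return world in valuation[prop]

def necessarily(prop, world):
    """Box: true at `world` iff `prop` holds at every accessible world."""
    return all(holds(prop, w) for w in access[world])

def possibly(prop, world):
    """Diamond: true at `world` iff `prop` holds at some accessible world."""
    return any(holds(prop, w) for w in access[world])
```

On this reading, ‘necessarily p’ is nothing spookier than a universal quantification over reachable states — which is exactly how the computer scientists take it.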

If possible worlds semantics is just a way of speaking, then it is not objectionable. Indeed, the language of possible worlds seems to be cooked into the way we reason about things. Consider counterfactual claims, like “If Oswald hadn’t shot Kennedy, nobody else would’ve.” These claims are easy to make and come naturally to us. You don’t need a degree in philosophy to talk about how things could have been; you just need some knowledge of a language and an active imagination.

But when you slow down and take a closer look at what has been said there, you will see that the counterfactual claim involves discussion of a possible (imaginary) world where Kennedy had not been shot. We seem to be talking about what that possible world looks like. Does that mean that this other possible world is real — that we’re making reference to this other universe, in roughly the same way we might refer to the sun or the sky? Well, if so, then that sounds like it would be a turn toward spooky metaphysics.

Hence, some philosophers seem to have gone a bit too far in their enthusiasm for the metaphysics of possible worlds. As Ross Cameron reminds us, David K. Lewis argued that possible worlds are real:

For Lewis, a world at which there are blue swans is a world with blue swans as parts, and so a world with round squares is a world with round squares as parts.  And so, to believe in the latter world is to believe in round squares.  And this is to raise a metaphysical problem, for now one must admit into one’s ontology objects which could not exist.  In brief, impossible worlds for Lewis are problematic because of how he thinks worlds represent: they represent something being the case by being that way, whereas his opponents think worlds represent in some indirect manner, by describing things to be that way, or picturing them to be that way, or etc.

And to make matters worse, some people even argue that impossible worlds are real, ostensibly for similar reasons. Some people…

…like Lewis’s account of possibilia but are impressed by the arguments for the need for impossibilia, so want to extend Lewis’s ontology to include impossible worlds.

Much like the White Queen, proponents of this view want to believe impossible things before breakfast. The only difference is that they evidently want to keep at it all day long.

Cameron argues that there is a difference between different kinds of impossibility, and that at least one form of impossibility cannot be part of our ontology. If you’re feeling dangerous, you can posit impossible concrete things, e.g., round squares. But you cannot say that there are worlds where “2+2=5” and still call yourself a friend of Lewis:

For Lewis, ‘2+2=4’ is necessary not because there’s a number system that is a part of each world and which behaves the same way at each world; rather it’s necessary that 2+2=4 because the numbers are not part of any world – they stand beyond the realm of the concreta, and so varying what happens from one portion of concrete reality to another cannot result in variation as to whether 2+2 is 4.

While Cameron presents us with a cogent rebuttal to the impossibilist, his objection still leaves open the possibility that there are impossible worlds — at least, so long as the impossible worlds involve exotic concrete entities like the round square and not incoherent abstracta.

So what we need is a scientifically credible account of necessity and possibility. In a whirlwind of a post over at LessWrong, Eliezer Yudkowsky argues that when we reason using counterfactuals, we are making a mixed reference which involves reference to both logical laws and the actual world.

[I]n one sense, “If Oswald hadn’t shot Kennedy, nobody else would’ve” is a fact; it’s a mixed reference that starts with the causal model of the actual universe where [Oswald was a lone agent], and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like ‘six’ for the product of apples on the table, is not actually present anywhere in the universe.

Yudkowsky argues that this is part of what he calls the ‘great reductionist project’ in scientific explanation. For Yudkowsky, counterfactual reasoning is quite important to the project and prospects of a certain form of science. Moreover, claims about counterfactuals can even be true. But unlike Lewis, Yudkowsky doesn’t need to argue that counterfactuals (or counterpossibles) are really real. This puts Yudkowsky on some pretty strong footing. If he is right, then it is hardly any problem for science (cognitive or otherwise) if we make use of a semantics of possible worlds.
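Yudkowsky’s ‘counterfactual surgery’ is, at bottom, the intervention operation familiar from Pearl-style causal models. The toy sketch below (the variable names and the lone-gunman causal model are my own, for illustration; this is a reading of the idea, not Yudkowsky’s formalism) shows how the Oswald counterfactual comes out true by surgery on a model, rather than by reference to a really existing alternate world:

```python
# A structural causal model: each variable is computed from its parents
# by a fixed function. A counterfactual is evaluated by "surgery":
# sever one variable from its usual causes, fix its value, and
# recompute everything downstream.

def run_model(interventions=None):
    """Evaluate the causal model, optionally overriding some variables."""
    interventions = interventions or {}
    values = {}

    def setvar(name, default):
        # An intervened-on variable ignores its usual causes.
        values[name] = interventions.get(name, default)

    setvar("oswald_shoots", True)    # what actually happened
    setvar("second_shooter", False)  # the lone-gunman causal model
    setvar("kennedy_shot",
           values["oswald_shoots"] or values["second_shooter"])
    return values

# The actual world: Kennedy is shot.
actual = run_model()
assert actual["kennedy_shot"] is True

# Counterfactual surgery: set oswald_shoots to False and recompute.
counterfactual = run_model({"oswald_shoots": False})
# With no second shooter anywhere in the model, Kennedy is not shot:
assert counterfactual["kennedy_shot"] is False
```

The counterfactual’s truth is fixed jointly by the causal model of the actual world and the logical operation performed on it — the ‘mixed reference’ in the quoted passage — with no further ontology required.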

Notice that for Yudkowsky’s project to work, there has to be such a thing as a distinction between abstracta and concreta in the first place, such that both are the sorts of things we’re able to refer to. But what, exactly, does the distinction between abstract and concrete mean? Is it perhaps just another way of upsetting Quine by talking about the analytic and the synthetic?

In a two-part analysis of reference [here, then here], Tristan Haze at Sprachlogik suggests that we can understand referring activity as contact between nodes belonging to distinct language-systems. In his vernacular, reference to abstract propositions involves the direct comparison of two language-systems, while reference to concrete propositions involves the coordination of systems in terms of a particular object. But I worry that unless we learn more about the causal and representational underpinnings of a ‘language-system’, there is no principled reason that stops us from inferring that his theory of reference is actually just a comparison of languages. And if so, then it would be well-trod territory.

3. Philosophical rationality

But let’s get back to Paul’s list. Paul seems to think that philosophy has drifted too far away from contemporary cognitive science. He worries that philosophical expertise is potentially cramped by cognitive biases.

Similarly, at LessWrong, Lukeprog worries that philosophers are not taking psychology very seriously.

Because it tackles so many questions that can’t be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn’t: we generally are as “stupid and self-deceiving” as science assumes we are. We’re “predictably irrational” and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one’s rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn’t seem so. I don’t see much Kahneman & Tversky in philosophy syllabi — just light-weight “critical thinking” classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don’t like. So what’s really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.

In some sense or other, Luke is surely correct. Philosophers really should be paying close attention to the antecedents of (ir)rationality, and really should be training their students to do exactly that. Awareness of cognitive illusions must be a part of the philosopher’s toolkit.

But does that mean that cognitive science should be a part of the epistemologist’s domain of research? The answer looks controversial. Prompted by a post by Leah Lebresco, Eli Horowitz at Rust Belt Philosophy argues that we also need to take care that we don’t just conflate cognitive biases with fallacies. Instead, Horowitz argues that we ought to make a careful distinction between cognitive psychology and epistemology. In a discussion of a cognitive bias that Lebresco calls the ‘ugh field’, Horowitz writes:

On its face, this sort of thing looks as though it’s relevant to epistemology or reasoning: it identifies a flaw in human cognition, supports the proposed flaw with (allusions to) fairly solid cognitive psychology, and then proceeds to offer solutions. In reality, however, the problem is not one of reasoning as such and the solutions aren’t at all epistemological in nature… it’s something that’s relevant to producing a good reasoning environment, reviewing a reasoning process, or some such thing, not something that’s relevant to reasoning itself.

In principle, Eli’s point is sound. There is, after all, at least a superficial difference between dispositions to (in)correctness, and actual facts about (in)correctness. But even if you think he is making an important distinction, Leah seems to be making a useful practical point about how philosophers can benefit from a change in pedagogy. Knowledge of cognitive biases really should be a part of the introductory curriculum. Development of the proper reasoning environment is, for all practical purposes, of major methodological interest to those who teach how to reason effectively. So it seems that in order to do better philosophy, philosophers must be prepared to do some psychology.

4. Philosophical anti-Darwinism

The eminent philosopher Thomas Nagel recently published a critique of Darwinian accounts of evolution through natural selection. In this effort, Nagel joins Jerry Fodor and Alvin Plantinga, who have also published philosophical worries about Darwinism. The works in this subgenre have by and large been thought to be lacking in empirical and scholarly rigor. This trend has caused a great disturbance in the profession, as philosophical epistemologists and philosophers of science are especially sensitive to ridicule they face from scientists who write in the popular press.

Enter Mohan Matthen. Writing at NewAPPS, Mohan worries that some of the leading lights of the profession are not living up to expectations.

Why exactly are Alvin Plantinga and Tom Nagel reviewing each other? And could we have expected a more dismal intellectual result than Plantinga on Nagel’s Mind and Cosmos in the New Republic? When two self-perceived victims get together, you get a chorus of hurt: For recommending an Intelligent Design manifesto as Book of the Year, Plantinga moans, “Nagel paid the predictable price; he was said to be arrogant, dangerous to children, a disgrace, hypocritical, ignorant, mind-polluting, reprehensible, stupid, unscientific, and in general a less than wholly upstanding citizen of the republic of letters.”

My heart goes out to anybody who utters such a wail, knowing that he is himself held in precisely the same low esteem. My mind, however, remains steely and cold.

Plantinga writes, “Nagel supports the commonsense view that the probability of [life evolving by natural selection] in the time available is extremely low.” And this, he says, is “right on target.” This is an extremely substantive scientific claim—and given Plantinga’s mention of “genetic mutation”, “time available,” etc., it would seem that he recognizes this. So you might hope that he and Nagel had examined the scientific evidence in some detail, for nothing else would justify their assertions on this point. Sadly, neither produces anything resembling an argument for their venturesome conclusion, nor even any substantial citation of the scientific evidence. They seem to think that the estimation of such probabilities is well within the domain of a priori philosophical thought. (Just to be clear: it isn’t.)

Coda

Pre-theoretic intuitions are here to stay, so we have to moderate how we think about their evidential role. The metaphysics of modality cannot be dismissed out of hand — we need necessity. But we also need for the idea of necessity to be tempered by our best scientific practices.

The year is at its nadir. November was purgatory, as all Novembers are. But now December has arrived, and the nights have crowded out the days. And an accompanying darkness has descended upon philosophy. Though the wind howls and the winter continues unabated, we can find comfort in patience. Spring cannot be far off.

Issue No. 147 of the Philosopher’s Carnival will be hosted by Philosophy & Polity. See you next year.

Comments

  1. I’m not a professional philosopher, but isn’t taking up arms against the consensus view one of the things that philosophers should do?

    After all, if philosophers don’t take up arms against the consensus, who (except cranks) will?

    It is important that someone besides cranks takes up arms against the consensus, since every consensus should be questioned and requestioned and rerequestioned.

  2. What a fascinating series of comments!

    Just a few comments.

    In setting restrictions, does not one risk the danger of pre-establishing an outcome? It seems to me that fences not only keep in, they keep out, also.

    Although I do enjoy TPM, I fear that philosophy, which should lead common sense to wisdom, is becoming an exercise of the mind for academics.

    I read many philosophers and thoughts, yet I find philosophy distancing itself from the masses.

    Mortimer Adler, now deceased, published almost forty books on various concepts of philosophy. He was noted for his ability to write for the common man. He wrote several times about the dangers that philosophy seemed to be encountering as a result of the direction it was taking.

    His personal journey as a philosopher led him from Judaism, to paganism, to Christianity. What was impressive to me was the journey he followed in his search for wisdom.

    In looking at the internet, I believe that around 700,000 degrees in philosophy are held by deists/theists. One of my most satisfying reads, however, is Hitchens and Dawkins.

    It seems to me that philosophy, if it is to regain its dignity with the common man, must re-orient itself in the search for wisdom. Philosophers should no longer allow the sciences to push them aside, and they must question everything, including belief, since so much of mankind has religious beliefs.

    Philosophers are an interesting lot; they are as varied as the different areas of thought.

    I am anticipating some good observation with TPM.

  3. “Something feels wrong with the state of philosophy today. From whence hast this sense of ill-boding come?”

    To the substance later, but my potential delight in seeing ‘whence’ deployed was tempered since it was used incorrectly.

    The second quoted sentence could have been simply written: “Whence this sense of ill-boding?”. The whole sense of ‘where did it come from’ is denoted by ‘whence’.

    Next one might address ‘whither’, as in ‘whither Philosophy’; rather apropos, no?

  4. Hi Tim,

    Although I do enjoy TPM, I fear that philosophy, which should lead common sense to wisdom, is becoming an exercise of the mind for academics.

    I agree with you on this. And thank you for the reference to Adler. But I do believe that many of these technical discussions are part of the ‘pursuit of wisdom’ project.

    It seems to me that the big worry you’re expressing is not over wisdom, but over accessibility. Granted, some of the above post might require some training to figure out. That’s hard to avoid; unlike most posts, a half-decent offering for the Philosopher’s Carnival ought to be responsive to the technical aspects of the ongoing conversation. Regardless, my hope is that some of the above might get all cleared up after a bit of back-and-forth.

    Hi DrCaf,

    The second quoted sentence could have been simply written: “Whence this sense of ill-boding?”. The whole sense of ‘where did it come from’ is denoted by ‘whence’.

    Could have, but my understanding is that it can be used just as a substitute for ‘where’, not always strictly synonymous with ‘from where’. (That’s what my handy Google dictionary search tells me anyway.) But even if that’s a mistaken interpretation of the use of the word, the crime is one of redundancy, not grammar. So it’s not ‘incorrect’ as much as it is ‘uncooperative’ or ‘irritating’.

  5. Just FYI–Mohan Matthen’s review of Nagel’s book (a special 4 page review) will appear in the next issue of The Philosophers’ Magazine.

  6. For those interested, here’s a brand-new SEP entry on intuition: http://plato.stanford.edu/entries/intuition/

  7. ‘From whence’ was thought well enough for the King James Bible and can also be found in the works of Shakespeare and Dryden. Samuel Johnson described this mode of speech as ‘vitious’ but was guilty of using it in the Dictionary in which he criticised the practice (and elsewhere). Dickens and Twain are amongst the ‘greats’ that have employed it since.

    Mr Nelson stands in good company.

  8. Wow. Seems philosophy is generally living up to its reputation for flakiness. Glad to see that some good philosophers are calling out some of the bunk.

    Coincidentally there’s a recent post on Jerry Coyne’s site complaining about the seedier side of evolutionary psychology.

    It’s nice when professionals try to tidy up their own house.

  9. I love this post! It is great to see someone getting at the meat of why Philosophy is important and why even philosophers can be misled by the cognitive errors the field is intended to shed light on.

    A couple of thoughts. 1. Discussions of Cognitive Psychology versus a Philosophy of Cognition seem to me to confuse two separate things. If rationality is possible without a human brain (an idea inherent both to the existence of the discipline of Philosophy and to the notion of alien or machine intelligence), then what we seek in Philosophies of Cognition is not what Cognitive Psychology offers us, a how of human intelligence, but a universal formulation of rational cognition. This would be knowledge that could lead us to better understand what it is the human brain is so capable of simulating, if anything, despite the proclivities of biology and psychology.

    And 2. There is a passage alluding to a possible abandonment of the concept of necessity. Let’s assume that makes sense in any possible world. Would we not abandon necessity because there was good reason to do so? Would that not make it, well, necessary? Is that not a reductio ad absurdum?

  10. I enjoyed reading the essay points. The points are philosophically provocative, and that’s a good thing. Philosophy, in Quinian terms of language meanings, ought to regard itself as utilitarian, I would think. Concepts and ideas expressed in language are phenomenal and have whatever truth-value in ontological relativity they have. While the Universe(1) and supporting quantum phenomenalism may have deep Platonic forms that are unknown or unknowable in the human cognitive paradigm, philosophers are interested in that sort of thing anyway.

    I am not an academic philosopher and have the idea that Darwinianism tends to be used with a politically correct mutually exclusive and exhaustive relationship with faith, the concept of a transcendent God and so forth. That isn’t a very philosophical dialectic.

    Solid-state quantum mechanical phenomena seem to have a temporal order and logical material change, and evolution seems to be an implicit aspect of the existence of spacetime, mass and concatenated or interacting dimensions, yet that does not necessitate there being no deeper transcending explanation for the fact of mass than evolution as the end-for-itself.

    If philosophy is constricted a little by the dogmatic political correctness of scientific Darwinism and cognitive scientific language it should be simple enough to be aware of Sartre’s ideas about epistemology and the history of science nearly always becoming surpassed by deeper knowledge making the past knowledge actually false or woefully incomplete.

    The humor of phenomenalism and constructions of ideas produced with mind is in the conservido being as valid as the libido in searching for eternal verities rather than going with the flow of the temporal social zeitgeist.

  11. As for naturalistic or (better) scientistic anti-intuitionism:

    “My own belief is that a rationalist conception of a priori justification is important and indeed essential for dealing with most or all philosophical issues, that philosophy is a priori if it has any intellectual standing at all.”

    (BonJour, Laurence. In Defense of Pure Reason: A Rationalist Account of A Priori Justification. Cambridge: Cambridge University Press, 1998. p. 106)

  12. Gary, I do not know if very many people make that inference about evolution and faith. e.g., even Dawkins falls short of saying that evolutionary theory is ‘mutually exclusive’ with faith. He thinks that it makes faith in an Abrahamic God implausible, not impossible.

    Myron, BonJour’s stuff is quite interesting, and makes sense from within his conception of the dialectic. He thinks that if you deny that there are apriori reasons, you are unable to make empirical claims about currently unobserved events (e.g., things in the distant past or at a great distance in space). He thinks that you can only make this kind of inference by a complicated sort of ‘if-then’ reasoning. e.g., roughly: “If things are going the way I know them, then they’ll keep going that way.” And [if you reject that] then, skepticism.

    I can tell you right off the bat that Paul’s counterproposal would be that we engage in inference to the best explanation, which is grounded in conceptual coherence, and probably implemented on a cognitive level in terms of a model of parallel constraint satisfaction. Presumably, BonJour would say that the logical structure of ‘inference to the best explanation’ must be expressed in something like a conditional form. But I don’t know if that captures the richness of inference to the best explanation.

  13. There is a distinction between logical or conceptual intuition (analytic intuition) and ontological or metaphysical intuition (synthetic intuition). A moderate empiricist can accept the former as a reliable epistemic source sui generis and reject the latter, claiming that synthetic knowledge about the world cannot be acquired a priori, by means of rational intuition or intellectual insight alone.

  14. Although I am tempted to accept a distinction between analytic uses of sentences and synthetic uses of sentences, I am not even remotely persuaded that pre-theoretic intuitions can be classified in that way. Here is a brief sketch of why I think so.

    1. Pre-theoretic intuitions are either a sui generis category, or they are not.

    2. If pre-theoretic intuitions are a sui generis category, then they only earn additional qualifiers (like rational, physical, and so on) retrospectively, i.e., once we’ve grown out of a state where intuitions alone are guiding our inferences. Such a taxonomy of implicit states would be arrived at through experience, and hence be of no help to the apriorist.

    3. If they are not a sui generis category, then there’s really no point in defending the use of intuitions as a form of evidence, as opposed to full-fledged beliefs or judgments.

    4. Since I do think there is in fact such a thing as a sui generis category that we might call ‘pre-theoretic intuition’, I don’t see how it is even intelligible for the apriorist to craft a taxonomy of implicit states.

  15. What is Thagard’s argument? From what I can tell, he claims that contemporary analytic philosophy has these dogmas without providing any indication of who, if anyone, defends or maintains those claims. It seems excessively charitable to say that is an argument.

  16. Psychologists use “intuition” more broadly than the philosophers do. Not any old hunch is an intuition in the epistemologically relevant sense. The philosophers’ intuitions are reflective intuitions. Pre-reflective, prima-facie intuitions are doubtless much less reliable. Only those propositions which force themselves upon us as necessarily true after explicit reflection on them are intuited. This is not to say that reflective intuitions are infallible, since the irresistible appearance of necessary truth might be deceptive. But fallibility alone doesn’t exclude reliability.

    I fail to see how radical empiricists can reject even logical intuition as an experience-independent source of knowledge without thereby depriving themselves of the possibility of argumentation. For example, on what alternative epistemic basis can they justifiedly accept modus ponens?

    “[R]epudiation of the reliance on a priori insight seems to amount to intellectual suicide.”

    (BonJour, Laurence. In Defense of Pure Reason: A Rationalist Account of A Priori Justification. Cambridge: Cambridge University Press, 1998. p. 115)

  17. The assertion that to intuit a proposition is simply to believe or to be disposed to believe it seems implausible to me.
    See: http://plato.stanford.edu/entries/intuition/#IntBel

    My (apparently) intuiting that p is the cause of my believing that p, but (apparent) intuitings as intellectual experiences are distinct from dispositional states of belief.

  18. Hi Myron,

    Not any old hunch is an intuition in the epistemologically relevant sense. The philosophers’ intuitions are reflective intuitions. Pre-reflective, prima-facie intuitions are doubtless much less reliable.

    No, not always. Philosophers make use of both pre-theoretic and theoretic intuitions. Often, the pre-theoretic intuitions are the ‘first impressions’ you get when you first learn material, when your mind has a grasp on the concept but has not yet been fully conditioned to accept the role it plays in a specialized academic discourse. Such intuitions can be quite valuable, if they can survive the rigors of experience, reflection, and dialogue.

    Of course, pre-theoretic intuitions are not by themselves a basic source of evidence. But they’re important nonetheless; you might say they are ‘probative evidence’. Analogy: intuitions are a ‘source of evidence’ in the same way that TV is a ‘source of entertainment’. It’s unreliable, but sometimes it’s all you’ve got to work with.

    Anyway, you’re correct in thinking that theoretical intuitions are what some (non-mediocre) philosophers treat as minimal evidence. But that’s not the same as treating intuition as a standalone “sui generis” category. Worse, as I suggested above, the distinction between theoretical and pre-theoretical intuitions is only known through experience and not apriori. That’s a problem for the self-styled rationalist.

    I fail to see how radical empiricists can reject even logical intuition as an experience-independent source of knowledge without thereby depriving themselves of the possibility of argumentation. For example, on what alternative epistemic basis can they justifiedly accept modus ponens?

    I’m not a ‘radical empiricist’, so I couldn’t tell you. I can say that I reject your assumption that intuitions are just the same as reflective intuitions, for the philosopher (or for any actual human beings). So I think that intuitions about prompts concerning logical sentences are not always of the theoretical kind.

    Though of course, that does not mean that such pre-theoretic intuitions are sufficient reason for believing modus ponens. Logical sentences are worthy of endorsement and belief just in case you’ve also gone through willful attempts at doubt, consideration of what follows from what, and conversation with different kinds of people. I don’t know, or care, if all of these desiderata count as ‘experience-(in)dependent’ in the same way. They just seem to be what’s going on in practice, in the midst of things.

    The assertion that to intuit a proposition is simply to believe or to be disposed to believe it seems implausible to me.

    OK, but I didn’t make any assertion of that kind.

    Finally, if I can make a plea: please stop quoting Bonjour’s ‘bon mots’ unless you think it really adds something to the conversation!

  19. According to epistemological rationalism, intuition or intellection (“intellectual vision”) is a basic epistemic faculty in virtue of which we can naturally acquire knowledge about the modal essence of reality, i.e. about necessary truths. It is innate to mentally normal people (but not pre-linguistically usable) and prior to taxonomic differentiations between types of intuition. The intuitable truths are necessary in the logical or ontological/metaphysical sense, not in the nomological sense. That is, laws of nature are not discoverable a priori. And, as Kripke has taught us, not all ontologically/metaphysically necessary truths are knowable a priori.

    As for the aspect of necessity, some argue that intuitions needn’t be accompanied by explicit modal awareness, so that there’s a difference between intuitively “seeing” a proposition’s truth and intuitively “seeing” its necessary truth. Can’t an intelligent child or logically uneducated adult intuit simple truths without ever having thought about the concept of necessity? I think they can. However, I also think that even in these cases, the irresistible impression of obviousness involves a sense of alternativelessness, at least in the sense that the thought that the proposition in question could have been false never occurs to the intuiter.
    There is a pre-philosophical (but not pre-linguistic) understanding of the modal auxiliaries and thus of modal strength in addition to mere factuality.

    From the “naturalistic”, i.e. empiristic-scientistic, perspective, anti-intuitionism or anti-apriorism goes hand in hand with modal skepticism or even modal nihilism.
    Of course, the biggest problem for (ontological) naturalists/materialists is to find truthmakers for logically and ontologically necessary truths in a world fundamentally consisting of nothing but matter and energy.
    But that the going gets pretty tough when it comes to the logic, metaphysics, and epistemology of modality is no convincing reason to “abandon the concept of necessity” (Thagard).

    What’s left of metaphysics and ontology if pure reason is rejected as a source of justification/knowledge and modal aspects are regarded as inscrutable or irrelevant?!

    For example, Jonathan Lowe holds that “[t]he a priori part [of ontology] is devoted to exploring the realm of metaphysical possibility, seeking to establish what kinds of things could exist and, more importantly, co-exist to make up a single possible world. The empirically conditioned part seeks to establish, on the basis of empirical evidence and informed by our most successful scientific theories, what kinds of things do exist in this, the actual world. But the two tasks are not independent: in particular, the second task depends upon the first. We are in no position to be able to judge what kinds of things actually do exist, even in the light of the most scientifically well-informed experience, unless we can effectively determine what kinds of things could exist, because empirical evidence can only be evidence for the existence of something whose existence is antecedently possible.”
    (Lowe, E. J. The Four-Category Ontology: A Metaphysical Foundation for Natural Science. Oxford: Oxford University Press, 2006. pp. 4-5)

    Adherents of metaphilosophical scientism such as Thagard, who have turned the motto “Philosophia ancilla theologiae!” into “Philosophia ancilla scientiae!”, certainly reject Lowe’s approach. But I don’t side with them. I’m a pro-scientific but non-scientistic naturalist who demands that metaphysical/ontological theses or theories be fully consistent with the well-confirmed scientific theories available. But I refuse to sweepingly reject intuition/intellection as an epistemic source sui generis and to emigrate from the “philosophers’ paradise”, i.e. the modal sphere of possible worlds.

  20. “…abandon the concept of necessity.” P. Thagard

    The concept of logical, ontological/metaphysical, or nomological necessity, or all of them?

  21. On the one hand, I think you pose useful challenges to Paul’s approach. As I said, I’m sure that competent adults are capable of cognizing a distinction between necessarily true and contingently true. I’m also sure that this distinction is a useful one.

    On the other hand, the rationalist account of this capacity does not look at all appealing.

    What you call ‘intellection’ is not a faculty, but a handsome predicate that we use to describe a variety of mental processes that tend to produce successes in representing things in (what we assume is) the right kind of way. And in general, the language of ‘faculty’ implies something like modularity, which is often quite misleading. A more plausible case would involve a description in terms of cognitive ‘processes’.

    Even so, the idea of rationalism seems troubled. Consider, for instance, your characterization of ‘intellection’ so far.

    First, you describe intellection as being both rational and intuitive. Granted, that’s preferable to describing rational intuition as “intuition in the philosopher’s sense” or whatever, which is misleading, since philosophers use both theoretic and pre-theoretic forms of intuition in productive ways. But it sounds to me like it is a concession: that we’re no longer speaking of intuition itself, but this other introspectable sort of thing.

    Second, you describe the fruits of intellection in terms of alternativelessness, “at least in the sense that the thought that the proposition in question could have been false never occurs to the intuiter.” But that can’t be right — that’s just prejudice. You have to actually have reason to be convinced of the futility of attempting to doubt, and that requires some experience in doubting. So an appeal to intellection with respect to some case only proves itself a virtue once it survives a suitable attempt at doubt. But then it’s not just intuition that’s doing the work, it’s also an interaction with doubt. So either dubitability is a part of the nature of the intellectual process, or dubitability is another process entirely.

    Third, to the extent that you are depending upon distinctively semantic competence, you’re making a tacit appeal to remembered testimony of others. The testimony of others takes on the form of an information profile, and is drawn from a corpus of ostensibly grammatical sentences. You take that as a basis for your intellection. That’s another process, I guess.

    So that’s a bunch of epistemically relevant mental processes. Intellection, plus some form of effortful engagement, plus some kind of semantic competence.

    Also, though it’s true that many naturalists have collapsed the idea of possible worlds, the idea of naturalism does not preclude reference to modality. See, for instance, Yudkowsky’s account of counterfactuals in terms of mixed reference (in the OP), which is true and factive by way of performing a ‘logical’ operation over states of affairs in the actual world. (Though the nature of that logical operation is perhaps up for some dispute. I digress.)

    Like Yudkowsky, I do not have a problem with the idea of necessity, or of a possible worlds semantics. What I have a problem with is talking about the contents of inaccessible worlds as if they had any metaphysical significance. I mean, once you accept that I have been rigidly designated, I suppose it is logically possible that there is a world where I was born as a badger. But who cares? That’s not telling us anything interesting about the metaphysics or ontology of me. The interesting stuff must bear some relation to existence or the actual world in order to be fruitful. Lowe’s first condition depends upon the second just as much as the second upon the first.

    But I refuse to sweepingly reject intuition/intellection as an epistemic source sui generis and to emigrate from the “philosophers’ paradise”, i.e. the modal sphere of possible worlds.

    Well, sure, but what does that mean? I refuse to reject television as a source of entertainment, because when it’s good it’s better than anything. But I still acknowledge that television is mostly awful.

    Philosophy is like that — an ounce of paradise hidden within a ton of purgatory. But man, the paradise sure is something, isn’t it?

  22. The modal auxiliaries are part of ordinary natural language. And even intelligent children can comprehend the basic difference between what just is (happens to be) and what can/cannot/must be.

    I should have mentioned that I use “intellection” in a specific, narrow sense.
    Pure reason as a source of justification or knowledge has three main aspects:

    1. reflection: the act of (deep) thinking
    2. intellection: the (cognitive) experience of (properly and fully) understanding a thought-content (proposition or state of affairs)
    3. intuition: the (cognitive) experience of being appeared to by a thought-content as a necessary truth or fact.

    Rational, reflective intuition as a veridical case of knowledge acquisition involves 1-3: I start thinking deeply about some proposition/state of affairs, then I come to properly and fully understand it, and then I “see” its necessary truth.
    Of course, at least in the case of metaphysical/ontological intuitions, my understanding might be improper or incomplete and my “truth-seeing” might turn out to be nonveridical or deceptive, so that they aren’t infallible. (Are logical or conceptual intuitions infallible? How could my intuition-based beliefs that if p & q, then p, and that bachelors are unmarried turn out to be false?)

    As for the question of dubitability/indubitability, I think an intuition (qua cognitive experience) has in and by itself antisceptical force; that is, its propositional content forces itself upon the intuiter as indubitably true. The intuition prevents you from doubting its truth independently of any actual “doubting-experience”. Intuitions are “doubt-suppressors”, psychologically speaking. This is, of course, not to say that their antisceptical force is absolutely irresistible, that people cannot help but credulously and uncritically succumb to their apparent rational insights, being unable to regard them with epistemological suspicion.

    I wrote: “…at least in the sense that the thought that the proposition in question could have been false never occurs to the intuiter.”
    I should have written: “…at least in the sense that the thought that the proposition in question could have been false doesn’t occur to the intuiter, ceteris paribus.”

    That is, unless there are strong external reasons which make me doubt the veridicality or reliability of my intuitions, I just don’t doubt them, because their propositional content seems undoubtful to me. I’m aware that the anti-intuitionists will object that there are very strong external reasons to doubt their veridicality or reliability always, in principle.

    By the way, it is false and misleading to characterize intuition as (totally) experience-independent, since intuitions qua intuitings are experiences themselves, cognitive experiences. Nevertheless, it is independent of other sorts of experience: perceptual/introspectional/memorial/testimonial experience. However, this is still not completely correct, because intuition entails intellection, and intellection presupposes linguistic knowledge and semantic competence, the non-innate possession of which depends on learning-experience. But this fact alone doesn’t deprive intuition of its a priori character.

  23. There are some questions that you have to ask about what it takes to ‘fully’ comprehend a proposition. Often, we comprehend seemingly ordinary inference-chains and propositions in a less than stellar way. e.g.:

    (Are logical or conceptual intuitions infallible? How could my intuition-based beliefs that if p & q, then p, and that bachelors are unmarried turn out to be false?)

    “If (p & q), then p” does indeed look to be infallible. But that’s only because we try to come up with counter-cases and find the effort unsatisfying. [e.g., I could make an argument that subsective predicates are an instantiation of that form — “If he’s a good burglar, then he’s good”. As it happens, I don’t really like that argument very much; something about it seems dishonest. But it’s worth giving it a shot.]

    And sometimes a touch of skepticism can yield rich philosophical rewards. Consider “All bachelors are unmarried”. Now think of the married man who thinks he is a widower, and who has just hit the dating scene — but unbeknownst to him, as a matter of fact his beloved wife is still alive and in hiding. He’s married and he’s a bachelor. Surprising but true, it seems to me.

    Intuitions are “doubt-suppressors”, psychologically speaking.

    In my idiom, that cannot be right. Only dogmatists consider intuitions in isolation to have ‘doubt-suppressive’ force. Similarly, it seems to me that slogans like “intuition entails intellection” are misleading, since they end up making a simple thing out to be more sophisticated than it is.

    But presumably, this is a merely verbal difference. From what you have said, you’re not referring to intuitions here, but to theoretic intuitions of a competent language user who has been prompted by semantic or mathematical sentences. That mouthful of a construct serves as something close to pro tanto or minimal evidence of the truth of a proposition. The question is, how?

    Well, as far as that goes, I’m sure these complex seemings can have great probative force, and that it can be productive to rely upon them. I’m also sure that in some instances, our theoretic intuitions may reliably be truth-tracking in such a way that we can use them as a guide when we endorse sentences. e.g., when doing arithmetic, exotic meta-doubts about foundations of mathematics are not required, or even useful.

    However, part of the assumed objectivity of the truth of those sentences (and the sense that we completely understand the meaningful implications of the sentence) comes from the assumption that headstrong experts in the intellectual community have gone through the process of doubting the truth of those sentences, and that they have not come up with a revision which does a better job at explaining their subject matter. The assumption that there are no external defeaters does actual work here, over and above the contribution of intuition; indeed, the ‘ceteris paribus’ clause appears to be a concession to that effect. Once you cook the assumption that there are no defeaters into your endorsement of intuitions, you have made a measure of doubt obligatory.

    There is at least one sense in which this story is not consistent with the proposal that, for a virtuous knower, “the thought that the proposition in question could have been false doesn’t occur to the intuiter, ceteris paribus.” Any person who is not occasionally inclined to consider whether or not the things they say are false, shall never have a complete grasp of what they are saying. You have to know the boundaries of your country before you can know the whole of it.

    it is false and misleading to characterize intuition as (totally) experience-independent

    I don’t want to have that conversation. Draw the apriori/aposteriori line wherever you want, and so long as it’s not outlandish, I’ll follow. Just be consistent about it. So, e.g., you seem to be characterizing experience in an inclusive way (reminiscent of pre-Humean empiricism), while also characterizing apriori intuitions partly in terms of such experiences. I think this cannot be done.

    If it turns out that intuitions are apriori, then fine. The considered articulation of my claim is much more modest: whatever intuitions are, they cannot be apriori and sui generis and carry real evidential weight all at once. Intuitions might be apriori and sui generis, but then they’re only probative evidence. Intuitions might be sui generis and carry evidential weight, but not be apriori. And they might be apriori and carry evidential weight, but not be sui generis. You can only pick two.

  24. Myron, BLS Nelson, etc.,

    It would be useful to those of us who are not professional philosophers if you would remember what an opportunity you have to teach your lay audience, the not-necessarily-unenlightened among us, what the fine-tuning of philosophical vocabulary means when we meet words we think we know but whose nuances, as you use them, we may not fully grasp. As an example, there has been a generous use in this thread of the term “sui generis”, which I take from general usage to mean something like “familially unique”. This works well with how the term is used in biology and law, but is it really what philosophers mean? Can’t say for sure.

    The issue here is the nature of social constructs such as language. In instances where something like a word has a frequent and general usage, its “meaning” can be well known to many people, inasmuch as it is aired often under social pressure toward a kind of conformity. Philosophical language, however, is derived from a kind of “intelligent design” in which there may be a very specific meaning to a term that is deeply understood by only one person, while a great many initiated readers have a vaguer idea, and even intelligent lay readers are tantalized but uninformed.

    Is there a deeper philosophical point to this? I think there is. If, in Philosophy, we seek something called “truth”, a universality is implied in the quest. It is something that would exist independently of symbolic terminology. Yet, because of the inherent granularity of pre-existent symbolic terminology, when we have intellectually grasped some image of a concept, the process of draping a symbol over it for communication purposes must create inexact matches. Full descriptions of the concept require the draping of several symbols so that the specific intent of our usage can be more and more clearly painted in successive descriptions. Like astronomers seeking an image of the face of Pluto out of the variability of the individual units of an image sixteen pixels square, we seek to generate a clearer and clearer image of this novelty.

    It is not enough to know a truth. Knowledge and dissemination outside the confines of some narrow group of the initiated is a key to the purpose of Philosophy. We not-quite-philosophers want to understand.

  25. To say that intuition is sui generis (of its own kind) is to say that it is irreducible to and irreplaceable by anything else, e.g. belief.
    See: http://plato.stanford.edu/entries/intuition/#IntSuiGenSta

  26. One final quotation:

    “Many philosophers enjoy the pastime of ‘intuition bashing’, and in support of it they are fond of invoking the empirical findings of cognitive psychologists. Although these studies evidently bear on ‘intuition’ in a less discriminating use of the term (e.g., as a term for uncritical belief), they tell us little about intuition in the relevant sense. When empirical cognitive psychology turns its attention to intuition in this sense, it will be no surprise if it should reveal that a subject’s intuitions can be fallible locally. From the paradoxes, we already knew that they were. Nor will it be a great surprise if more sustained empirical studies should uncover evidence that a subject’s intuitions can be fallible in a more holistic way. Countless works taken from the history of logic, mathematics, and philosophy already give some indication that this might be so. Will empirical studies reveal that intuitions lack the strong modal tie to the truth that I mentioned a moment ago? Surely such a discovery is out of the question. Human beings only approximate the relevant cognitive conditions, and they do this only by working collectively over historical time. This quest is something we are living through as an intellectual culture. Our efforts have never even reached equilibrium and perhaps never will. The very idea of our conducting an empirical test (i.e., a psychology experiment) for the hypothesized tie to the truth is misconceived. Moreover, even if our intellectual culture were always to fail, that would not refute the thesis of a strong modal tie. The cognitive conditions of human beings working collectively over historical time might fall short. The thesis that intuitions have the indicated strong modal tie to the truth is a philosophical (conceptual) thesis not open to empirical confirmation or refutation. The defense of it is philosophical, ultimately resting on intuitions.”

    (Bealer, George. “Intuition and the Autonomy of Philosophy.” In Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry, edited by Michael R. DePaul and William Ramsey, 201-239. Lanham, MD: Rowman & Littlefield, 1998. p. 202)

  27. Hey Lee,

    You’re absolutely right, of course, that these sorts of discussions often get a bit technical. I try as a matter of policy to discard Latin for plain English, but in conversation if someone chooses to use the Latin form, I feel obliged to follow along for the sake of demonstrating that I’m being appropriately responsive.

    But not to worry: the use of ‘sui generis’ in this context is not far at all from the sense in which it is used in other disciplines. Don’t allow the worry about technical expertise to stop you from having the sense that you’ve got the basic idea.

    You’re also correct in pointing out that the quibble over intuitions has wider ramifications. If you deny that intuitions play any role in proper philosophical inquiry, you as an individual end up being unable to do any philosophy at all. If you claim that intuitiveness is ultimate evidence in favor of the truth of a claim, you become an elitist who is potentially unmoored from common sense. Both positions end up alienating people from philosophy.

    From Paul’s point of view, intuitions play a role. But their use must be restricted. Intuitions are suggestive, but they do not serve as evidence. They are perhaps more like circumstantial evidence, to borrow the legal phrase.

    Myron,

    I enjoyed that passage from the DePaul volume when I read it five years ago. I don’t need a refresher. Thanks.

  28. Myron’s use of the Bealer quotation speaks obliquely to my point about the use of Cognitive Psychology in attacks on intuition. To put it simply, Cognitive Psychology can’t really tell us how intelligence/cognition is done generally any more than study of our Solar System could tell us generally how planets formed around the “average” star. It can tell us how biological systems on the one planet we know to have self-aware biological systems do what we recognize as cognition.

    Intuition can lead us to explore other possible means whereby non-biological systems might do something else we might also recognize (re-cognize) as cognition.

  29. Not so. Some of those working in artificial intelligence are not at all concerned about replicating the human sort of intellect. The orthodoxy for quite a while has been to try to develop an AI that can pass the Turing test (i.e., finding “other possible means whereby non-biological systems might do something else we might also recognize… as cognition”). Everybody is all up in that business.

    The difference between cognitive scientists and armchair philosophers is that the cognitive scientists are expected to actually do stuff, to tell us something about what it takes to pass the Turing test. For their part, the philosophers of mind fall into two categories: either they provide insightful and rigorous commentary on the ongoing work and future prospects of cognitive science (which is great), or they are pumping out stale lectures about the Chinese Room or the Language of Thought (not so great). But, again, it’s all about finding inhuman forms of intelligence which suitably approximate human behavior.

  30. One way in which the question of “intuition” arises in philosophy is when a philosopher is unable to justify a premise of his argument, but accepts that premise as true because it feels right. Justifications have to stop somewhere, and it would be a mistake to keep demanding justifications ad infinitum. Our own judgement inevitably plays a crucial role in our beliefs, and much of the judgemental process goes on at a subconscious level. We can’t check the steps of that process in the way that we can check the steps of a deductive argument. All we can do is skeptically scrutinise our own beliefs to the best of our ability. My concern, however, is that philosophers are often too complacent about premises that feel right (“intuitions”) and fail to subject them to sufficient skeptical scrutiny.

    I consider this to be separate from (if related to) the issue of philosophers appealing to intuitions as evidence. Intuitions may have a role to play as evidence in an inference to the best explanation. But such evidence must be judged realistically on a case-by-case basis, and weighed against other evidence.

    BLS Nelson, the criticism is well taken, but I am troubled by assumptions on both sides: the assumption inherent in cognitive psychology that cognition arises from the brain, and the assumption inherent in the Turing test itself that a simulation or model of cognition [something we see all the time in literature] sufficient to make us believe we are talking to a human being is representative of successful cognition.

    For the first concern it is only necessary to point to the cognitive limits of so-called “primitive” people prior to their introduction to concepts common to Western culture. The processing capacity to recognize linear perspective in two-dimensional representations is present in such people, but the conceptual capacity is not. This suggests a programmatic structure exists in the culture to which Westerners are exposed. An assumption that any element of cognition is not similarly derived from cultural programming seems to me quite bold.

    The second issue I find troubling in that some humans can’t pass the Turing test. A person displaying certain traits of stroke, autism, or Asperger Syndrome may be fully capable of cognition even at a high level but the nature of their capacity to interact with other intelligent entities seems so stilted or abstracted we could easily mistake it for a machine simulation of intelligence. Here, in fact, the programmatic capacity simply is not supported by available processing capacity.

    “Armchair philosopher” or not, it seems to me people from outside the nominal boundaries on the conversation about cognition might have observations that could apply well to the progress of the field.

  32. The considerations that Thagard cites in his article, such as they are, are obviously dialectically lame. Certainly, no one who believes in a priori knowledge of modal truths and has thought about it for more than a minute is going to be convinced otherwise by the observation that “it’s hard enough to figure out what’s true in this world (irrelevant) and there is no reliable way of establishing what is true in all possible worlds (question-begging).” Of course, given the venue this is perhaps to be expected: Psychology Today is a popular market publication, not a peer-reviewed journal. But nonetheless, I agree with Anon that it is excessively charitable to call Thagard’s list of claims an argument.

  33. If you are interested in following through on the Carnival’s promise to get higher-quality posts, avoiding that nonsense over at lesswrong would be a start. I mean, what does a paragraph like this even mean?

    “This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)”

  34. Richard,

    Although we commonly assume that all arguments must have a foundation, this perspective can be contested. The alternative view is called ‘infinitism’, most recently brought to life through the work of Peter Klein. For the infinitist, there is indeed an infinite regress of justification… and that’s a good thing.

    Also, before you argue that intuition is part of IBE, you have to say something about what inference to the best explanation (IBE) itself actually consists in. As it happens, Paul is probably best known in philosophy for the stance he takes on IBE. According to Paul, inference to the best explanation consists in simplicity, consilience, and analogical quality. I struggle to see how intuition plays a main role in IBE, considered in that way.

    Lee,

    I am not a dualist, and I expect most fruitful research will take monism for granted. But you have to recognize that there are dualists and dualist-sympathizers out there who are doing research.

    I don’t acknowledge the existence of ‘primitive’ peoples in any deep sense. But there certainly are different forms of culture and social organization in the world. Moreover, I think members of these different cultures can be said to have conceptual schemes which differ from members of other cultures. So I do share your general sense that the social level of explanation is a rich one, and deserves to be taken seriously.

    Although it is interesting to think about cases of humans failing the Turing test, I don’t see how these cases demonstrate that intuitions count as evidence.

  35. Vanitas,

    The quality of philosophy is only competently gauged by attentive criticism. As a matter of fact, the quoted sentences can be easily rationally reconstructed into argumentative form. Anyone who has gone through an introductory class in critical thinking would not be hard pressed to discover its point.

    Here is a suitable paraphrase, cleaned up (parenthetical comment removed):

    P1. Philosophy asks you to get the right answer without evidence.
    P2. Getting the right answer without evidence requires a great deal of rationality training.
    C1. Philosophy requires a great deal of rationality training.

    P1. Science asks you to get the right answer with evidence.
    P2. Getting the right answer with evidence requires only a bit of rationality training.
    C2. Science requires only a bit of rationality training.

    So that’s an argument. Is it right? Well, if you think about it, you’ll find that there are good reasons to think that this argument is substantially in error. One might want to say that the less we pay attention to evidence, the less rational we are.

    Think, for example, of the infamous Monty Hall case. [Click here for an explanation, if you’re not familiar with the case.]

    Before the thought-experiment was widely disseminated, many apriorists claimed that the correct answer to the problem was, “It doesn’t matter what door you open”. That’s an intuitive answer, and it’s also an answer that some highly intelligent and rational people gave. But as it turns out, the answer is oh so very, very wrong.
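    If you don’t trust the arithmetic, the result is easy to check by brute force. Here is a minimal simulation sketch in Python (the function names are my own, purely for illustration):

    ```python
    import random

    def monty_hall_trial(switch: bool) -> bool:
        """Play one round of Monty Hall; return True if the player wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    def win_rate(switch: bool, trials: int = 100_000) -> float:
        """Estimate the probability of winning under the given strategy."""
        return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
    ```

    Switching wins roughly two-thirds of the time and staying roughly one-third, which is exactly what the intuitive ‘it doesn’t matter’ answer gets wrong.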

    The take-home message that many people come up with is that, contrary to appearances, the apriorists were actually irrational for having thought the way they did. Indeed, one might say: they were irrational precisely because they did not have enough evidence. Perhaps we don’t know what reason even means until we’re juggling competing evidential considerations.

    But that answer is eminently contestable. I would prefer to say that the apriorists were rational in their own way. The problem is that reason is not as useful as we sometimes make it out to be. So I guess I would agree that philosophy requires a great deal of rationality training, though I am unsure whether or not that actually takes us to the right answer.

  36. Vanitas,

    Broadly, Luke (at LessWrong) is making the point that philosophical judgements are more difficult to get right than scientific ones, because the evidence speaks less clearly in philosophy, so we need to be even more careful not to be misled by our cognitive biases.

    I would add that the problem is aggravated by the fact that philosophers often have some degree of aversion to evidential thinking, because of a traditional emphasis on “a priori knowledge”. I broadly support the call for a more scientific, naturalized approach to philosophy, though I disagree with Paul Thagard and Luke on some specific points.

  37. BLS Nelson, Nor do I believe in “primitive” peoples. We are all anatomically modern Homo sapiens. But even arguments in prior posts of this site note the lack in, say, Socrates’s time of certain concepts we now take for granted. Like enzymes piecing proteins together out of amino acids present in an environment, cognition is more than the individual mechanisms of which it is made. Then something happens that makes even that more into something more still.

  38. Lee, okay, but what has that got to do with the a priori in philosophy? By focusing on the cultural role that concepts have, you’ve arguably given evidence for us to believe that concept learning is dramatically dependent upon experience.

  39. Intuition is not unlike mutation in biology. In fact, I believe the algorithm of its function in human thought operates in a very similar manner to mutation in evolution. One intuits (mutates and communicates) a cognitive construct that may not have existed before, and its operation in the environment of philosophical thought (or in science or some other area of thought) shows it to have some form of viability that keeps it operating in that environment. This process is easiest to observe in the sciences, since the value of a valid idea so clearly presents itself in that environment.

    Kepler, for example, had sought for all of his adult life evidence that would confirm the long-held intuition that the paths of the planets MUST be perfect circles, but when confronted with Tycho Brahe’s very precise observations he had to intuit a never-before-proposed concept: that planets move in ellipses. Along with it came other concepts about motion, for which Newton and Leibniz would later intuit new mathematical tools (the calculus) to calculate with greater ease.

    This intuitive/mutational process establishes a communicable cognitive formulation that solves problems in the field, again, in a manner not unlike the problem-solving inherent in successful biological mutation.

    In a sense one can’t divorce the success of an idea from experience AFTER the concept enters the cognitive vocabulary. This is certainly true of explanations of planetary motion, which arguably describe an a priori truth, but other concepts, like the notion of “rights” or money-like media of exchange, describe the results of intuitions that have a success entirely apart from an underlying pre-existent reality. A biological example of such a mutation is the novelty of photosynthesis, an invention that could quite possibly exist only on Earth. It was an utter disaster for the anaerobic life forms preceding it, but its success, if you think about it, is the foundation of Philosophy.

    I believe this is different from what I see in most presentations of a priori knowledge in Philosophy. These ideas, at least as they are presented for novices and armchair philosophers, are usually founded in the structures of communication themselves (All bachelors are unmarried. The boundaries of all circles/spheres are equidistant from their centers. Etc.) rather than in underlying fundamental realities for which we seek to find communicable descriptions. Contrast that to mathematical principles such as the Pythagorean Theorem or simple number theories by which the universe (at least the knowable universe) seems to contain a truth that is easily reducible to a communicable form before the communicable formulation is devised.

    (You might object to the notion of circles not being a priori by noting that circles and spheres exist in nature. My response would be that the process by which a bubble or other circular processes evince their forms is independent of the abstraction by which human beings define and communicate the form in mathematics.)

  40. Lee, I think it could go either way. Pre-theoretic intuitions are not necessarily creative. Often, intuitions just conform to prejudices and stereotypes that have been absorbed from the surrounding culture. For every ‘Keplerian’ intuition, there are another 10 which are just regurgitations.

    There are at least two kinds of aprioricity, the alethic and the normative. By ‘alethic’, we mean something like ‘conforms to the structure of what’s real’; by ‘normative’, we mean something like ‘what we ought to infer about things’. I guess the latter is what you mean by structures of communication. But some philosophers sure did think that claims about (e.g.) geometry were alethic. So I don’t think it’s any good to deny that some people out there really do make the argument that geometry is aprioristic.

  41. BLS Nelson,
    I’ll happily admit to a level of ignorance about the fine tuning of kinds of intuition and kinds of aprioricity. In defense of my assertion above about geometric forms I would say the definition of a circle is designed by human minds to be readily communicable. While it defines something similar in form to things that may be found in nature, and the definition itself provides a kind of seeming conceptual perfection (exactly the thing that was so tantalizing to astronomers after Ptolemy), neither perfect circular forms nor the seeming perfection derived from the definition is found in nature.

    Another aspect of Kepler’s insight was that he debunked the result of one of the most persistent junk intuitions of the type to which you refer. Recent particle physics literature has begun to point to another similarly “beautiful” scientific intuition from forty years ago, “Supersymmetry”, which may also be falling to the results of the Large Hadron Collider at CERN. In that case a great many well-respected physicists have built careers on theoretical work in Supersymmetry.

    We tend to look at such intuitions as these as being a part of some formal theoretical approach to major issues. I would suggest, though, that we all utilize such intuitions to build conceptual models of the world in which we live, even in real time for situations like conversations. Generally we will all-but-unconsciously cast elements thus constructed out of our cosmology (small sense) as they become untenable so long as they are unimportant to our social environment. But we will also, through experience in our environment, grow more skilled in the intuiting of things like the internal emotional states of other people, the nature of our physical world, etc.

    Because of my theory (intuition, admittedly) that our minds work in this way I would tend to flip your assessment of our propensity for error in intuition to guess that our successful intuitions probably outnumber our unsuccessful ones by ten to one. Where we do well will be at the core of our capacity for repeated experience (such as you would have in being able to tell when someone understands or does not understand what you are saying by their facial expressions, emotional state, and spoken responses) as opposed to the fringes of that experience (such as being able to examine from a theoretical construction how a thought might be developed and [also in a theoretical construction of an imagined thought-environment] internally tested).

  42. Hey Lee, to clarify, in my “1 in 10” remark I didn’t mean to suggest that the contrast is between veridical and false intuitions, but between creative and dopey intuitions.

    I agree that intuitions are in some sense indispensable or unavoidable, and that we all use them. I also agree that intuitions can be used profitably, in science and philosophy and everyday life.

    However, I don’t agree that intuitions tend to be true over false. I’d be happy to grant you for the sake of argument that we may have more true than false beliefs. But intuition alone isn’t the thing that tells us the beliefs are true. (Mind you, there are exceptions depending on what kinds of prompts we’re talking about. e.g., I don’t see any reason to deny the Chomskian thesis that syntactic intuitions are reliable representations of certain parts of the speaker’s idiolect.)

    Moreover, I would make a very strong claim here. Bare intuitive contents (as opposed to theoretic intuitions or beliefs) do not ever have any truth-conditions. So if I said, “I pre-theoretically intuit that (p)”, I am implying that I am not quite sure what it would take for something to be (p). It could be that (p) is the thing to be done; it could be that (p) is attractive; it could be that (p) is true. Intuition differs from belief in that beliefs involve some schematic knowledge of the conditions under which the proposition might be satisfied, while intuitions do not.

    Of course, when it comes to everyday life experience, one hopes that intuitions are giving us reports based on some kind of process of induction. But even in the best case scenario, everyday life experience is limited by the things we pay attention to and the variety of experiences that are open to us. So, e.g., everyday life trains us to believe that it is more intuitive to call James Bond a bachelor than to honor Tarzan with that title, since our tacit mental model of ‘bachelor’ looks more like Bond than like Tarzan. We might even think that our mental model is an inductively valid one. Unfortunately, that inductively supported intuition by itself neither makes Tarzan a bachelor, nor does it exclude him from bachelorhood. The negative intuition about the proposition that ‘Tarzan is a bachelor’ is just something we have to figure out how to cope with, and that’s pretty much all the intuition tells you.

  43. BLS Nelson,

    Throughout the foregoing discussion it has seemed that my own idea about what an “intuition”, in a philosophical sense, might be was not dissimilar to what a philosopher’s idea would be. Now I’m not so sure.

    As can be seen from my previous two comments my own concept of an intuition deals principally with the framework of cognitive modelling. This arises from the principal intellectual quest of my adult life: making sense of the persistence of the arts in human societies. In your last paragraph you deal with intuition as though it were principally about classification. This would be attractive to someone who sees language as the principal difference between human thought and the cognitive skills of chimpanzees and, indeed, there is no ultimate truth content in classification for nominative purposes. But what if language is a symptom of the difference between humans and chimps and not the cause of that difference?

    It is well known within the Cognitive Psychology and AI communities that animals have their bodies strongly reflected in models or “maps” in their brains. Remove a mouse’s whiskers, for example, and very specific structures on the surface of its brain will visibly atrophy. If you have ever had an injury to a body part that caused that part not to function properly you will have experienced a mismatch of this internal mapping with the function it predicts. From personal experience I can tell you the mismatch is emotionally distressing (which I take to be a clue to the function of “emotion”) and likely to cause a good deal of conscious focus on addressing the disparity.

    The body mapping above, augmented with a very limited instinctive modelling of the environment, may be adequate for the operation of most animals. It clearly is not the limit of the modelling humans do. We (at least most of us) have an “other” map that relates our personhood to people. We also have conceptual maps for familial, extended familial, and what I will call “tribal” functions. We have maps for spatial and environmental functions. Then humans also seem to have an instinctive grand social function/literary/story map that more often than not bundles rationalizations of both the natural and social environments into a pat explanations file to which we refer when we run up against things that might compromise nominal functionality with distracting questions.

    Where you and I part company is in believing intuition would not have a truth function. The reason for this would be that, in my formulation, virtually none of the mapping I’ve described above, even including the immediate body map of a human, is inherent or instinctive. Instead, element by element, a person intuits for conundrums of experience solutions to cognitive structural issues. “What keeps scratching my face at the same time as I get this sensation from an extremity?” (Fingers) “The soft, nurturing parent seems to repeat a sound grouping I’ve heard myself make and hug me when I make it.”

    Bit by bit, by this process, we intuit models of a cosmos, some of which seem to work and we keep, some of which seem to work less well and we discard. That is the “world” in which any given human actually cognitively lives. Clearly, in this scenario, any pre-theoretic intuition would either contribute to a more accurate match between the cognitive individual cosmos and the “real” (apriori) world or a less accurate match even though its truth function could not be tested prior to being used in the cosmological model.

  44. Lee, I can see why you’d read it that way, but that wasn’t my intention. I would like to say that you can’t have a belief about the truth of things without making reference to a language, but you can have an intuition in favor of something without requiring any reference to language. Intentionality, or aboutness, is not just a human thing; non-human animals certainly have cognitive maps.

    The mouse’s intuition to go left in a maze might be reliable (under suitable circumstances), because the place cells in the mouse’s brain are firing. The mouse’s internal map of the maze has the right kind of fit with their environment. But the mouse’s intuitions are not veridical; they’re just apt in some primitive way. Contrast that with the case of humans musing over abstract propositions, and you start to see a stark difference. Humans face a special problem — we have too many maps. Our intuitions concerning abstract propositions do not track any one of them.

    I don’t doubt that people are able to believe true things. Where you and I disagree is what intuitions are doing. In your examples, I would say that the intuitive sense that (p) is apt can eventually go on to become a belief that (p) is true. But the intuitive sense that (p) is apt can also go on to become an intention to bring about (p), which hasn’t necessarily got anything to do with the truth of (p). Our intuitions alone might produce a sense of harmony or disharmony with the world, but that’s not quite the same as bringing people into an accurate or inaccurate relationship with the world. They’re vitally different sorts of functions.

  45. Of crucial importance is “the attempt to articulate more precisely the exact nature of intuitions or provide a principled taxonomy of the various kinds of intuitions. This would enable various philosophers and psychologists to avoid arguing at cross-purposes and allow for further exploration of the psychological and epistemological parallels between perception and intuition.” (http://plato.stanford.edu/entries/intuition/)

    A rational/intellectual intuition in the epistemologically relevant sense is different from a “physical intuition” (G. Bealer), an animal instinct, a naive expectation, a spontaneous decision, and a mere supposition.

  46. Myron, I do not think there is in fact such a thing as a sui generis category that we might call ‘pre-theoretic intuition’ which has evidential weight. So, with peace to George Bealer, I don’t see how it is even intelligible for the apriorist to craft a taxonomy of evidentially significant implicit states. The entire point is that, in this respect, the apriorist’s “purposes” are unsatisfiable.

    Moreover, just at the philosophical or epistemological level, “radical empiricism” and apriorism are certainly not the only games in town. As a kind of common-sense holist about justification, I believe in neither doctrine.

    Finally: I don’t know why you’ve ignored me the last two times, but just to make myself totally 100% crystal clear — I’m pretty fed up with your quoting practices. I have read Bealer multiple times with interest, understood his argument, and am explicitly arguing against it. I have read Bonjour, and am arguing against him. By ignoring the fact that this is an actual argument, you are talking over me instead of to me. This is unreasonable and generally assholish behavior. Stop it.

  47. BLS Nelson,

    Are you claiming that there are no violable truth functions in, for example, wordless images as opposed to words? Or is it possible that you class any form of intentional communication together with words and other forms of symbolic representation? A word-only Philosophy would be troubling to me because the internal communications between populations of neurons within a brain are themselves both highly symbolic and must be constantly policed by internal mechanisms, many of them consciously maintained, to assure verity.

    I know this to be the case because of personal experience as a portraitist. In such experience the clunkiness and pixelation of linguistic functioning significantly interferes with the “truthfulness” of a likeness. (Never mind a high native error rate in nervous communication…)

    Isn’t a Philosophy limited only to what can be expressed in words so deeply constrained that even those who might have a genuinely truthful deep intuition of reality could not, without developing a language so severely specialized that only they and the angels would be able to comprehend what is being expressed therein, SAY what their intuition is?

  48. Hey Lee, definitely not! Typically, we don’t say that a portrait is ‘true’ or ‘untrue’. Nobody in natural language says “The Mona Lisa is true” when what they mean is “‘The Mona Lisa’ resembles Mona Lisa.” Truth is a feature of sentences, or of quasi-sentential things like propositions, not images or whatever.

    I think pre-theoretic intuitions are interesting in large part because they appear to stand at a distance from language.

    Suppose you were Leonardo da Vinci, and one day you gazed upon that painting. You might be struck with the intellectual sense (intuition) that ‘the Mona Lisa’ is somehow apt. That intuition may indeed come to mean that you think ‘the Mona Lisa’ resembles Mona Lisa. Or your intuition may come to mean something else — that the representation is apt to fit an ideal, perhaps. But the standalone pre-theoretic intuition comes with no further instructions. It is part of the nature of these miniature revelations that they are ambiguous and easy to misinterpret.

  49. Hmm, perhaps people outside of the arts would not say that art was true, but I have said it numerous times of art, and the Impressionist Degas very famously said the same of his first encounter with the work of Berthe Morisot.

    Leonardo would have said the Mona Lisa WAS the very expression of truth, inasmuch as the commonplace aesthetics of the day contained the claim that beauty was “truth”, and he never released the painting to his client during his lifetime. That’s not to say this makes either my point or yours, because to pursue him for a clear definition of the truth therein contained would have yielded something a lot like your vague intuition and an admonition that truth is ineffable.

    I suppose what I’m troubled by is a lay and scientific definition of “truth” that is something like “consonance with, or fidelity to, reality (something aprioristically constant and independent of our expressions or beliefs)”. The internal coded expressions of a brain to itself can approximate this. So can the intentional communications in a representational image. In fact, I believe one of the reasons realistic ancient (old Stone Age and earlier) art was hidden deep away in caves while abstract images were shown in the open was that, as with language, the abstractions could hide information in the open, whereas the lion paintings of Chauvet Cave say “lion” to us regardless of the language their painters spoke and the intervening 30,000 years.

    This deeply clouds my regard for language even as it lifts my regard for representational imagery.

  50. Yeah, I wouldn’t know about the usage of ‘truth’ in Renaissance Italy. I was more interested in what you would say if you were put in Leonardo’s position.

    It is interesting to find out that a technical community would describe mere pictures as vehicles of truth. It would be quite interesting to know what epistemic grounding you might appeal to in order to justify this more technical vocabulary. I myself have some experience as a sketch artist, so you say some things that resonate with my experiences. (e.g., it is certainly correct to suggest that language can be “clunky”, and interferes with our ability to figure out whether or not our craft has the quality of a likeness with its object.) But I do not have the temptation to refer to artistic representations as truth-bearers.

    The difference between having grounds to believe a sentence is true, and the mere intuitive sense that some representation is apt, is that artistic representations do not conventionally purport to be representations of causes. By contrast, truth-conditional sentences (e.g., indicatives and declaratives) do purport to refer to causes in some fashion or other. So truth does indeed seem to involve a sense of consonance with causation that you describe as being part of the folk idea.

    That having been said, there’s nothing that precludes art from having that epistemic connection. e.g., if you lived in a culture that was rigidly dedicated to artistic realism (i.e., representing how things are), then I suppose it would make very good sense to say that some art is true or not-true. But that’s a special kind of case.

  51. A big part of this is cognitive scientists, fully aware that they have foundational problems, trying to push their (philosophical) views as the philosophical default by the circular reasoning that if there is a group of researchers pursuing their ideas, those ideas are therefore “scientific,” and philosophers must accept them. I’d counter this by saying that just because you’ve managed to get funding, have journals and perform (often questionable) experiments, it doesn’t follow that you’ve created a successful scientific endeavour.

    Yes, cognitivists are very vocal. Yes, they publish a lot of books full of fantastical claims. But the attempt to enforce their beliefs as the philosophical default and shame anyone who doesn’t share them is really a profoundly disturbing example of the potential dangers of philosophical naturalism. Essentially it has opened the door for one philosophical school that has achieved success in gaining scientific funding to come back and demand that dissenters be purged.

  52. I suspect you are holding images to a different standard than that to which you hold sentences. For example, the vast majority of statements in any language are not expected to be rigorously examined for verity, even if the speaker makes what approximates a truth-claim. (“This is the best day of my life” is a statement highly unlikely to be true on any given day. And even on a very good day it would require extensive qualification for specific parameters of goodness.)

    There one sees almost immediately what is necessary for a truth claim: the claim, even a claim made in language, must be made within a set of parameters by which it may be rigorously judged. Let’s say, then, that I make a qualifier that an image is a “good” representation of Florence from the period of the Renaissance and present an image (http://www.ctevans.net/Nvcc/Student/Florence/florenc2.gif). It is not difficult to check the veracity of the congruence of the image with what is known about the facts of historical Florence. We can make the challenge greater by making the qualifier a comparative one: a given contemporary image of Renaissance Florence is a more truthful (or accessibly truthful) representation of the city than a written description would be. I believe that is a highly supportable statement when accompanied by many of the images I’ve seen of Florence.

    Clearly the claim is being made with the aid of language. But, as with any truth claim, we may consider this a clause in the construction of the claim that can then bear information by a number of processes. The fact that an image may be used in the process of a truth claim is conceptually identical to the use of any noun, save that the “noun” in this instance would bear far more than the usual load of information.

  53. Joe, no, not really. Paul’s a philosopher who objects to (what he thinks is) bad philosophy. He’s also a cognitive scientist, and there is significant overlap.

    Lee, yes, I certainly do hold images to a different standard than sentences. Semantics does indeed seem to involve making more conventional communicative commitments than semiotics. An indicative or declarative sentence says, on the face of it, that it ought to be taken as truth-bearing; a painting (even that painting of Florence) does not. But even if the standards are different between these two forms of communication, that should not mean that I hold brute images and symbols to a lower standard.

    I agree, of course, that a great many statements in language are not truth-apt. “Ugh,” for instance, has no truth-value. “Pass me the salt” also has no truth-value. However, your example “This is the best day of my life” certainly does have truth-conditions, so long as you assume that ‘best’ has some implicit meaning that can be cashed out in terms of an assessment of states of affairs. Of course, we say it is an exaggerated claim (hence, strictly false). But the point of saying it is to convey a truth, something that is worthy of belief.

    I don’t know if rigorous parameters are required in order to assess the truth of sentences. There are some who argue that; they’re called ‘supervaluationists’. For every sentence, that sentence is true just in case all of its legitimate interpretations (called ‘precisifications’) are true. The problem, it seems to me, is that this risks explaining too much. If I ask someone with auburn hair, “Are you a redhead?”, they shouldn’t have to break out the color wheel and ask me what precise shades in the palette count as red. They should be able to just say, “Yes”.

    Mind you, the truth of the speaker’s meaning might require that kind of precisification. But that’s at least nominally different from conventional sentence meaning. Speaker’s meaning has a relatively high information load, while sentence meaning has a relatively lower information load.
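    The supervaluationist rule mentioned above, truth under every admissible precisification, can be made concrete with a toy sketch. This is my own illustration, not anything from the thread; the hue cutoffs for ‘redhead’ are invented for the example:

```python
def supertrue(case, precisifications):
    """A claim is supertrue iff it comes out true under every precisification."""
    return all(p(case) for p in precisifications)

def superfalse(case, precisifications):
    """A claim is superfalse iff it comes out false under every precisification."""
    return not any(p(case) for p in precisifications)

# Hypothetical precisifications of 'is a redhead' as hue cutoffs (in degrees).
precisifications = [lambda hue, cutoff=c: hue < cutoff for c in (15, 25, 35)]

print(supertrue(10, precisifications))   # clearly red: true under all cutoffs -> True
print(supertrue(30, precisifications))   # auburn borderline case -> False
print(superfalse(30, precisifications))  # not superfalse either: a truth-value gap
```

    The borderline auburn case comes out neither supertrue nor superfalse, which is the supervaluationist’s “truth-value gap”, and it is exactly this machinery that the color-wheel objection above says ordinary speakers never need.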

  54. I feel a little bit dirty asserting some kind of priority claim… but for what it’s worth, the term “ugh field” is my own coining, not Leah’s, and it gained currency in the LessWrong community but did not originate there. The point of origination was in conversations circa ’02/’03 at UCSB by the complex systems study group, some of whom helped shape the early culture of LessWrong.

    Anna Salamon is particularly worth calling out as participating in the same study group and having worked for the last few years developing the larger LessWrong community. If not for her, the term would probably never have spread farther than UCSB.

    (Related to the larger topic of philosophic vice and its rectification, Anna recently founded the Center for Applied Rationality, a non-profit startup that aims to bootstrap something like an “evidentially sound curriculum for the teaching of efficacious sanity”. Imagine forthright, community-minded lovers of wisdom who noticed the explosion of science relevant to the issues of wisdom cultivation, and the presumptive instrumental utility of actually thinking clearly in the support of good ends, and that’s basically who they are and what they’ve been doing.)

    In the case of “ugh fields” the term fits into broader topics of traditional philosophic interest like “phronesis”, and “weakness of the will”. But these terms generally aren’t optimized to be safe and easy to use in the context of collaboration, philosophic dialogue, and/or therapeutic conversation.

    The term “ugh field” was specifically crafted to function in this sort of context by creating a friendly, sort of silly, and ultimately non-threatening way to talk about objects and task domains (for example, going to the dentist, cleaning the toilet, or thinking about personal failings that double as opportunities for growth) that trigger akrasia more often than random chance would suggest. The connotations of “field” are specifically intended to suggest a mechanical lack of agency (so no one is to blame for the field existing), that might be amenable to simple mechanical adjustments, and which is likely to lead to higher-order mechanical will-related phenomena, such as the accumulation of objects contextually near to the source of the ugh field that ultimately make the field larger as these nearby things add to the “mass” of the “accretion disk”.

    Not to push on the reader’s emotions too much, but… cluttered desks anyone?

    If you’re sitting at a cluttered desk right now and are feeling anxious from talk of ugh fields, you are in a teachable moment, and I highly recommend that you use the moment to check out P.J. Eby’s video blog post, which will teach you a technique that frequently reduces desk related ugh fields, and leads to clean desks in the process.

  55. Thanks for the clarification, Jennifer. Interesting.

    A friend of mine has done research which is consistent with the stuff in that video. See, e.g., here.

  56. BLS Nelson,
    Either fortunately or unfortunately, depending on one’s interpretations, I grew up in a family rife with conversational precisification demands. “What, exactly, do you mean by that?” “What is the advertisement trying to convince you about good living?” Etc., etc…

    It is one thing to try to tell someone what you want to say, and quite another to try to tell them what you want them to hear. That is one of the reasons I’ve been emphasizing “communication” even as you’ve been emphasizing language. Earlier I tried to bait you into a discussion that would include the internal (necessarily) coded communications within a human brain. That’s because no matter the language we use to “tell the truth” all languages are foreign languages to the organ that must first encode a concept, whether it conforms to a verity or not, into a language, pass it by some medium to another person, and have it re-encoded, interpreted, assessed and judged for conformation to what the receiving brain will recognize as a verity. In all of that stepping along only the concepts actually resident in human brains (at least so far) can actually be directly assessed as “true”, and that really happens in the language of the brain, not English, French, Latin, or any other spoken or written tongue.

  57. Lee, we can go further into the philosophy of language if you like, though it’s probably best to keep things reined in as close as possible to topics presented in the OP.

    The relationship between communication and language is important, as is the relationship between both forms of communication and internal mental representations. We are clearly disposed towards accepting representations of states of affairs which seem intuitively apt. But just because you think some representation of a situation is apt, that doesn’t mean you know what it would take for a representation to be not apt, or non-conforming.

    In that way, knowledge of aptness is very different from knowledge of truth. Like many folks working in this area, I don’t think it’s any good to talk about an internal sense of truth in the application of concepts without first assuming that the person has a robust experience with error in the application of concepts. Knowledge of truth entails knowledge of falsity.

    But the idea of falsity requires one of two things. Either the language learner has been subject to correction from others, which means they’ve already achieved communication on some rudimentary level. Or the learner has subjected themselves to correction by careful attention to their environment, which means they must be superlatively modest, precocious, and possess an outstanding memory. (Something like the first argument was Wittgenstein’s, and something like the second was Colin McGinn’s.) Either way, by the time the language learner has got some true beliefs they are already interacting with someone else — either another person, or a version of themselves displaced in time.

    And the importance of sociality to linguistic representation implies what I was saying in the second paragraph. Just because you’ve got this system of representations that are encoded in the brain, that doesn’t mean that the representations are truth-apt. They’re just apt in a more generic sense.

  58. BLS Nelson,

    But just because you think some representation of a situation is apt, that doesn’t mean you know what it would take for a representation to be not apt, or non-conforming.

    This, actually, is the subject. If we are talking about the use of intuition as evidence, everything hinges on a comprehension of intuiting “truthfully” what is and what is not apt. We know, simply from the volume of information developed over the last half-century, that our knowledge is vanishingly insignificant compared to our ignorance; but our reliance on knowledge-representative cognitive structures to inform (or build) the model of the universe we use to negotiate the poorly accessible “real” world makes the vastness of that lack of information invisible. What we are left with is what I have called “vertiginous awareness”: the sense that a mismatch exists between our cognitive structures and the reality they are intended to represent.

    What we intuit is a change in our cognitive model(s) that is intended to bring it into closer consonance with the real world. It is my belief that the intuition can’t serve as evidence per se, but that the communication of the change in the model can provide others with indications of possible cognitive formulations they find more truth-apt than their former constructions.

  59. To pre-theoretically intuit that <(p) is true> is just to restate an intuition that (p) is apt. But when presented in this way, the meaning of “truth” is merely disquotational. To say that “I intuit that (p) is true” is to say “I intuit that (p)”. ‘Truth’ functions as an unanalyzed primitive, as a vague term of approbation, and could be eliminated from the sentence without loss of content.

    In contrast, a more explanatory sense of truth plays a real semantic role. In order to say that “(p) is true”, you need to imply that (p) can be translated into one or more meta-languages before you can eliminate the predicate ‘truth’. (And, again, truth functions in conversation in a particular sort of way, i.e., either as an authoritative performance or as a description. E.g., truth plays no direct role in explaining the semantics of “Pass me the salt”, even though that expression does indeed have semantic content.)

    So, since ‘truth’ in the sense that belongs to intuitions has a different role from the sense of ‘truth’ that belongs to sentences, we need a term to disambiguate them. I suggest “aptness”, but do not really care what word we use. “Conformation” also works.

    It is my belief that the intuition can’t serve as evidence per se

    Agreed! Hooray!

  60. I am drawn to philosophical discussions because my interest in the history of Science keeps dragging me here. A particular example is the long debate between Albert Einstein and Niels Bohr. Einstein was actually the more intuitive scientist, able to see in the evidence of the work of others possibilities that could fundamentally dismantle the universe he thought he knew. Nonetheless he was at heart a classicist who held to the existence of a determinative reality we could ultimately know. Bohr, on the other hand, had set his fortress on the sounder fundamental principle: that we cannot truly know what is, but only what we can say about what is.

    I hear echoes of Bohr’s position in this discussion and, though his is the side that won the 20th Century in fundamental Physics, it troubles me that the walled community of “what we can say” might become the limit of what Philosophy has the spirit to seek.

  61. 2. Philosophical necessity. OK, this section seems to implicitly assume that there exists something other than ‘possible worlds’, e.g. an objective ‘real world’, or did I misunderstand? To rephrase: instead of talking about a world where Oswald didn’t pull the trigger, let’s talk about one where Chen Yang wasn’t run over by a car this afternoon. Who is Chen Yang? I don’t know. Neither do you… but there is a good chance that, somewhere in China, he was run over by a car, maybe. But in the end, this is effectively unknowable for us, and so we live in a world that is, of necessity, a world where there are unknowable things. We live in an ensemble of possible worlds. Not even the most omniscient uber-being could pinpoint one such world and state that it is exactly *this* world that is the real one, and all others are just imagined impossibilities. (We know that no such uber-being exists, as this is the content of Gödel’s theorem, more or less; the uber-being cannot know its own mind :-) In essence, the *only* thing we have are possible worlds; delineating impossibility is a fundamental act of survival for any animal species.

    The point here is that the very essence of ‘necessity’ and ‘impossibility’ are altered, once one realizes that there does not exist any world that can be known perfectly, even in principle. There are only possibilities, and thus the notion of (non-mathematical) necessity is not firm and concrete.

  62. Hi Linas, from what I understand, you are proposing that there is more than one actual possible world. You use the case of poor unknowable Chen Yang as leverage to suggest that there is an ‘ensemble of possible worlds’. So, our world presumably includes both the actual world where Yang is known to be dead, and the one where both his existence and his fate are indeterminate in some deep sense.

    I am, at times, sympathetic to the view that there is more than one actual world. But I do not know if the unknowability of Yang’s fate is good reason to believe that our knowable world is one of many actuals. This may just be a problem with the example; it sounds to me that Yang’s existence and fate are potentially knowable, though this is contrary to the point you’d like to make. It seems more parsimonious (and realistic) to just say in his case that there is only one actual world, and we grasp it incompletely.

    That said, I do think it is possible to defend something like the idea that there is more than one possible actual, though it will depend on what theoretical commitments you want to make. Principally, it will depend on whether you are a realist or an anti-realist about the idea of possible worlds. If you’re an anti-realist about modal terms, there’s really not much stopping you from using the language of possible worlds (apart from good taste, perhaps). In contrast, a realist form of argument would have to make some more serious metaphysical claims. So, for instance, one might argue that Schrödinger’s Cat really does live in both the world where the cat is dead and one where it is alive. Or you might argue that all possible futures are possible actuals.

  63. Now that I read the thing to the end, perhaps I can say something smarter. The essay concludes: “Pre-theoretic intuitions are here to stay…”

    Well, when you practice your profession, your calling, all day long, it comes to color your every perception and intuition. Your judgments and reactions become second nature. It colors your perceptions, and thus your conclusions. It becomes your Weltanschauung.

    If this isn’t innately obvious to the reader, then perhaps there are psychological studies which demonstrate the same.

    Simply being aware of this does not make the coloration, the bias, the “intuitive foundation” go away. It’s taken me decades to learn and develop this intuition, this understanding of the world. It would take significant evidence to alter deep parts of it. It is my internalized model of how the world works. Just how much it should be labelled “cognitive bias” is unclear.

    Perhaps what I write above sounds amateurish, or obvious, or well-trodden ground. Yet, at times, when I read philosophy, I get to wondering “can it possibly be that the author is simply not aware of XYZ? Why is there no mention of it? Why is it not taken into account?” To build an argument, one must lay a common foundation, shared with the reader. Without this shared world-view, any deductions and conclusions will be rejected, argued, ignored, worked-around. (Wail.)

    Insofar as scientists sometimes read philosophy, a failure to share a world-view with them can only lead to critical rejection, and this, I presume, powers this debate.

  64. Hi Nelson,

    Well, I didn’t mean to turn the conversation to quite that topic, but, what the heck. The very concept of a “single possible world” is deeply flawed, and this is well understood in the sciences. So, for example, Bayesian probabilists talk of their “priors”: to talk of the probability of something happening, one must explicitly take into account that one does not know the initial state, the state of the world; this is the very definition of a prior. Intellectually, statisticians do not debate whether there is one universe or many, as there is no point, for them. De facto, the notion of a “prior” has many possible worlds built into it.
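
    The “many worlds built into the prior” point can be made concrete with a toy Bayesian update (a minimal illustrative sketch, not anything from the discussion itself; the three hypothetical “worlds” and all the numbers are invented for the example):

    ```python
    # Toy Bayesian update: a prior spreads belief over several candidate
    # "worlds" (hypotheses about the unknown initial state); evidence
    # reweights them, but need not collapse belief onto a single world.

    def bayes_update(prior, likelihood):
        """prior: {world: P(world)}; likelihood: {world: P(evidence | world)}."""
        unnormalized = {w: prior[w] * likelihood[w] for w in prior}
        total = sum(unnormalized.values())
        return {w: p / total for w, p in unnormalized.items()}

    # Three candidate worlds for the Chen Yang case (made-up numbers).
    prior = {"hit": 0.2, "not_hit": 0.7, "no_such_person": 0.1}
    # Evidence: a noisy report of an accident, more likely in some worlds.
    likelihood = {"hit": 0.9, "not_hit": 0.2, "no_such_person": 0.05}

    posterior = bayes_update(prior, likelihood)
    ```

    Even after updating on the report, every candidate world keeps a nonzero posterior probability: the ensemble shifts its weights but never shrinks to a single point, which is the sense in which the formalism carries multiple possible worlds along at all times.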

    As you point out, physicists have Schrödinger’s Cat, or more precisely Bell’s theorem, to argue for the validity of the quantum Many-Worlds Hypothesis. But that’s the tip of the iceberg: google “Boltzmann brains” to see where physicists have taken it.

    Climatologists appeal to the “flapping of butterfly wings” to give a simple illustration of a “positive Lyapunov exponent”. Like the Bayesians, they must deal with the possibility of multiple, different initial conditions for their climate models.

    But this is not just applied science: chaos theory is a branch of pure mathematics, where it is known how to do the mathematical equivalent of mixing white and black paints so thoroughly and completely that the very knowledge of which atom came from which paint bucket is utterly lost. It is possible to erase knowledge: not just in the physical world, but in the platonic realm of mathematics. (Viz. “topological mixing”, “strong mixing”, “weak mixing”, ergodicity, etc.)
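
    This erasure of initial-condition knowledge is easy to exhibit numerically (an illustrative sketch of my own, not from the comment; the logistic map at r = 4 is just the standard textbook chaotic system): start two trajectories an imperceptible 10⁻¹⁰ apart and watch the gap blow up.

    ```python
    # Logistic map x -> r*x*(1-x) at r = 4, the standard chaotic regime.
    # Two trajectories starting 1e-10 apart diverge exponentially, until
    # knowledge of which trajectory came from which start is effectively lost.

    def logistic_trajectory(x0, steps, r=4.0):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.2, 60)
    b = logistic_trajectory(0.2 + 1e-10, 60)

    # Gap between the two trajectories at each step: it starts below 1e-9
    # and grows to a macroscopic size within a few dozen iterations.
    gaps = [abs(x - y) for x, y in zip(a, b)]
    ```

    With a Lyapunov exponent of ln 2 per step, the separation roughly doubles each iteration, so the 10⁻¹⁰ initial gap saturates to order one within about 35 steps, after which the two histories are, for all practical purposes, unrecoverable from the present state.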

    Logicians have met and confronted their own crisis of a “single world”, with Gödel’s theorem. For them, some things aren’t provable in any “world” (set of axioms) that they care to inhabit. ZFC is one of those worlds, but there are others.

    Computer scientists know it’s impossible in general to find out whether a Turing machine halts. Some know that this also implies that 4D manifolds cannot be classified. Others know that it’s impossible to tell whether two group presentations are in fact one and the same group. These are famous theorems: there are things that are simply unknowable. It is *possible* that two groups are the same. Or maybe not. Not even an all-powerful God can know (unless God possesses an “Oracle Machine”, a comp-sci concept), and even then Chaitin’s constant remains unknowable.

    It’s not just the sciences, either: in the Humanities, we have journalists, who know it may well be impossible to verify Chen Yang’s car accident. Heck, we don’t know if Banksy created Mr. Brainwash as a hoax or not; it’s possible we may never know. Art fraud is all about not knowing.

    I was trying to make an epistemological argument: it’s fundamentally impossible to know everything about the world. This is foundational bedrock for Bayesians, physicists, climatologists, and mathematicians. It’s a bit of a fallacy to speak of just one “possible world”, or “one real world”. Certain things remain unknowable possibilities, forever. And if certain things remain unknowable possibilities forever, isn’t that tautologically equivalent to saying that there are many worlds, and that we inhabit some or all of them?

    So, yes, I’m rejecting the notion of the intuited objective reality, and state that the best we can do is to subjectively gather evidence about the structure of the world(s) we live in. But I was also rejecting what seemed to be, to me, awfully naive arguments about “possibility” and “necessity”: entire branches of science and mathematics and logic have moved away from such naive conceptions; why would sophisticated philosophers cling to what seem to be such naive world-views?

    p.s. I don’t at all mean to ding philosophers. Love the undertaking. It’s just that, as I was reading section 2, I scratched my head and thought “surely, these folks must know of XYZ, don’t they? Because they’re not talking in a way that hints that they do, and that’s alarming…”

  65. Hi Linas, don’t worry about being judged, this is a place for friendly chat! My surname is Nelson, you can call me Ben.

    I think it’s quite true that learned expertise often does affect the direction of one’s intuitions. But these sorts of intuitions are sometimes called ‘theoretic intuitions’. In a technical sense, a person who has theoretic intuitions has been indoctrinated. In contrast, ‘pre-theoretic intuitions’ are supposed to be the sorts of things that are accessible to relative novices, in many cases at least.

    As you say, it’s not entirely obvious how to deal with the problem of indoctrination. I have tried to stay honest by maintaining an active imagination, and by expressing my thoughts in old journals and writings. I then use these thoughts as drafts, and attempt to defend the best version of the draft, either in conversation or in my thoughts. I think of this as an ‘informalist’ approach to philosophical activity. So I don’t think philosophers must begin with a common foundation. I think modest intellectual risks are permissible. But on the other hand, I do think that it can be fruitful when people adopt the method of taking common sense and seeing where it leads.

    I don’t know, though, if either approach helps us to see bias more clearly. Sometimes, bias is part of common sense, just as sometimes bias is part of one’s informal point of view. I suppose that biases are tamped down as the explanations of the world become more sophisticated, as we are able to make increasingly high demands from multiple sources of evidence, and as there is a healthy institutional structure underlying all of it to keep business honest.

    On your second post, you seem to be continuing the argument in favor of multiple possible worlds. On the scale between realist and anti-realist views of probabilities in the world, Bayesianism is pretty much on the anti-realist side. But the counter-argument is just that epistemology is not the same as ontology: that as a matter of fact (metaphysics), there is one actual world, and just different and incomplete ways of thinking about that world. So if certain things remain unknowable possibilities, it is not by itself sufficient reason to say that there are many worlds and that we inhabit more than one of them.

    One thing is presumably straight out of the question, though: we do not live in all possible worlds. In some possible world, I was born a badger — but I assure you, I have never been a badger!