[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at NYCSkeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:47]

Massimo, this episode is a sort of follow-up to our previous episode, 41.

[00:00:53]

If you didn't catch that, it was an interview with Robert Zaretsky, who's a professor of history, and he was talking about his book, The Philosophers' Quarrel, which is about the friendship and falling out between Rousseau and Hume. And a lot of the book is exploring the ways in which Rousseau and Hume both took issue with the central project of the Enlightenment, which is about glorifying progress and about trusting reason to solve all problems.

[00:01:19]

And Rousseau and Hume each had their own issues with the idea of reason being the be-all and end-all; they objected to it in their own separate ways.

[00:01:28]

So for this episode, 42, we wanted to talk about the limits of reason.

[00:01:33]

And we mean that not just in terms of the limits of the human ability to reason, like the ways in which our brains are not equipped to reason properly, but also the limits of fields that rely on reason, like logic and math and science, and the failures of attempts to justify those fields from first principles.

[00:01:54]

And of course, we should note that this is episode number 42, so listeners will get the answer to the question of life, the universe, and everything by the end of this episode. Well... no, wait.

[00:02:04]

To the degree to which it is possible to get it, they will get it, right? OK, right. So the reason we think that this is an important topic for the skeptic movement, for science, for critical thinking in general, is that the entire movement of skepticism, the entire idea of science, is in fact to use reason to solve human problems, to find out things about the world and how the world works.

[00:02:31]

But also, in the famous phrase of Francis Bacon, one of the early philosophers of science and one of the founders, actually, of essentially modern science: knowledge is power.

[00:02:41]

And his idea was that by using reason, human beings would be able, first of all, to find out things about nature, and then eventually to control nature. And of course, that goes into the relationship between science and technology and improvement in our lives, and so on and so forth. Very much the kind of thing that Rousseau was highly skeptical of. He thought that modern life wasn't that much of an improvement to begin with, or at least that it came with too many trade-offs with other, more fundamental issues about human nature.

[00:03:13]

And Hume, on the other hand, had a very different kind of critique of the limits of reason, more from within the point of view of a skeptic. That is, he was pointing out that human reason is in fact intrinsically limited. There are certain things that we know don't work particularly well with human reason.

[00:03:30]

In fact, he was in some sense a precursor of a lot of modern epistemology and a lot of modern cognitive science when it comes to the limits of reason.

[00:03:40]

So Rousseau wasn't actually saying reason can't solve all the problems or get all the answers that you think it can. He was just saying it doesn't make us better off. That's right. That's right. So that's very different.

[00:03:53]

A different kind of critique. And I think that for this episode, we're going to actually mostly focus on the Hume side of the equation. That is: can reason actually get all the answers right? It can get us answers, but what kind of answers, and how does it work, and how does it fail to work?

[00:04:08]

I think it's actually probably appropriate to start with Hume as a bridge to the last episode, and in particular to the fact that Hume was famously the one who showed that there is no ultimate foundation for reason itself, or at least for the kind of reasoning that is used in scientific investigations. We talked about this briefly in earlier episodes: there are fundamentally two different kinds of reasoning, deductive reasoning and inductive reasoning.

[00:04:41]

Deductive reasoning is what you do in logic and mathematics, mostly. You start with certain assumptions and you build what is called a deductive argument, which leads to conclusions that are certain to the degree to which the assumptions are true. So if you start with true assumptions, the conclusions necessarily follow. This is the way mathematics works, and this is the way formal logic works.
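The point about deduction, that true premises guarantee the conclusion, can be written as a simple schema (a generic textbook illustration, not anything specific from the episode):

```latex
% Modus ponens, a paradigm of deductive validity:
% if both premises are true, the conclusion must be true.
\begin{align*}
&\text{Premise 1:}\quad P \rightarrow Q\\
&\text{Premise 2:}\quad P\\
&\text{Conclusion:}\quad \therefore\; Q
\end{align*}
```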

[00:05:06]

But that's not the way science works. I mean, scientists do help themselves to deduction, but largely, most of science actually works by induction. Now, induction is a more complicated beast. There is not just one kind of induction; there are different types. One of them, the classical version, is simply a generalization from a small sample of occurrences to a broader and broader set of occurrences.

[00:05:34]

So, in fact, the whole idea of generalizing is a type of inductive reasoning, and the puzzle is to figure out which generalizations are legitimate to make.

[00:05:41]

Right, now there is a lot of discussion, of course, for instance in statistics, about how much of a sample size you need, and how representative the sample is, to make sure that the sample is representative in order to extrapolate, to generalize to the population, and so on and so forth. But that's not what bothered Hume. Hume was interested in a much more fundamental problem.
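As a concrete illustration of the sample-size point, here is a minimal sketch using the standard normal-approximation formulas for a sample proportion (the specific numbers are generic illustrations, not from the episode):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p_hat with n observations."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

def sample_size_needed(margin, p_hat=0.5, z=1.96):
    """Smallest n giving at most `margin` of error; p_hat=0.5 is the worst case."""
    return math.ceil((z ** 2) * p_hat * (1 - p_hat) / margin ** 2)
```

This is where the familiar "about a thousand respondents for a 3-point margin" figure in polling comes from: `sample_size_needed(0.03)` gives roughly 1,068, assuming the sample is representative, which is Hume's deeper worry in a nutshell.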

[00:06:03]

He said, well, how do we justify induction itself? Why do we think that induction actually is a good idea,

[00:06:11]

something that works? However you put it, it turns out that the answer has to be something along the lines of: well, induction is a good idea, a good approach to find out things about the world, because, in fact, it has worked in the past. But Hume pointed out that that's circular reasoning, because now you're trying to justify induction on inductive grounds. Now, circular reasoning is a problem if you want to defend a position by reasoning, because, of course, you're assuming the very truth of the thing you're trying to establish.

[00:06:48]

This is known as the problem of induction, or Hume's problem of induction. He posed it in the 18th century, and as far as I know, there hasn't been any reasonable answer to that problem. People have tried, most famously Karl Popper, one of the most influential philosophers of science of the 20th century, the originator of the idea of falsificationism: the idea that a scientific hypothesis, in order to count as science, has to be falsifiable. There has to be a way to show that it is wrong,

[00:07:18]

if it is in fact wrong. If a statement is not falsifiable, then it's simply not scientific, because there is nothing you can do: you cannot, even in principle, prove that it's wrong.

[00:07:29]

What was his answer to the problem?

[00:07:30]

Well, he thought falsification was the answer to the problem of induction, because falsification is about demonstrating that something is wrong, not demonstrating that something is correct. Popper agreed that there is no way to prove that a scientific theory is correct. Scientific theories can only survive one test after another; you can never prove that they're correct. But all it takes is one big stumble, one big fall, and you've proven that the theory is in fact wrong.

[00:07:59]

Now, what if it stops being wrong tomorrow? I mean.

[00:08:03]

Well, the idea is that it can't, unless you're wrong about the reasons why it was wrong. Right.

[00:08:08]

But those reasons have held in the past. But it seems like Hume would say, well, how do you know they'll continue to hold in the future? That doesn't really seem to get at the heart of the problem.

[00:08:15]

No, it does. Let's use what is called naive or simple falsificationism, because otherwise things get really complicated very quickly. The idea is that your theory, for instance, let's say Einstein's theory of general relativity, makes a prediction, and that prediction is that light should be bent by gravitational forces by a certain degree, and so on and so forth. You do the observation.

[00:08:43]

It turns out the light is not bent by gravitational forces. Then, according to Popper, you don't need to be bothered in any way anymore by that theory, because you've simply shown, dramatically, that a prediction of that theory turned out to be wrong. You can reconstruct that deductively, which is where we get to Hume's problem of induction. So Popper said: if the theory predicts this particular fact, and the fact is incorrect, therefore the theory is wrong. That is in fact a valid form of deductive reasoning.

[00:09:16]

But there's no temporal component to that. No, that's right. There's no temporal component.

[00:09:22]

Well, yes, but it's hard to imagine. You know, it's easy to imagine how things can change in the future, in the sense that a theory that seemed to be confirmed all of a sudden starts diverging from empirical data. It's hard to imagine a theory that has diverged from the empirical data all of a sudden becoming reconciled with it.

[00:09:39]

But Hume's whole point was: induction has always worked in the past, but what if that's not true tomorrow? That seems like at least as fundamental and unimaginable a situation. But still, how do you say that?

[00:09:51]

How do you assign a probability to that continuing?

[00:09:53]

That is correct. Now, that is correct. That is why Popper reconstructed his falsificationism as a form of deductive reasoning. As I said, you can actually put it formally. This is the reconstruction: if theory X is correct, then fact Y should occur. Fact Y does not occur. Therefore, theory X is incorrect. That is a deductively valid form of reasoning, meaning that if the premises are correct, then the conclusion necessarily follows.
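Popper's reconstruction, as described here, is the classic modus tollens schema:

```latex
% Falsification recast as modus tollens:
\begin{align*}
&\text{Premise 1:}\quad T \rightarrow Y && \text{(if theory $T$ is correct, prediction $Y$ occurs)}\\
&\text{Premise 2:}\quad \neg Y && \text{(prediction $Y$ does not occur)}\\
&\text{Conclusion:}\quad \therefore\; \neg T && \text{(theory $T$ is incorrect)}
\end{align*}
```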

[00:10:22]

Now your objection is: well, but some of the premises might not be correct. That's what Hume would say. No, no, no.

[00:10:28]

I don't think Hume would have a problem with that. I think you do have a correct objection, but I don't think that Hume would object on those grounds. Your objection is: one of the premises of the reasoning is that fact Y turns out to be incorrect. Now, it may be that it appears to be incorrect today but not tomorrow; tomorrow we're going to find out that it was actually correct.

[00:10:50]

That is not a problem for Popper, because Popper, by reconstructing the reasoning in deductive fashion, essentially agreed that, yes, if one of the premises turns out to be incorrect, of course the conclusion is wrong. But his achievement, in theory, and the reason why he was so happy about it, was that he had managed to recast scientific progress in terms of falsification, and then to recast falsification itself as a form of deductive reasoning. If he had succeeded in that, it would have been absolutely correct that there would have been a way to avoid the problem of induction, because there was no induction to be brought in.

[00:11:32]

Science would have been established on an entirely deductive basis. The fact is, that doesn't work, and it doesn't work because falsificationism doesn't work. And the reason falsificationism doesn't work is that even if you make a prediction and the prediction is wrong, there are plenty of cases in the history of science where scientists simply did not abandon a theory because a prediction was wrong. And that's for good reason, because there may be several different explanations for why the prediction turned out to be wrong, as you pointed out.

[00:12:03]

For instance, the observation might be incorrect. So, to go back to my earlier example, suppose we're talking about the general theory of relativity, which makes this prediction about light being bent by gravitational forces, and you try to observe it and you don't observe it. For Popper, that would count as a falsification of the theory. But somebody could reasonably object: well, maybe it's not the theory that was wrong.

[00:12:32]

Maybe it was just that your measurement instruments were not powerful enough. Maybe there was a deviation and you simply were not able to measure it. So there's always another possibility, other than the theory being wrong, that could account for the mismatch between the theory and the prediction. If that's the case, then the entirety of falsificationism falls down. It doesn't work, which means that we don't have a non-inductive account of science.

[00:13:01]

And so science still does work by induction. And if it does work by induction, and if Hume was correct that there is no rational way to justify induction, the conclusion is kind of sobering: well, science works, it works because it uses induction, but we have no idea how to justify induction.

[00:13:19]

You know, it's never bothered me that much.

[00:13:21]

I'm perfectly content to just have an asterisk on every empirical finding we ever have, saying, you know, "assuming induction continues to work," because the alternative is just... it's such a sterile line of argument.

[00:13:34]

I mean, you can't go anywhere.

[00:13:35]

Yeah, it just seems like a disclaimer

[00:13:39]

we just have to add to everything now, and then we can just move on and ignore it.

[00:13:42]

Yeah, but it should bother you. So here's the thing: from a practical perspective, I would agree with you. I mean, Hume was certainly not a radical skeptic. He was definitely not the kind of person to say, therefore we should stop doing science because we don't know whether what we're doing is absolutely justified. We will keep doing it because it seems to work, and it does, you know, it does produce results.

[00:14:05]

But we shouldn't be under the illusion that what we're doing is rational, because when you try to answer the question, well, rational in what sense? What is the ultimate justification for what you're doing? You don't have a good answer for it. And a curious person, I think, should be bothered by that: wow, this is an interesting thing. Here's something that seems to be working, that in fact does work, and that seems to me the ultimate application of reason.

[00:14:30]

And in fact, it turns out we don't have a particularly good reason to believe that it should work.

[00:14:35]

That seems like an overly demanding way to define "rational" or "reasonable," to demand that it have some ultimate justification. And you can always keep saying, well, why is this rational?

[00:14:47]

OK, well, why is it rational to be rational, and so on. I mean, you could always just keep asking for more reasons.

[00:14:54]

I know. But the idea was that if you were able to give a deductive explanation for why induction works, you would have reached bottom. OK, so you would actually have an answer.

[00:15:09]

I mean, I understand your point about an infinite regress, but that isn't the danger here. The idea is that there could potentially have been a way to stop that line of questioning and say: OK, well, here it is, I'm giving you a deductive explanation for why induction works. Had that succeeded, which, as I said, is what Popper was trying to do, that would have been it.

[00:15:36]

Now, the thing that I think makes it particularly interesting is that in the meantime, meaning the early part of the 20th century, other people were trying to do the same with logic itself and with mathematics. Right. This was the famous quest that Bertrand Russell, for instance, was involved in, immortalized in Logicomix.

[00:15:55]

Exactly. Yes, if anybody wants to look at the funny, interesting, well-drawn version of it.

[00:16:04]

That's in Logicomix, all one word, with an X at the end, correct?

[00:16:09]

That is correct. We'll add a link on the website, naturally.

[00:16:13]

And so the question, as explained in Logicomix, is this: Bertrand Russell and several other prominent logicians of the early part of the 20th century were trying to establish a logically complete, ultimate foundation for mathematics, and ultimately for logic itself. And so the idea was similar, essentially, to the quest that had bothered philosophers since Hume's problem of induction: to establish some kind of foundation for the way in which we proceed at uncovering knowledge. Now, here's the problem.

[00:16:48]

Just in the same way in which Hume posed the problem for science, and Popper and many others failed to solve it, it turns out that Russell and company were also not able to find ultimate foundations for logic and mathematics. There is the famous Principia Mathematica, which is a huge book that manages to prove that one plus one equals two after several hundred pages of workings. And it's nowhere near, according to Russell himself, anything like what we would need in order to establish truly independent foundations for logic and mathematics.
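As a playful aside on the "one plus one equals two" point: the flavor of deriving arithmetic from primitive notions can be mimicked in a few lines of toy code. This is a Peano-style sketch for illustration only, nothing like the Principia's actual logicist apparatus:

```python
# Toy Peano arithmetic: a number is how many times "successor" wraps zero.
ZERO = ()

def succ(n):
    """Successor: wrap n in one more layer."""
    return (n,)

def add(m, n):
    """Peano recursion: m + 0 = m;  m + S(n) = S(m + n)."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

ONE = succ(ZERO)
TWO = succ(ONE)

assert add(ONE, ONE) == TWO  # "1 + 1 = 2", in far fewer than several hundred pages
```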

[00:17:23]

And what happened, of course, very soon was that Kurt Gödel came into play with his famous incompleteness theorems. And the incompleteness theorems by Gödel do show that, in fact, that quest simply cannot succeed: you're not going to have an ultimate foundation for any logical-mathematical system. Now, that leaves us in an interesting situation, it seems to me, because it leaves us, at the middle part of the 20th century, with the two, or let's say three, things that really seem to be at the foundation, at the basis, of science and skepticism and so on.

[00:18:05]

That is, the inductive method on the one hand, and mathematics and logic, which means deduction, on the other. All of these things lack an ultimate foundation. We cannot justify why they work from a rational perspective.

[00:18:19]

As I said, once I realized that this was the case, it did bother me for quite a bit.

[00:18:25]

You know, I think one of the most interesting and important things that came out of the failure of the quest to justify math within itself was the understanding that math is not really true, per se. It's sort of a useful system that we have established.

[00:18:43]

But there's no... I mean, you pick axioms. That's right. And then you derive your mathematics from those axioms. That's right. But there's no reason that you would have to pick only those axioms; you could just as easily pick others. We have these axioms because they lead to math that accurately models the world that we happen to live in, or even just leads to math that has some interesting problems.

[00:19:09]

But, I mean, you could add other axioms or take out other axioms and get a totally different math. That's right. In fact, there are some axioms which are sort of indeterminately true or false.

[00:19:20]

You could choose to include them or not choose to include them, and you get different results either way. You've probably heard of the axiom of choice, which is just the axiom that there's some method of picking an element out of any set, out of, you know, an infinite family of sets. It seems like that should either be true or false, that there's a way of doing that. But it's neither true nor false.

[00:19:45]

You can either say it's true and then you get a certain kind of math or you can say it's false and you get another one, right?
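The axiom of choice mentioned here can be stated compactly: for any family of nonempty sets, there is a function that picks one element from each:

```latex
% Axiom of choice: every family of nonempty sets admits a choice function f.
\forall\, \{A_i\}_{i\in I}\ \Bigl[\,\bigl(\forall i\in I:\ A_i \neq \emptyset\bigr)
\;\rightarrow\; \exists\, f : I \to \bigcup_{i\in I} A_i
\ \ \text{such that}\ \ \forall i\in I:\ f(i)\in A_i \,\Bigr]
```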

[00:19:49]

Yeah. So I think it is humbling, in a way, at least if one were operating on the mindset that there is some ultimate truth with a capital T. That's right.

[00:19:57]

And it is, I think, humbling also in the sense that the same, well, parallel reasoning works also for logic itself, as well as for science. So, for instance, in logic it's the same idea. First of all, there's more than one type of logic; in fact, there is essentially an infinite number of logics. And they work the same way: you can start with certain rules, certain assumptions, certain axioms, and derive one type of logic.

[00:20:24]

And you can start with different assumptions and different axioms and derive another kind of logic. Now, they're not arbitrary. One of the things that we should make clear is that you can't do this arbitrarily, because a lot of these axiom choices may lead to incoherence, may lead to contradictions and things like that. So those are out. There is an infinite number of possibilities, but there's also an infinite number of ways of doing it wrong.

[00:20:46]

So it's not like, all of a sudden, anything goes. In science, that has led, or I should say in philosophy of science, because scientists tend to be very pragmatic and not particularly bothered by these things, but in philosophy of science, this line of reasoning, the fact that, as you put it a minute ago, mathematics is not true in any big capital-T sense of the term, it's just something that works in a particular

[00:21:14]

way, you know, and for particular purposes, has led to the so-called antirealist school in philosophy of science. The antirealist school essentially says that science is not about truth, not about models of the way the world really is, because we don't know and we cannot know the way the world really is. It's just about models that work well enough, in the sense that they give us predictions that we can act on. We make models, and scientific theories essentially are models of the world, and we don't know and will never know whether they're true or not.

[00:21:52]

We can simply know that some of these models work better than others.

[00:21:55]

And for practical purposes... Quickly getting back to logic: yes, we talked about math not being true with a capital T, but logic also, I've been astonished to learn, is not taken to be true with a capital T. The fundamental principles of logic, something as self-evident as, you know, "a statement can't be both true and false at the same time." Right. New York City Skeptics actually hosted a lecture a few months ago by a famous philosopher named Graham Priest, a colleague of mine at the City University of New York.

[00:22:27]

Right. And one of his big points is that we should accept the fact that statements can be both true and false, and that's the only way out of certain logical paradoxes.

[00:22:36]

Right. He's a world expert in what is called paraconsistent logic. I have some suspicions about it, but it's really intriguing for the reasons you brought up.
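Priest's approach can be illustrated with a toy model of his "Logic of Paradox" (LP), one standard paraconsistent logic. This sketch uses the usual three-valued semantics, with a middle value B for sentences that are both true and false (the encoding as numbers is just a convenience):

```python
# Logic of Paradox (LP), three values: F(alse)=0.0, B(oth)=0.5, T(rue)=1.0.
F, B, T = 0.0, 0.5, 1.0

def neg(v):
    """Negation swaps T and F, and leaves B fixed."""
    return 1.0 - v

def conj(v, w):
    """Conjunction is the minimum under the ordering F < B < T."""
    return min(v, w)

def disj(v, w):
    """Disjunction is the maximum under the same ordering."""
    return max(v, w)

def designated(v):
    """A sentence 'counts as true' in LP if its value is T or B."""
    return v >= 0.5

liar = B                           # the liar sentence gets value B
assert designated(liar)            # it is (at least) true...
assert designated(neg(liar))       # ...and so is its negation
assert not designated(conj(B, F))  # yet a contradiction doesn't make everything true
```

The last line is the paraconsistent part: unlike in classical logic, accepting one contradiction does not force you to accept every sentence.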

[00:22:46]

Now, all of this, however, I think did something interesting in the middle part of the 20th century: all of this realizing that, you know what, science is based on induction, and induction is not justifiable rationally; math and logic are based on deduction, and you run into the same kind of problem: you cannot have an ultimate justification for it.

[00:23:07]

So, in other words, this failure of foundational projects, of asking what is the ultimate foundation of science,

[00:23:16]

mathematics, and logic, has led philosophers, broadly speaking, to a different way of looking at it. And perhaps the most important philosopher in that respect is Quine, who wrote a very famous, influential paper in the 1950s called "Two Dogmas of Empiricism." And Quine's idea, which I think is broadly accepted at this point in philosophy of science...

[00:23:40]

And you can always find people who disagree.

[00:23:41]

But the idea is that, look, instead of thinking about a foundation for knowledge, we should think about a web of knowledge. Because the foundation metaphor implies an edifice of knowledge, something that requires a foundation; it implies that there is a point where you start, below which you don't go, a solid ground of evidence.

[00:24:09]

Exactly. Something that is self-evident, or self-justified, or whatever it is. Well, now we know that that's not the case. So instead of thinking of knowledge as an edifice that you keep building on and on and on, and for which you therefore need some kind of foundation, think of it in terms of a web.

[00:24:24]

Now, interestingly, by the way, many, many years after Quine wrote his paper, we entered the era of modern computing and databases, and the largest database of scientific articles in the world is in fact called the Web of Science. Interestingly, they picked up that metaphor.

[00:24:43]

Now, Quine's idea was that, look, knowledge is actually more like a web, where there are all these threads, some of which are very thick and very large, and others are more tenuous and smaller. And some of these threads are connected to a bunch of other threads; others, on the other hand, are connected to only a few things. And these threads represent both factual knowledge about the world as well as theoretical knowledge about the world.

[00:25:10]

So part of the web is also the way we reason, all the scientific instruments that we use and the assumptions that go into building those instruments, as well as, ultimately, the rules of logic and mathematics themselves. All of this makes up this web. And the idea was that when we find something that doesn't work, what we do is find the thread that isn't working anymore. It could be a fact, it could be a hypothesis, it could be a theory, or it could be a larger chunk of things.

[00:25:41]

We just cut it out and replace it with something else. But because we have a web, the rest of the web essentially keeps itself up without having an actual foundation, like a spider web.
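Quine's metaphor can be loosely sketched as a graph: cut a peripheral thread and the rest of the web stays connected; cut a central one, like induction, and large parts come loose. Everything below, the belief names included, is a made-up illustration, not anything from Quine:

```python
# A toy "web of belief": beliefs as nodes, mutual support links as edges.
web = {
    "induction":           {"physics", "biology", "statistics"},
    "physics":             {"induction", "relativity"},
    "biology":             {"induction", "a field observation"},
    "statistics":          {"induction"},
    "relativity":          {"physics"},
    "a field observation": {"biology"},
}

def reachable(web, start):
    """All beliefs still connected to `start` by following support links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node not in web:
            continue
        seen.add(node)
        stack.extend(web[node] - seen)
    return seen

def remove(web, node):
    """Cut one thread out of the web, along with the links into it."""
    return {k: v - {node} for k, v in web.items() if k != node}
```

Removing the peripheral "a field observation" leaves everything else reachable from "physics"; removing the central "induction" strands most of the web, which is the implicit hierarchy being discussed.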

[00:25:53]

There's kind of this implicit hierarchy, though: things at the center of the web are more fundamental, if not the absolute foundation.

[00:26:01]

And so if you were to cut those out, something like, you know, belief in induction, then you lose a whole lot of other stuff.

[00:26:08]

That is correct. Smaller, thinner threads at the periphery of the web can be more easily replaced.

[00:26:13]

That's right, that is correct. And Quine, controversially, went so far as to say that if it turns out that the laws of logic don't work, particularly in a certain number of instances, then too bad for the laws of logic: you should just cut them out and replace them with something else. Now, other people say, well, wait a minute, that may be the case, but before you cut that kind of thread, you'd better have really, really good reasons for doing it.

[00:26:41]

So you're right, certainly not all threads are created equal; some of them are much more important than others. But I find a lot of appeal in this model, because, first of all, it does away with the foundational metaphor.

[00:26:54]

And second of all, it sort of gives you this idea of knowledge as an organic thing that keeps growing and that can be constantly revised, almost everywhere, if necessary. Of course, again, you have to have good reasons to replace some of the threads. But it's this open-ended quest where you're not going to bet your life on anything, including logic itself. You can say, well, you know what, fine, it turns out that the principle of non-contradiction is wrong.

[00:27:18]

OK, fine.

[00:27:19]

What do we do without the principle of non-contradiction, the thing that Graham Priest is attacking, which is exactly what's self-evident to the rest of us? Right? Yeah, I don't know. I don't know if I'm willing to go that far. At the point when someone's telling me that something can be both true and false, my reaction is not, "oh my God, that's amazing." It's, "OK, how are you defining true and false? Because clearly we're not using the same definitions."

[00:27:38]

Right.

[00:27:38]

But as you know, people like Priest do start with real problems. Like, you know, paraconsistent logic comes out of the analysis of contradictory statements and paradoxes. So paradoxes are typically the thing that standard formal logic has a problem with, cannot solve. Now, you can do two things there. Again, you can say, well, we haven't figured out yet how to solve the paradox, but there must be a solution. Or, you know, if a paradox has been around for a couple of millennia and nobody's figured out how to solve it, you might want to at least try Priest's and others' way and say, hey, perhaps we should revisit the foundations of logic and bite the bullet and say, well, the paradox is not a paradox.

[00:28:26]

It's just an indication of the limitations of a particular kind of logic that we've been using so far.

[00:28:31]

I would at least be a little more comfortable with saying it's indeterminate, neither true nor false, or that it doesn't make sense to call it either true or false, as opposed to saying it's both true and false.

[00:28:39]

That seems more logical and more conservative, I guess. So why don't we move on now and talk about not the limits of certain fields, like logic and math and science and philosophy, but the limits of human brains? Because, you know, it is a sobering thought that

[00:29:00]

our brains are not optimally designed to reason. And so, in our quest to understand the world, both inductively and deductively, we have a bit of a handicap.

[00:29:13]

Absolutely. So the first part of our discussion sort of showed that even reason under optimal conditions has limits; in this, again, Hume was right, and other people were right. But as it turns out, human beings, who are the only examples of reasoning beings

[00:29:34]

that we know of, don't work anywhere close to the ideal of a rational agent.

[00:29:40]

I mean, we have all sorts of cognitive biases, all sorts of things that can go wrong. The confirmation bias, in particular, has been in the news quite a bit over the last several weeks or months. Psychologists and cognitive scientists are finding that the confirmation bias is essentially a human universal. Whenever we have an idea, we tend to selectively remember or look for pieces of evidence that confirm that idea, and we discard or filter out pretty much anything that potentially contradicts it.
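The filtering described here can be caricatured in a small simulation, entirely hypothetical numbers: one agent does textbook Bayesian updating on every observation, while a "biased" agent simply skips any disconfirming evidence:

```python
import random

def bayes_update(prior, evidence, p_e_h=0.8, p_e_not_h=0.3):
    """One Bayesian update on a yes/no piece of evidence for hypothesis H."""
    like_h = p_e_h if evidence else 1 - p_e_h
    like_not = p_e_not_h if evidence else 1 - p_e_not_h
    return prior * like_h / (prior * like_h + (1 - prior) * like_not)

def final_belief(biased, n=200, h_is_true=False, seed=0):
    """Belief in H after n observations; by default the world is one where H is false."""
    rng = random.Random(seed)
    p = 0.5
    for _ in range(n):
        e = rng.random() < (0.8 if h_is_true else 0.3)
        if biased and not e:
            continue  # confirmation bias: disconfirming data is filtered out
        p = bayes_update(p, e)
    return p
```

With these made-up numbers, the unbiased agent's belief in the false hypothesis collapses toward zero, while the biased agent, seeing only the occasional confirmations, ends up nearly certain it is true.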

[00:30:18]

Now, there are several suggestions on why that might be the case.

[00:30:24]

For instance, there is this very influential article that came out recently in Behavioral and Brain Sciences, in volume 34, in 2011. The article is co-authored by Hugo Mercier and Dan Sperber, and it's entitled "Why Do Humans Reason? Arguments for an Argumentative Theory." Essentially, this is an evolutionary-psychological take on human reasoning. And the idea that Mercier and Sperber put forth is that all these cognitive biases are not examples of things that go wrong with the human brain; this is the way the human brain is supposed to work.

[00:31:02]

Now, that sounds weird. You mean "go wrong" in the sense of deviating from the processes that lead to the truth?

[00:31:08]

Right. But their point, which is interesting, is that they question the basic assumption that the purpose of reason is to find out the truth about things.

[00:31:18]

OK, but you can't question the definition of reason, right?

[00:31:21]

You would think. But they say, well, from an evolutionary perspective, it may actually turn out that reason is, in fact, an instrument to convince other people in your group to do things your way.

[00:31:31]

Then what you should say is that the human brain was designed as an instrument to convince people to do things your way; it was not designed to reason. You don't redefine reason as whatever it is you do.

[00:31:37]

Anyway, I would agree with you on the semantic point. And besides, as you know, I tend to be fairly skeptical of evolutionary-psychological explanations in general (we have done a show on that topic), not because they are necessarily wrong. In fact, for all I know, Mercier and Sperber might be right. It's just that I think it's incredibly difficult to actually test these kinds of hypotheses. But let's consider it for a minute.

[00:32:00]

So what they're saying here is that human reason... well, let's make the distinction between ideal reasoning and human attempts to reason. Thank you. Human reasoning is flawed for a very interesting reason: it is not truth-conducive. Not because there's something wrong with the human brain, but because the evolutionary purpose, if you want, of reason is not to find out the truth; it is to convince other people to do your bidding.

[00:32:32]

So the confirmation bias makes a lot of sense. Right, right. Mustering arguments for your claim without bothering to check whether there are better arguments for another claim.

[00:32:40]

That's right. So according to Mercier and Sperber, instead of evolving as philosophers and scientists, we evolved as lawyers. Right?

[00:32:50]

Actually, yeah, that's a nice analogy. Now, here is the problem I do find with that argument. As I said, set aside how we actually test these hypotheses, because then we run into yet another discussion about the limits of evolutionary psychology, and we already covered that. Let's grant, for the sake of argument, that this is a reasonable hypothesis that might be tested one way or another.

[00:33:14]

The thing that bothered me when I read the article by Mercier and Sperber is this: OK, but wouldn't it be a marvelous way to convince other people of your point of view to make an argument that is actually in agreement with the way the world actually is?

[00:33:30]

I mean, after all, if I want to convince you that it is a bad idea to jump out of a fifth-floor window of a building in New York, it helps that I actually have gravity on my side. Right? I can say, look, my friend, I can throw this stone and see what happens. It doesn't fly. So the argument, it seems to me, becomes much more convincing when I can not only articulate the argument well, using rhetorical skills (which I would agree are certainly socially useful),

[00:34:00]

But when truth actually happens to be on my side. If I were trying to convince you to jump out of the window in spite of gravity, it seems to me that I might be able to come up with some rhetorical trick. But once you jump, I've lost the argument.

[00:34:14]

Yeah, although those people aren't going to be around to dispute your authority. That is true. But any witnesses might.

[00:34:20]

Yeah, I mean, that is a dramatic example. But I guess if the benefit... Well, I do go for drama. Yes, you do.

[00:34:28]

And you listeners can't see his hands, but they are dramatic too.

[00:34:32]

I mean, if the benefit that natural selection confers on you for being rhetorically skilled is significantly greater than the benefit it confers on you, in terms of status and social power, for being right, then you're just selecting so heavily for the former that I could see the latter kind of falling by the wayside, even though it would help you if you were also right.

[00:34:51]

It just wouldn't help you enough to, you know, have that trait be selected strongly for.

[00:34:55]

I mean, I agree with your point. But I would argue that even in that case, there would also be selection for essentially perfect reasoners, that is, for other people who would be able to survive better. Right. Right.

[00:35:09]

I would think that if, in fact, natural selection is favoring rhetorical skills, then it would also automatically favor perfect reasoners who would be able to see through your logical fallacies and call your bluff. Apparently that has not been the case.

[00:35:24]

I'm all for that, because I like the idea of our ancestors preferring to mate with the people who could decimate their opponents' logical arguments the most incisively.

[00:35:36]

That's the king of the harem. That's right.

[00:35:37]

Somehow, you know, for nerds there is some appeal to that idea.

[00:35:44]

Now, the fact remains that we do know there are cognitive biases; there's no question about that, however they came about evolutionarily speaking. They're there, and they're very difficult to dislodge. So I read an article recently that gets to that point from an angle that is actually interesting for our usual discussions. The article was published recently, I think last year, in a philosophy journal called Zygon, and it's by Dorian Ridker.

[00:36:14]

The title of the article is "The Design Method: How to Confuse Organisms with Mousetraps, Machine Metaphors, and Intelligent Design." It talks about intelligent design creationism and the fact that creationists often invoke an analogy, or even an identity, between living organisms and machines: look, if a living organism is a machine, and machines are designed for a purpose, then we must have been designed for a purpose. That's the basic idea. Now, the author here makes an interesting point.

[00:36:45]

She says, look, this is not a mistake in the sense of, you know, a bunch of stupid people arguing in a fallacious way. The metaphor is natural, first of all, because human beings think in metaphors; we simply cannot avoid thinking and speaking in metaphors. There are several interesting examples in the paper. For instance, when we say things like, oh, I feel down today, we don't normally mean that as a metaphor.

[00:37:10]

We mean, literally, I feel down. But in fact it is a metaphor; we're not really down.

[00:37:14]

It's so interesting, when you really examine our speech, just how much of it is metaphor. Exactly. I love doing that.

[00:37:19]

So on the one hand, the idea is that there is no way not to think in metaphors, not to think by analogy; that is just the way human beings think. So one thing we should not fault creationists for is thinking metaphorically, or by analogy. The other thing is, as it turns out, that particular metaphor, thinking of a living organism as a machine, has actually been very productive.

[00:37:46]

It was, for instance, at the basis of the origin of modern medicine, when Descartes famously equated the animal body and the human body to a machine. Part of his point was: look, it's made of pieces that work together for a purpose, which means, first of all, that you can understand what each part does. You know, the heart is a pump; it serves the purpose of pumping blood.

[00:38:13]

It also means that you can intervene and fix it, just like any machine. So not only is the metaphor a natural way to think; it also, in fact, delivers in a lot of other applications.

[00:38:26]

Of course, it just happens to be the case that thinking about the evolution of the diversity of life on Earth in terms of intelligent design is the wrong way to think about it. But the author points out that that realization is a fairly sophisticated achievement of modern science, one that was inconceivable before Darwin. Before Darwin, we didn't have an alternative mechanism; we did not have an alternative explanation. And it is true that very few people really are exposed to, or understand, the theory of evolution. So we shouldn't really go around being too cocky about

[00:39:00]

thinking that 90 percent of humanity is stupid just because they don't get evolution. There's an interesting analogy the author points out that I thought made a good point. The article has a couple of figures, and one of them is the so-called Müller-Lyer illusion. The illusion is the situation, which I'm sure most people have seen somewhere, where you have two lines of equal length, but they look like they are of different lengths because in one case some arrow-like segments at the ends point outward, away from the line.

[00:39:39]

And in the other case they point inward. The eye is fooled into thinking that the two lines are of different lengths because of the position of these flaps, basically, at the ends of the lines. In reality, if you measured them, they are exactly the same length. Now, the author says, look, there is nothing wrong here. You can't say, oh, vision doesn't work because you cannot tell that the two lines are the same; vision works exactly the way it's supposed to work.
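A crude ASCII rendering (our own illustration, not a figure from the article) makes the "measurement in hand" point concrete: the two shafts below are the same length by construction, and only the fin direction differs.

```python
SHAFT = "-" * 24  # both shafts are exactly the same length by construction

# The version with arrowhead ends is the one that is usually judged
# shorter; the version with open, tail-like fins usually looks longer.
arrow_heads = "<" + SHAFT + ">"  # like <---->
arrow_tails = ">" + SHAFT + "<"  # like >----<

print(arrow_heads)
print(arrow_tails)

# "Measurement in hand": strip the fins and compare the shafts directly.
shaft_a = arrow_heads.strip("<>")
shaft_b = arrow_tails.strip("<>")
print(len(shaft_a) == len(shaft_b))  # the shafts are equal, whatever the eye says
```

The interesting part, as the discussion notes, is that even after running the measurement, the two printed lines may still look subtly different in length; knowing the answer doesn't switch the percept off.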

[00:40:09]

It works most of the time. It's just that you happen to have bumped into a particular situation where the way the visual apparatus and the human brain are constructed makes a mistake. And even if you correct that mistake, if you tell the person, look, those two lines actually are of the same length, even if you measure the two segments and realize, measurement in hand, that they're the same length, you still cannot avoid seeing them as being of different lengths.

[00:40:40]

Even once you know that there is a bias there, that you're getting things wrong, the brain doesn't correct it. The bias doesn't go away; you still perceive it in the same way.

[00:40:50]

So the basic idea is, look, let's go easy on people who make mistakes because of cognitive biases, because these are, first of all, not mistakes in the broader sense of the term. This is just the way the human brain works, and it does work most of the time. The so-called cognitive biases actually are heuristics that work most of the time; they allow us to make reasonable decisions quickly in a lot of circumstances.

[00:41:16]

They fail in certain areas, and in those areas is where you need special training in, you know, logic, critical thinking, mathematics, probability theory or science to figure out what's wrong, because that training is very specialized and it does take time. The idea is, let's go a little easier on people and cut them some slack, because they're not stupid. They're just using their brains the way they're supposed to be used.

[00:41:41]

They just don't have the additional training that it takes to be a scientist or a logician or whatever it is.

[00:41:47]

Before we wrap up, I just want to talk a little bit about the idea that our brains were not actually designed to reason properly, especially about really complex things that are far outside the environment that our ancestors grew up in.

[00:42:04]

I do think that it throws into doubt our ability to actually solve a lot of the questions that we're interested in, in the future.

[00:42:11]

Not to say that we won't, but it does. I mean, even if we can actually collect all of the empirical evidence that we would need to answer all of our questions about the nature of the universe and time and space and consciousness, it seems entirely possible to me that our brains just would not be equipped to understand it. Even if we, through experimental results, got an answer about time having a distinct beginning, how can we hope to really comprehend what it would even mean for time to begin?

[00:42:37]

That's right. Or how could we hope to comprehend what it would mean for there to be a near-infinite number of universes? So, you know, I'm somewhat optimistic, in that over time I individually, and also we as a species, have gotten better at understanding things that seemed completely unintuitive or even contradictory. Right. But who knows whether we're actually going to be able to wrap our minds around some of the results that we're going to get in the future?

[00:43:08]

No, that's exactly right. And philosophers have a term for that: epistemic limits. And they distinguish between two different kinds of epistemic limits. One is the kind of limit that is posed by, you know, accidents. That is, for instance, we may never be able to figure out how life originated on Earth, not because it's an impossible problem, but because we may never find enough clues. Right.

[00:43:32]

To distinguish between the possibilities. Right. It just happens to be the case. Yeah. And I think, even if we did have all the clues...

[00:43:37]

Right, exactly. Nobody is going to be able to really feel like they understand exactly the implications of those clues. That is the second level of epistemic limit: there may be limits intrinsic to the human ability to reason, to figure things out. But on top of that, earlier in this podcast we also discussed epistemic limits that may be intrinsic to the very ideas of logic and science themselves, regardless of the limitations of human beings. So that's an interesting way to put it: limitations inherent to the discipline, say, of math, and then limitations inherent to our ability to understand that discipline.

[00:44:10]

That's right. And then just limitations due to luck; some of the clues might have disappeared. All right.

[00:44:15]

On that fatalistic note, we are out of time. So let's wrap up this section and move on to the rationally speaking picks. Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites or whatever tickles our rational fancy. Let's start, as usual, with Julia's pick.

[00:44:47]

Thanks, Massimo. My pick is a book. It's called Feeling Good: The New Mood Therapy, by David Burns. And it's a very readable, popular introduction to the field of cognitive behavioral therapy, which I think we've discussed on the show before. It's the one really solidly validated school of therapy...

[00:45:08]

Of talk therapy, yeah, that's right. Thank you. And David Burns is one of the founders and popularizers of it. It's been a bestseller since it was first published, I think, in the 70s.

[00:45:20]

And they've actually done studies where they've given this book to people who are depressed, with control groups of depressed people who didn't get the book or who got some other control.

[00:45:31]

And the people who were just given the book and told to read it did significantly better in the long run, at future checkups to see if their depression had improved. And the reason I think this is an appropriate pick for our rationality and skepticism podcast is that cognitive behavioral therapy has sort of two components intertwined: the cognitive one and the behavioral one. The behavioral one is really more just about training yourself, getting into certain habits, and hacking your brain to some degree to get yourself to behave and react the way you want to behave and react.

[00:46:03]

But the cognitive part is really much more about using rationality to make your life happier and more productive and more effective and more efficient.

[00:46:12]

So often what will happen (and I've seen this in my own life and in the lives of friends) is that you'll have some negative emotional reaction to some situation, and you won't really examine what's at the root of the emotion. It feels like a very natural emotion to have: like, I don't know, meeting someone who's better than you at something you care about, and feeling bad. And so you won't actually question the negative emotion, and you don't try to get rid of it, because it feels like the natural emotion to have.

[00:46:43]

But if you really dig down, you find that what's giving rise to that emotion is a whole bunch of actually false beliefs about the world, or false reasoning about the world. And you can cash those out in terms of the logical fallacies and cognitive biases that lead to those beliefs. And so the book gives all these very concrete, practical exercises for taking your emotion and working backwards. So, you know, why do I feel this? Just say the first claim that comes to your mind, and then...

[00:47:10]

OK, well, what evidence do I have for that? Or what process of reasoning did I actually use to come to the belief that made me sad or angry or anxious?
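The backward-working exercise described here can be sketched as a simple structured record. The field names and example content below are our own hypothetical illustration of the general idea, not Burns's actual worksheet.

```python
# A hypothetical "thought record": start from the emotion, surface the
# automatic belief behind it, then examine the belief explicitly.
thought_record = {
    "situation": "met someone better than me at something I care about",
    "emotion": "inadequacy",
    "automatic_belief": "if I'm not the best, my work is worthless",
    "suspected_distortion": "all-or-nothing thinking",
    "evidence_check": "someone is always better; my work still has value",
    "revised_belief": "I can admire their skill and keep improving",
}

# Walk the record in order: emotion first, examined belief last.
for step, content in thought_record.items():
    print(f"{step:>20}: {content}")
```

The design point is simply that making each step explicit, in writing, forces the hidden belief out where its evidence can actually be checked, which is the cognitive half of the therapy as described above.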

[00:47:20]

That's very interesting. I wrote a book a few years ago with similar premises, although it came at them in a completely different way: it explored the relationship between Aristotle and cognitive behavioral therapy. Oh, really? Well, the idea was essentially the same: a lot of the stuff that causes us problems is actually the result of unquestioned assumptions. And if you question those assumptions, if you reconstruct your hidden reasoning, then you see that actually you don't need to be bothered by whatever it is that the problem happens to be.

[00:47:50]

So it's not.

[00:47:51]

Yeah, like meeting someone who's better at something than you are. Did you really think that you were the best in the world? Like, clearly there exist people who are better than you at it.

[00:47:59]

Now, we'll talk about this later. Yes, we'll talk about that later. Yeah. What's your pick? So my pick is an article that appeared recently in the Chronicle of Higher Education by Damon Horowitz. The peculiar thing about Damon Horowitz is that he holds an interesting and very unusual job: he is the philosopher-in-residence at Google.

[00:48:19]

I've never heard of such a thing. Right.

[00:48:21]

So Google has a philosopher-in-residence, and this guy is actually a technologist. I mean, he started out with a degree in computer science and he started his own company, which was bought by Google. And then, as it turns out, he went back to school and got a Ph.D. in philosophy. And now, as I said, he is Google's in-house philosopher.

[00:48:41]

So he's now the only philosopher who gets fed caviar with, like, shoulder rubs at lunch.

[00:48:48]

It may be, although I found out a couple of years ago that one of the major TV networks (I forget which one, whether it was NBC or ABC) also has an in-house philosopher.

[00:48:59]

So, yeah, apparently it's a small niche, but there is one. The article is interesting.

[00:49:06]

It's called "From Technologist to Philosopher: Why You Should Quit Your Technology Job and Get a Ph.D. in the Humanities." And it's all about the value of studying the humanities, both in practical terms, interestingly, and in a broader, meaning-of-life sense. It's really interesting and well argued. Of course, as usual, one doesn't have to agree with everything there, but it is very good food for thought.

[00:49:33]

And it's certainly from somebody who has tried both and been successful at both approaches. He is, as I said, both a technologist and a philosopher, so he has done both the science approach to things and the humanities approach to things. And he thinks that his life and career are much better precisely because he managed to do both.

[00:49:52]

So it's interesting.

[00:49:53]

Like what we talked about in our humanities episode; there's always new material to add to that discussion. OK, well, that wraps up another episode of Rationally Speaking. So join us next time for more explorations on the borderlands between reason and nonsense.

[00:50:16]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.