[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at NYCSkeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what's our topic today?

[00:00:47]

Today is another very special episode of the Rationally Speaking podcast. That's right, listeners: our Christmas gift to you, or our non-denominational holiday gift to you.

[00:00:58]

Yes, it's a full hour of Q&A with me and Massimo, tackling the toughest questions that you all threw at us on the Rationally Speaking blog. We've got a really good lineup this time, so let's get started.

[00:01:12]

So I guess this could be interpreted as a gift, or as Santa bringing the coal, right? Depending on how you look at it.

[00:01:18]

Yeah, I know, you're all thinking: how do Massimo and Julia always know exactly what I want? We know. We know, listeners.

[00:01:25]

So let's get started. OK, the first question, and it's a great question, is from Gil. He asks: can we say that quantitative research is more scientific than qualitative research on average, or does it all depend on the logic and the arguments? Are there any criteria or characteristics of good qualitative research?

[00:01:46]

Yeah, that's a great question. Qualitative research has a bad reputation among some scientists, but I think it's undeserved.

[00:01:54]

I think we should just define it first. Well, I'll tell you how I think of the difference. I'd say that in qualitative research you're collecting non-numerical information. So instead of measuring something, you might be interviewing people, or maybe simply observing and describing how some system works, like a company or a neighborhood. And usually you're studying one example, like one case study, or you're studying a small group of examples rather than a large group, because you're not usually concerning yourself with statistical significance.

[00:02:27]

So this is usually the nature of sociological research, or sometimes it could be used in maybe political science or economics. It's a social science thing.

[00:02:37]

Well, yeah, largely that's true. But there are also instances in history. Obviously, a lot of history, though not all of it, is based on qualitative research. And in fact, so are even some historical sciences, like paleontology.

[00:02:53]

It's true that you can do some quantitative things in paleontology. But a lot of the interesting discoveries are qualitative, like, for instance, you know, the discovery of the asteroid hitting the Earth 65 million years ago. Yes, you can measure the amount of iridium in the layers and so on and so forth. But it's a qualitative discovery. I mean, either you find the crater or you don't. When people found the crater, for instance, a satellite identified it off the coast of the Yucatan Peninsula.

[00:03:22]

Well, that's qualitative research in a different sense from what you were talking about. That's true.

[00:03:26]

I guess I'm only thinking of the kinds of research where you're trying to divine some sort of general rule about what causes what, or how things happen.

[00:03:36]

Whereas for history, I tend to assume that's not the goal.

[00:03:39]

But I think it does make sense to think about qualitative research more broadly, as, like you said initially, anything that is not strictly quantitative. Now, even within quantitative research there are some interesting considerations. There are some areas, for instance, where it's easy to do quantitative research on certain aspects of a problem, but that sort of steers you away from what sometimes are the more interesting aspects of the problem, just because you cannot quantify them.

[00:04:08]

So I'll give you an example. My original area of research, when I was practicing biology, was quantitative genetics, quantitative and evolutionary genetics. It's called quantitative genetics for a reason: you quantify things. You study what are called quantitative traits, such as height or weight or length, things that you can actually measure and derive statistics about. But the thing is, in evolutionary biology one of the most interesting questions is how novel traits originate during the evolutionary process.

[00:04:43]

So, things that have never been there before. And those are much more difficult to study, in part because they're not really amenable to quantification. You can tell that an animal is longer than another one, or taller, or weighs more, or whatever it is; you can describe the difference in, let's say, the evolution of skull shape if you start measuring the length of one particular bone and then another bone and so on and so forth. But when it comes down to entirely novel structures that occasionally do arise in evolution, it's very hard to quantify them, and the tendency of quantitative biologists is to gravitate only toward things that they can actually measure and do statistics about.

[00:05:23]

Which means that they neglect, in some sense, other aspects of research that are equally interesting, if not more so, but less amenable to quantitative treatment. Now, that said, there are good reasons why qualitative research has its own limitations too. Right? I mean, where possible, we do want to quantify things and extract statistics about them.

[00:05:43]

Yeah, my sense is that qualitative research is really valuable mainly when it's about a question where you can't do quantitative research, like when you're looking for, you know, some historical artifact, or you're looking for a skeleton of an animal.

[00:06:01]

In general, though, I think it's most valuable as sort of preliminary, exploratory research, to help you form a hypothesis that you can then, hopefully, test in the future using quantitative methods.

[00:06:11]

But I think usually it can't produce definitive conclusions of its own beyond the particular case study that it's looking at.

[00:06:20]

So, I mean, if you have only one example, you have no way of knowing how representative it is of other cases. Whereas if you're doing quantitative research and you take a large, representative sample of the population, then you can get statistical significance and assume that you can generalize from your sample to the population. It's much harder to do that in qualitative research.

[00:06:40]

And to some extent that's true. But, you know, another example that comes to mind is from anthropology. I mean, for a long time we had this question about human evolution: whether humans evolved a bipedal posture first, or a large brain. Mm hmm. Well, it only took one discovery to figure it out. We found Lucy, who was clearly bipedal and clearly had a small brain. And there was really not much you needed to quantify about that.

[00:07:06]

You can quantify things later; you can actually measure the skull, and it's nice when you find confirmations, when you find another two or three skeletons. But really, the question was answered definitively the moment we found a single skeleton. And so that's a spectacular example of a qualitative finding that has not really been enriched by quantitative research much at all.

[00:07:27]

Right. So that's a nice situation, one in which all you need is a single example in order to clinch your theory or to disprove something. So I think that's a great example of where qualitative research is useful.

[00:07:39]

But I do think there are certainly whole fields in which qualitative research is used to produce general conclusions about human behavior or society, where the connection between your particular case study, or case studies, and the population as a whole is much more dubious.

[00:07:59]

And I think it also leaves a lot more room for subjectivity.

[00:08:02]

Right, because if you haven't precisely defined your hypothesis ahead of time, it's so much easier for the researcher to just interpret what he sees to match what he believes. So, you know, if you have, like, a sociological case study and you go into it believing that poverty is caused by racism, then you're likely to focus on the events and the comments and interactions that match that belief, and not on any other factors that might be contributing.

[00:08:27]

But that's a perfect example. There is another example that just came to my mind, however, a sort of cautionary tale about too much emphasis on the quantitative. The argument, of course, is never that you shouldn't be doing quantitative research, or that quantitative research is somehow not the gold standard, if you can do it. But there are pitfalls, and one of those was pointed out in a classic paper, I believe it was in Nature magazine many, many years ago, by Richard Lewontin, who is a prominent population geneticist at Harvard.

[00:08:59]

He was also, incidentally, one of the major influences on my early career; a really bright guy. And he wrote a paper about the importance of unusual events in biology. He pointed out that, you know, if we keep focusing on the mean of a population and the variance of its distribution, which are the two fundamental things that quantitative biologists are concerned about, right, what we end up doing is not paying much attention to the occasional outlier, which may turn out to be crucial because, again, it's the outlier that might evolve.

[00:09:34]

The new trait that is way outside of the population mean is the thing that you cannot even quantify, and yet such individual events may become very important. So here's an example. I think the title of the article was something along the lines of, you know, "How likely was Julius Caesar?" And the point was, you know, you can do all the quantitative research you want, for instance, on the social structure and military structure of the Roman Republic before it turned into an empire.

[00:10:05]

But the fact of the matter is, you can argue very reasonably that unless the right person was present at the right time, all those quantitative measures give you nothing in terms of predictive value. A very unique event had to happen, a qualitative event, and had that person not been there at that point, it probably wouldn't have happened.

[00:10:26]

So, you know, that's not to say, again, that the quantitative research there is uninformative, but if you do only the quantitative part, you're missing the big elephant in the room by focusing on the little details. Right. Now, I think that's a good example too. I'm coming at this with a background in the social sciences, and I would frequently get frustrated by qualitative research from sort of the softer end of the social science spectrum.

[00:10:50]

But before we leave this topic, it just occurred to me that there is a way in which single case studies can be, if not conclusive, then at least suggestive of a general rule, which is when you focus on a critical case. So when the question you're asking is about whether a certain phenomenon exists, or how widespread it is, you pick a case in which the conditions for that phenomenon are specifically at their worst.

[00:11:18]

So, the least conducive to that phenomenon. And then if you observe the phenomenon anyway, you can be reasonably confident that it's probably happening to some degree elsewhere as well. I was just thinking of a classic study of oligarchy in organizations. Oligarchy is when there's a small, connected group that holds most of the power, like a royal family, or a group of business leaders, like the United States right now.

[00:11:38]

You mean... oh, I'm sorry. No, OK. We promised. No, we're not going there.

[00:11:43]

We're only like ten minutes in and I don't want to go there already. So anyway, the researcher was examining oligarchy in organizations. And for his case study he chose this very horizontally structured grassroots organization with a very pro-democracy ideology, on the assumption that those were the conditions that should be the least conducive to oligarchy. So if we nevertheless observe oligarchy in that case, then it's fair to assume that oligarchy is relatively widespread. So I think, if you choose your case study conditions carefully, you can come up with suggestive conclusions that generalize to the population.

[00:12:14]

OK, so next question.

[00:12:17]

Oh, here's a nice one from someone named Kevin. He asks: I was recently told by my philosophy professor that you can't cite a philosopher, let's say Immanuel Kant, as an expert in an argument.

[00:12:29]

If this is true, what's the point of ethics if you can't be an expert on it?

[00:12:34]

I thought that was a very strange claim by that professor.

[00:12:38]

And in fact, I noticed you were a bit overreactive on the blog. OK, I have your quote, let me read it. You said: so please tell your professor of philosophy on my behalf that he or she is an idiot and should resign from the department.

[00:12:51]

Right. But I really... well, let me tell you, actually, there was a follow-up comment.

[00:12:55]

Where I said that I shouldn't say that he or she is an idiot, because I don't know the person. But that particular comment did strike me as idiotic. And it depends, of course, on what one means by expertise; I think we can have a serious discussion about that issue. In this sense: if by expertise one means, you know, moral authority, the authority to tell people what to do or not to do, then I would agree that nobody has that kind of expertise.

[00:13:22]

I mean, I certainly don't think that any moral philosopher would want to present himself or herself as the pope and tell people what is right and what is wrong. But, of course, they have expertise. They have expertise in moral reasoning. Moral reasoning is a particular kind of reasoning that has been expanded and has become more sophisticated over literally two and a half millennia, since Plato and Socrates started talking about it. There are many arguments in moral reasoning that you can no longer present as valid, because they have been thoroughly criticized and counterexamples have been discovered.

[00:13:58]

And so on and so forth. So there is a body of knowledge, in other words. And when you study moral philosophy, you study a lot of case studies, a lot of thought experiments, a lot of actual situations that are presented to people as moral quandaries. And you also study the several different approaches that philosophers have devised: you know, typically deontology, which is following certain rules of conduct, or consequentialism, or virtue ethics.

[00:14:29]

And there are a few other approaches. All of those, in order to be understood, just like in any other technical field, do require expertise. And not only do they require expertise in the general field; in fact, even within the field of ethics there are people who are experts on virtue ethics, for instance, or on consequentialism, and who would be able to tell you more, and to defend those positions better, than even a professional ethicist who is not a virtue ethicist or a consequentialist.

[00:14:54]

So to me, the whole idea that philosophers don't have expertise, in particular that moral philosophers have no expertise, sounds silly. And I've noticed this: as you know, I'm pretty new to this field; it's only been about a year and a half, really, that I've been a full-time philosopher. And I've noticed that a significant number of philosophers do have a tendency toward self-flagellation. They push their analysis and their skepticism about things to an extreme degree, which includes philosophy itself and what they themselves are doing.

[00:15:26]

I mean, philosophy is the only field that I know of which has, in fact, originated a field called metaphilosophy, which is philosophy about philosophy. I mean, we would expect no less. It's getting ridiculous. So I think that, again, the question of expertise here depends on what you mean by expertise. Can somebody have, you know, the ultimate, final say about specific moral questions?

[00:15:52]

No, I don't think so. But if by expertise you mean, you know, are there people out there who have thought about ethics much more than others, who know a lot of the technical literature, who know certain ideas and ways to think about ethical problems, then I think the answer is obviously yes.

[00:16:09]

Well, yeah, I think that is obviously true. But my sense with this question was that, you know, when you cite an expert on something, usually the implication is that what the person says is more likely to be true because they are an expert. Whereas in philosophy, and ethics is no exception,

[00:16:31]

there's so much disagreement, and there are certainly people who completely disagree with Kant. I mean, I don't know what the exact claim was that this person had tried to cite Kant on. But I don't understand, then, what the relationship of expertise is to the likelihood that you're actually right. Because it seems like, if you're going to cite Kant as an expert and imply that he's likely to be right about his claims, then, you know...

[00:16:56]

There are all these other philosophers who you could cite instead and get a totally different conclusion.

[00:17:00]

But that's because... that's a misunderstanding, and it's a very common one, unfortunately; a misunderstanding of the nature of philosophy. If you say, look, philosophers disagree on conclusions, you're assuming that the knowledge there consists of the conclusions, you know, of the results, the outcome of the process. And philosophy is not about the outcome. Philosophy is a way of thinking about things. So the expertise in philosophy is about methods, not about outcomes.

[00:17:29]

So an expert in moral thinking is somebody who is familiar with the ways in which people think morally, and familiar with the different approaches you can take to moral thinking. It's not a matter of how many times they get the moral conclusion right.

[00:17:43]

OK, so that's a different way of thinking about expertise entirely. Unlike in science, where in order to be a scientist you have to be an expert both on the methods and, presumably, on the results. I mean, if you're a molecular biologist and you don't know that DNA is a double helix, then you're a pretty bad molecular biologist. Right.

[00:18:01]

But it seems like it's not really fair to call someone a good reasoner if they get the wrong answer. And so even if we don't know who has the right answer, if people disagree on the answer, then it seems like at least some of them must be reasoning poorly.

[00:18:13]

No, we've talked about this before. There actually is research, even in computer science, that clearly shows that for complex problems you can have multiple, equally valid solutions, whichever solution you arrive at. They contradict each other.

[00:18:28]

Not necessarily. Ethical philosophers do contradict each other, and that's a fact, sometimes.

[00:18:33]

Sometimes they arrive at similar conclusions from very different perspectives. So you can have, you know, a wide variety of conclusions. And you can also reasonably disagree about things simply because the outcome is going to turn out different when you're starting from equally reasonable but different starting points. At that point, you have to go back and talk about the starting points, to talk about your axioms, for instance.

[00:19:00]

So maybe this is a good time to bring up one other question that came up. One of our commenters, Ian Pollock, asked... he was referencing a post that you wrote back in October, in the comment thread; sorry, the post was called The Limits of Reasonable Discourse. And you were arguing that rational people can disagree. And you compared an argument to basically a topographical map with peaks and valleys, where the peaks are like the best answers to the question or to the problem.

[00:19:32]

And so you said that in many cases there may not be a single highest peak, but a number of alternative strategies or answers that could be equally good.

[00:19:41]

So one of the commenters cited this economist named Robert Aumann, who had this theorem, Aumann's agreement theorem, showing that if two rational people start out with the same prior beliefs and look at the same evidence, they have to arrive at the same conclusion.

[00:20:02]

I mean, I'm paraphrasing it informally. And so they asked you how that relates to your claim that rational people can arrive at different conclusions.

[00:20:11]

And you said Aumann was wrong. And so the commenter was asking you to elaborate on that.

[00:20:15]

Oh, I'm not familiar with Aumann in particular. The reason I said he must be wrong is that there are actually similar problems in evolutionary biology, for instance, where you can easily show that natural selection, as a process that, you know, maximizes fitness locally, can in fact end up with very different solutions that are equally valid. I'm not sure that's analogous to rational thinking, though.

[00:20:41]

Yes, it is, because of the analogy.

[00:20:42]

Well, I'm not suggesting that natural selection is thinking, obviously, but it's analogous to it, meaning that you can start with different assumptions about things, or you can use the same assumptions but weigh them differently. And there may be reasons, valid reasons, why you're weighing certain assumptions more than others. So, yes, you can share the same set of assumptions; you can even agree that the reasoning is fine.

[00:21:08]

Right. So a lot of the disagreement in philosophy does come about because people try to show that somebody's reasoning was, in fact, flawed. Right. And that often gets pretty complicated and pretty technical, and people often do succeed. That's one of the reasons I often maintain, for instance, that contrary to popular opinion, philosophy does make progress: because today, for instance, you couldn't be a consequentialist in the same way in which John Stuart Mill was a consequentialist.

[00:21:33]

It's just not tenable, because plenty of objections have been raised to the original versions of utilitarianism, which is a form of consequentialism. And you simply cannot be a 21st-century utilitarian in the same way in which John Stuart Mill or Jeremy Bentham was a utilitarian, because they did not have to face a certain number of cogent objections that people have agreed are, in fact, valid objections. Now, that doesn't mean that people have agreed that utilitarianism is a bad idea.

[00:21:57]

They've just agreed that the original version faced well-demonstrated, well-established objections, and then people had to come up with counterarguments. Now, that's different from what we were talking about in the last few minutes, but it is an example of how people can, in fact, reasonably disagree and show that somebody else was wrong. Now, what about the situation where people reasonably disagree and there is no way to determine whether somebody is right or wrong at that particular point?

[00:22:26]

That's what I was arguing: that there are a lot of sufficiently complex situations where, you know, you look at your assumptions and they seem reasonable, and you look at the premises of an argument, for instance, or of two different arguments reaching different conclusions.

[00:22:41]

You look at the premises and they seem reasonable. You look at the structure of the argument and that seems sound, and the whole thing seems valid, and yet you reach different conclusions. Well, the answer there could be that we have missed something somewhere. But that doesn't mean that either one of those two people was unreasonable or irrational; it just means that they put forth the best arguments they could and simply missed something, in which case there is something to be discovered.

[00:23:08]

Maybe there was a hidden assumption, for instance, that was never made explicit, which would actually put a hole or a dent in the argument. Or perhaps the argument was intricate enough that it's not easy to figure out where the mistake is.

[00:23:22]

I mean, look, that means one of them was reasoning incorrectly, though, right? If the... well, I mean, if you're saying that, starting from exactly the same premises and using exactly the same argument, you have to reach the same conclusion, then it can't have been exactly the same argument with the same evidence in front of you.

[00:23:37]

Oh, that I don't think is true. You know, you can both have the same evidence and both reason correctly, because you can weigh the evidence differently.

[00:23:44]

But then what's your justification for weighing it differently than someone else does? Right.

[00:23:50]

You can have a justification, but the justification may turn, for instance, on assumptions about values, and if I can't show that my values are better than yours, then we're going to reach different conclusions.

[00:24:02]

OK, so we're talking about questions of values, then. Right.

[00:24:07]

We're talking about morality, so, yes, we're talking about values. Well, yeah, although it seems like ethical philosophers are actually claiming that they're objectively right about what's right or wrong. They're not just saying, this is my value. That's a different question.

[00:24:24]

So, one can be objectively right or wrong in the sense that if, in fact, your assumptions are correct, or if one buys into your assumptions, and if your reasoning was correct, then you are, in fact, objectively right. But, as I said, one can question your assumptions and say, well, I'm going to reject that assumption, which means that although your reasoning is sound, your conclusions can still be rejected. It's very similar, as you know, I've made this analogy several times,

[00:24:49]

It's very similar, although not identical for sure.

[00:24:52]

with mathematics. In mathematics, you start with certain axioms, which usually you don't defend; you just assume that certain things are right, and then you derive the consequences from those assumptions. Now, you can make mistakes at several points. You can make a mistake in the actual reasoning, in the mathematical proof, and sometimes that's not easy to find, because some of these proofs are sophisticated. Mathematical proofs can be, you know, tens or hundreds of pages long; for instance, the proof of Fermat's Last Theorem from several years ago.

[00:25:20]

As I understand it, it's almost a thousand pages of mathematical proof; good luck finding a single mistake, if there is one in there. But the other thing is, somebody could say, well, I'm going to use a different postulate, a different assumption, a different set of axioms to start, and I'm going to see where that leads. Now, you can't say that if you change your assumptions, one of those two people is wrong.

[00:25:46]

All you can say is that, assuming they reasoned correctly, they reached different conclusions because they started from different assumptions; they can both be right. Then it's a question of, well, which set of assumptions are you more willing to buy? Now, some sets of assumptions can be defended rationally. Others can just be assumed as a matter of provisional stipulation, or because they feel intuitively correct. And I understand that that's something that doesn't satisfy the sort of hyper-rationalistic mind of some of our listeners.

[00:26:19]

But that's a fact. I mean, even mathematicians sometimes start out with assumptions that they cannot justify. They just go, well, this seems reasonable, and they derive what they can derive.

[00:26:30]

OK, I'm sure this will come up at some point, or multiple points, in future podcasts. But let's move on for now.

[00:26:37]

Let's take a question from Richard the Bear, who asked whether... ah, these skeptics' names.

[00:26:45]

Yeah, go ahead. It's especially hard when they include odd punctuation in their names. I don't know how to even pronounce some of these names. But Richard the Bear, that I can pronounce.

[00:26:53]

So he asked if skepticism is really just another name for intelligence. Well, OK, so, I mean, intelligence taken literally is obviously too broad, because it includes things like creativity and, you know, mathematical skill and emotional savvy.

[00:27:12]

But I think what Richard was referring to was rationality and critical thinking. Is that really all we mean by skepticism?

[00:27:20]

And, you know, this is actually at the heart of a big controversy in the skeptic movement, I think, because some people do think of skepticism as just being another word for rational, critical thinking.

[00:27:34]

And a lot of these people would say that this logically leads to the conclusion that a God is either logically incoherent or extremely unlikely.

[00:27:44]

And so they would say that if you call yourself a skeptic but you're not also an atheist, then you're not being consistent in applying your skepticism. And then there's another camp that says no; they define skepticism more narrowly, as a method of scientific inquiry, so it's only applicable to empirically testable claims. And this camp says, you know, if you can't test for the existence of a God, then it doesn't fall under the heading of skepticism.

[00:28:11]

So then it's not inconsistent to be both a skeptic and a believer.

[00:28:14]

So that's why the question of the definition really matters to people. Correct.

[00:28:18]

And I think, actually, that surprisingly, although I'm sure plenty of listeners are going to disagree, there's an easy solution to this particular problem, because the two definitions of skeptic... I don't like to talk about definitions, because "definition" sounds too much like necessary and sufficient conditions, a very sharp thing. But let's say two concepts of skepticism. Those two concepts of skepticism actually have different historical roots. And I think that as long as we understand which skepticism we're talking about, and it would be useful to distinguish them in terminology, we could provisionally call one philosophical skepticism and the other scientific skepticism, for instance.

[00:28:56]

But more importantly, historically they're different. The first type of skepticism goes back at the very least to David Hume, and it's a general attitude of using reason as well as facts to evaluate arguments. Hume is most famous, among other things, for his fork. Hume's fork says that whenever you read a book, ask yourself two questions. First: is what's in the book the result of calculations or mathematical proofs?

[00:29:27]

If the answer is no, ask yourself if it is the result of observation or experiment. If the answer to the second question is also no, then throw it out, because it's useless.

[00:29:36]

I think it's "cast it into the flames," which is far more dramatic.

[00:29:40]

It stuck in my mind. Exactly. So David Hume essentially gives you this broad definition of skepticism, where you use both your general reasoning abilities and evidence. Now, the question of God falls under that skepticism, because although you cannot bring evidence to bear on some aspects, at least, of the question of God, you can certainly bring reason to bear on it.

[00:30:02]

If the concept of God is really coherent, as theologians and apologists think it is, then you ought to be able to defend that concept in rational terms. And the very strong philosophical criticisms of the apologetic tradition in theology do exactly that: they are skeptical arguments based not on factual evidence but on reasoning about the concept of God. So in that sense of skepticism, what I'm now calling Humean, or philosophical, skepticism, I think the answer is yes, religion falls into that category.

[00:30:36]

Now, more recently, post-World War II, in particular in the 60s and 70s in the United States, the so-called skeptical movement was the result of a small number of people who had been very influential, first and foremost among them Paul Kurtz, who established a number of journals and organizations in the field, like what used to be called the Committee for the Scientific Investigation of Claims of the Paranormal, or CSICOP. "Psi cop," like the name.

[00:31:06]

You get it. Which then, of course, changed.

[00:31:09]

They changed the name to CSI, the Committee for Skeptical Inquiry. Those are the people who publish Skeptical Inquirer, for which I wrote a regular column. I know those people. They're very smart.

[00:31:22]

The editors of Skeptical Inquirer are very careful about the kind of stuff they publish, but what they practice is largely scientific skepticism, because that movement came out of systematic attempts at debunking the paranormal, UFOs, astrology; in other words, everything that makes empirical claims. So if you are that kind of skeptic, then you can actually make an argument that at least the more esoteric versions of God, not the creationist 6,000-year-old-Earth kind of thing, really are outside of that purview.

[00:31:56]

So I don't think there is a contradiction. It's just that people are using the word skepticism in two fairly distinct, historically identifiable ways. And if we just refer to, you know, Kurtz-type skepticism versus Hume-type skepticism, then we'll be fine. There will be no disagreement.

[00:32:13]

Well, but do they give a justification for why they've excluded some claims from skeptical inquiry and not others? I mean, is it just a practical thing: we just want to focus our energy and attention on these, you know, pseudoscientific and paranormal claims, and not on the others?

[00:32:32]

That's right.

[00:32:33]

Although not everybody who writes for Skeptical Inquirer agrees, the original position, from Paul Kurtz himself, is that these esoteric versions of God in fact have nothing to do with empirical evidence. That is, if you make a claim, for instance, about the existence of evil being made necessary in the universe by the necessity of free will, let's say something like that: well, empirical evidence bears not at all on that. We all know that evil exists, however you define evil, and we all think we know what free will means.

[00:33:05]

So that becomes entirely a philosophical argument. It's a matter of, OK, first of all, let's unpack the notion of free will: what do you mean by free will? There are some philosophers who have famously argued that free will is in fact an intrinsically incoherent concept, so you cannot use it as a defense for evil. And there is a beautiful classic attack on the free will defense that was made by J. L. Mackie back in the 60s, which is still today, I think, the quintessential argument, or set of arguments, that demolishes the free will defense.

[00:33:41]

And it's entirely philosophical, in no way empirical. It's entirely a matter of, well, let's see what follows logically from your ideas about free will and evil and so on and so forth. It's a beautiful paper, but it's a philosophical approach, not an empirical one. So those people would argue that, yeah, if you're talking about those kinds of problems, then empirical evidence doesn't matter; it doesn't really bear on the question, and therefore you're not doing Kurtz-type skepticism.

[00:34:06]

You're doing Hume-type skepticism.

[00:34:09]

Yeah. Although, I mean, I basically agree with you, but it still seems to me like there's this false demarcation between the scientific kind of skepticism and the Humean kind of skepticism, just because you have to use reasoning in any kind of scientific inquiry, just basic principles of reasoning, like not positing some new hypothesis that has no explanatory power and just adds unnecessary complexity.

[00:34:35]

Like, you know, let's take a really classic skeptic activity: investigating an allegedly haunted house.

[00:34:42]

And the owners of the house say that they hear moaning at night. And so you investigate, and you find that, well, actually there are pipes in the house, and when it's cold out they produce a moaning sound. Well, it's possible that there are actually ghosts, and the existence of the pipes is just a coincidence, and it's still really the ghosts causing the moaning. But we have no reason to suppose that at all. We have this perfectly adequate explanation, and we don't need to posit this new entity that we have no evidence for. So that's a perfectly good example of scientific skepticism.

[00:35:13]

I mean, there you just showed on empirical grounds that you have a better hypothesis. But let's go back to what you said a second ago, because that is also a very common argument, which I'm inclined not to buy. When you say, well, you know, scientists also use reasoning: well, yeah, but first of all, it's a different kind of reasoning. Scientific reasoning is actually significantly different from philosophical reasoning, just as both of them are different from, say, mathematical reasoning.

[00:35:38]

So just because we're using reason, and of course we're all using reason to some extent, that doesn't mean that we're doing the same thing, or that we're deploying the same kinds of approaches to reason. But the other thing is, you would never say...

[00:35:49]

I assume, well, maybe you would, I don't know.

[00:35:53]

I'm about to tell you what I think you would never say, and then you can tell me. Of course I would! I'm suggesting that you would never say that just because a mathematician and a scientist, say a biologist, both use reason, then the biologist is doing mathematics.

[00:36:10]

Right, they are different kinds of activities. Right, right. So the fact that they have reasoning in common doesn't show that they are the same thing. The same kind of argument, to me, applies to the difference between philosophy and science. Now, I would agree with you that there is no such thing as a sharp demarcation between, say, science and philosophy.

[00:36:23]

OK, I used the word demarcation probably unfairly there. I just meant that for the kinds of problems that the classic hands-off-religion skeptics are referring to, like God, or, I don't know, take the young-earth creationists who say that, yes, the Earth may look like it was created a long time ago, but that's just because God created the fossils to trick us into thinking it was more than a few thousand years old: it seems like you can use the same kind of reasoning to evaluate that claim as you use to evaluate the ghost claim, namely that we have no reason to suppose it.

[00:37:02]

And so the skeptic, for the same reason, wouldn't say to the ghost claim, well, I guess you're right, it could be a ghost, I can't prove it's not. They would apply the same kind of reasoning to the young-earth creationist's Last Thursdayism. Except that in the case of Last Thursdayism, unlike the case of the ghost, there are also logical implications of that claim. Right. So if you say, OK, look, the Earth looks like it's old, but in fact it's young...

[00:37:28]

It was created last Thursday. Yes. For all practical purposes, you put a stop to any further scientific investigation, because the scientist is simply going to shrug his shoulders and say, you know what, you're crazy. To me, it looks like it's billions of years old, so it probably is billions of years old.

[00:37:42]

And I don't know what you're talking about.

[00:37:43]

That ends the discussion of it. A theologically inclined philosopher would go further and say, well, come on, my friend, do you realize what kinds of theological and logical implications derive from this idea that God is going around tricking us and testing our faith by putting fossils in just the right places? There are consequences there. There's a whole area of theological reasoning that falls to pieces if, in fact, that were the case.

[00:38:10]

So you can push that inquiry further and show that that claim, the last defense against the empirical data, doesn't actually get you what the creationist thinks he's going to get out of it. But your example is a good one, because that's a borderline case: it's clearly an example where the philosophy and the science really converge, from different perspectives, on the same thing.

[00:38:35]

But the example I gave you earlier, when people invoke the free will defense to justify evil, seems to me to have really nothing at all to do with empirical evidence. So that's at one extreme. And then, of course, we've got the other extreme: creationists who don't think the Earth was created last Thursday and merely looks old. They really think it's young, and they really think the evidence shows it's young, and those claims are dealt with very nicely by the science itself.

[00:39:03]

So there's a continuum, clearly. Right. The demarcation is a long, fuzzy line, not a sharp one.

[00:39:09]

OK, OK, fair enough. Let's move on to a question from Harry S. Pharisee, who asked about our overall views on feminist philosophy. It was not more specific than that.

[00:39:23]

So I don't know if you are at all familiar with feminist philosophy. I'm a little bit familiar. I mean, it tends to overlap a lot with postmodernist philosophy.

[00:39:33]

So I'm already severely biased against it, even just stylistically, because of the incredibly obtuse and jargon-filled language that they write in.

[00:39:45]

And a number of non sequiturs, too. Oh yeah. No, I was confining myself just to the language, but you're right, the content is not really any better. So I was looking up some of the most famous feminist philosophers before the podcast, just to refresh my memory, and one of them is named Judith Butler, whom I had heard of. She's pretty well established.

[00:40:06]

And she won first prize, I read today, in a bad-writing competition.

[00:40:13]

And I just want to read you her prize-winning sentence, because I think it typifies the kind of writing and thinking that I've encountered from feminist philosophers: "The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways...

[00:40:31]

...to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power."

[00:40:56]

Wow.

[00:40:58]

So, yes, I think there's a lot of handwaving and intentional obfuscation to give the appearance of complexity and profundity when there are really no well-formed ideas there.

[00:41:09]

Right, and there are several examples of those.

[00:41:11]

That is not to say that all feminist philosophy is without merit. Quite right.

[00:41:18]

So there are some interesting points. When feminist philosophy deals with women's issues, where it essentially borders on the sociology and psychology of gender, I think they make reasonable contributions, and they need to be taken seriously. I see.

[00:41:38]

I think they are dealing with more empirical questions than most philosophers: sociological questions, questions about the psychology of people dealing with other people and how our societies are structured. But what bothers me, and maybe you have examples, because I'm not that familiar with the field, is that they don't tend to treat these empirical questions as empirical questions. They often seem to approach them almost as literary questions.

[00:42:03]

In some cases that's true. But you have to remember, and I don't want to sound too much like a defender of feminist philosophy, and I'm certainly no defender of certain kinds of it, like feminist epistemology, for instance, the idea that women somehow have a different approach to truth than men; I don't buy that for a minute. But there are philosophical analyses, meaning conceptual analyses, of social issues.

[00:42:26]

Take the concept of gender, for instance. Yes, these analyses are informed by factual knowledge, but that's true for almost everything in philosophy: almost everything is informed by factual knowledge, but it's not determined by factual knowledge. The philosopher of gender, for instance, is interested in the way people conceptualize gender as a matter of fact, versus the way people ought to conceptualize gender if you start from certain assumptions about the ethics of interpersonal relationships, and so on and so forth.

[00:42:53]

So that kind of work, I think, does have value. The stuff that bothers me is when feminist philosophers get into epistemology, philosophy of science, and so on.

[00:43:02]

So my favorite example is Sandra Harding, an American philosopher and feminist who famously said, several years ago, that feminists have done nothing short of reinventing science. Wow. That's a pretty big claim. So about 10 years later, I believe, she was asked: so what was the major discovery that this new feminist science has come up with? Which is fair.

[00:43:26]

And the answer was, well, we now know that menopause is not a disease. That's right. And the response at that point has to be: give me a break. First of all, who ever thought it was a disease? Yes, some misguided men of the Victorian era, probably. But that's not a scientific discovery.

[00:43:45]

There are so many things wrong with that, we don't even need to get into much detail. But that is the kind of thing that gives a really bad reputation to a lot of postmodernism and a lot of feminist philosophy.

[00:43:59]

It's not all like that.

[00:44:01]

But when they do get into that sort of issue, they really lose credibility. Now, I can give you the name of a feminist philosopher whom I do hold in good regard, and that's Helen Longino.

[00:44:13]

She has published several books on science as a social enterprise, and on her ideas about how science works as a mixed kind of discipline.

[00:44:25]

One that does deal, of course, with objective reality out there. She does believe in objective reality; she's not one of those people who claim that everything is socially constructed.

[00:44:33]

But she says that's not nearly enough of an approach to understand how science actually works and what science is. There is a large sociological component. There are a lot of value judgments embedded in the culture of science. There are a lot of things that help explain, for instance, why scientists are interested in certain questions rather than others. If you ask your average scientist why they think a particular question is interesting, the answer will be something along the lines of "it's intrinsically interesting," meaning that it's self-evident.

[00:45:05]

And Longino and several other philosophers of science of a feminist or sociological or historical bent would show that that's not the case. You can actually come up with very nice, reasonable explanations for why we give priority to certain kinds of questions or certain kinds of research, and those priorities are determined by the sociology, psychology, and so on and so forth of the scientific enterprise itself. So those are really valuable contributions to understanding science.

[00:45:33]

She sounds like a philosopher of science who maybe also is a feminist, but that doesn't sound like feminist philosophy.

[00:45:38]

Well, she's a feminist philosopher who also does philosophy of science. She does bring gender issues into her analysis of science, as in her explorations of why certain fields of science, in our culture, have fewer women than other fields, not just as a matter of fact but as a matter of cultural motivations, and what the contributions of the two genders are.

[00:46:02]

In terms of the dynamics of the social enterprise of science, all those things are very reasonable and very good contributions. And she does come at those issues from a feminist perspective. But she doesn't go around saying silly things like "feminists have reinvented science" or anything like that.

[00:46:22]

I can one up your silly thing. Go ahead.

[00:46:24]

And even sillier things, from a feminist philosopher named, and I'm not sure if I'm pronouncing it correctly, Luce Irigaray, who argued that the equation E = mc² is sexist because it privileges the speed of light over other speeds that are vitally necessary to us.

[00:46:45]

She said that. So I guess, I don't know, I guess she's conceiving of the speed of light as masculine. And actually, I'm not even going to try to explain this, because I have no idea what it means. It's not even clear:

[00:46:54]

is she claiming that the law is false and scientists only formulated it because they're sexist, or is she claiming that sexism is just built into the physical nature of the universe and its fundamental laws? I don't even know. But it's sort of the literary-analysis approach that I'm talking about: they think they can say something about reality based on their analysis of the meanings of what they perceive to be the associations of words.

[00:47:17]

Well, the funny thing is that they often don't even understand the words themselves. I mean, there is a famous essay on relativity by Bruno Latour, who is obviously not a feminist philosopher, but he is a postmodernist philosopher. And if you read that essay, where Latour criticizes Einstein for his tendency to keep changing frames of reference, making things dependent on the frame of reference, and all that sort of stuff...

[00:47:42]

You know, he was talking about Einstein's famous thought experiment about the speed of light.

[00:47:47]

It's clear that you can come to only one of two conclusions: either Bruno Latour was trying to be funny and just built a parody of an analysis of relativity, or he simply does not understand relativity. And I am very much inclined to bet on the second option.

[00:48:07]

All right. Let's move on to a question from Yanis, who asks: "There are a number of websites where there is a predominance in the use of Bayesian reasoning. While reading them, I have the feeling that while in some aspects it's useful, sometimes they overextend and try to Bayesianize the whole structure of reality. What is your take on this particular type of philosophy?" So let me try to explain what Bayesian reasoning is.

[00:48:34]

Bayes' rule is a very uncontroversial, basic statistical theorem: the probability of your hypothesis, given a piece of evidence, equals the probability of that evidence given the hypothesis, times the overall probability of the hypothesis, divided by the overall probability of the evidence.
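Written out symbolically (the notation is added here for clarity; it is not in the episode), the rule as stated is:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
\qquad
P(E) \;=\; P(E \mid H)\,P(H) \;+\; P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)
```

The second identity, the law of total probability, is how the denominator is usually computed when the hypothesis is a simple true-or-false claim.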

[00:48:52]

What that means for reasoning in your everyday life is just that you start out with some prior degree of confidence in a given belief.

[00:49:02]

And then over time you encounter new evidence. And when you encounter a new piece of evidence that's relevant to that belief, you should update your degree of confidence in that belief based on how likely that evidence would be if your belief were true, relative to how likely that evidence would be if your belief were false.

[00:49:19]

So if the evidence would have been more likely in a world where your belief was true than in a world where your belief was false, then that should make you more confident in your belief. That's correct.
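The updating procedure described here can be sketched in a few lines of Python; the function name and the numbers are hypothetical, chosen only for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior P(H) and the two likelihoods of the evidence."""
    # Overall probability of the evidence (law of total probability).
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start at 50/50; observe evidence three times likelier if the belief is true.
print(round(bayes_update(0.5, 0.6, 0.2), 2))  # 0.75
```

Since the evidence was three times more probable under the belief than against it, confidence rises from 0.5 to 0.75, exactly the directional pull Julia describes.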

[00:49:29]

OK, now, the fact of the matter is that Bayes' theorem is, as you say, an uncontroversial piece of probability theory.

[00:49:44]

It is now applied in a wide variety of statistical settings. But I think what the commenter was referring to was more of a generalized Bayesianism. There is such a thing as Bayesianism in philosophy, particularly philosophy of science, where people have suggested that the Bayesian framework is not only a useful way, as you say, to proportion one's belief to the evidence and constantly update it.

[00:50:13]

It's also, they suggest, a good model for how science itself works. So there is a Bayesian philosophy of science, which says that science works as a giant, distributed Bayesian calculator.

[00:50:24]

And that's how scientists, on this view, constantly keep evaluating, Bayesian-style, different hypotheses and updating. Now, as for the claim that science actually does work this way, rather than merely that it should, you might think it seems probably false, but actually I'm not so convinced, because there is evidence from neurobiology, for instance, that the human brain, at least under certain conditions, works very much like an automatic Bayesian calculator. Now, that doesn't mean that you come up with objective priors and that sort of stuff.

[00:50:51]

So the priors are the prior probabilities of certain hypotheses, right?

[00:50:58]

Right. Bayesian analysis does deal with non-objective priors; there are ways to do that. Now, I'm not necessarily defending that view, but there are compelling arguments for it: from the study of the history of science, you can actually reconstruct several episodes through the Bayesian framework, and it works very nicely. The major objection, from a philosophical perspective, is that Bayesianism accounts for certain aspects of science, but not everything.

[00:51:26]

In particular, for instance, Bayesianism doesn't account for the so-called context of discovery, that is, how people come up with new ideas.

[00:51:37]

It accounts very well for how people test new ideas once they are out there, what is called the context of verification. But the context of discovery is a different thing, which is not covered at all by Bayesianism. So at best it's an incomplete theory within philosophy of science, but it is a pretty powerful way of looking at things, particularly, as I said, because the human brain also seems to work pretty much the same way.

[00:52:05]

I would also note that Carl Sagan famously rephrased Hume's argument on miracles, summarizing it as "extraordinary claims require extraordinary evidence." Well, that claim can be understood from a Bayesian perspective: you proportion your belief to the evidence, and you keep updating the belief, not the other way around.

[00:52:28]

You proportion your belief to the evidence, and you keep updating it as new evidence comes in. That is a nice way to summarize Hume's argument and Carl Sagan's rephrasing of it. It does have a certain degree of appeal; whether it is an overall, overarching theory of science, I doubt it.
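Sagan's slogan falls out of the arithmetic: with a very low prior, even evidence that strongly favors a claim leaves the posterior small. A hypothetical illustration (the odds form used here is algebraically equivalent to Bayes' rule; the numbers are made up):

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability via the odds form of Bayes' rule.

    likelihood_ratio is P(E|H) / P(E|not H).
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# An "extraordinary claim": prior of one in a million.
# Evidence 100x likelier if the claim is true barely moves it:
print(round(posterior(1e-6, 100), 6))  # 0.0001
```

Ordinary evidence lifts the claim from one-in-a-million to only about one-in-ten-thousand; to make the claim probable, the evidence itself would have to be extraordinarily more likely under the claim than under its negation.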

[00:52:47]

Yeah. You know, when I've encountered discussions of Bayesian reasoning, I mean, I find it mostly incontestable, but there are a few philosophical problems with it, setting aside how it's applied to describing science, which I wasn't actually aware of.

[00:53:03]

But one of the ones I find most interesting is that with Bayesian reasoning, you're not supposed to ever have zero or 100 percent confidence in a belief, because that would mean no evidence could ever change your degree of confidence. But there are some claims that a lot of people want to say are not uncertain at all; they're just certain, 100 percent true or 100 percent false. Like "one plus one equals two." What degree of confidence do you assign to "one plus one equals two"? One plus one equals two is an interesting example.

[00:53:32]

But notice that you got that from mathematics. There, it's clear that, yes, we do in fact have certainty about that belief.

[00:53:41]

In science, of course, it rarely if ever happens that we have certainty about something. We have a very, very high degree of confidence about something, but not certainty. And a classic example there is the evolution of our ideas about how the solar system works. Copernicus made a big jump by putting the sun at the center. And then, of course, Kepler realized that the sun is not really at the center.

[00:54:05]

It occupies one of the foci of the ellipses that describe the orbits of the planets.

[00:54:09]

Then for a few hundred years we figured, OK, well, this is done, we're done with this thing, we understood it. And then Einstein came along and showed that, because of relativistic gravitational effects, the orbits are actually slightly off, and the planets keep bouncing around; they're not exactly occupying the foci. So who knows: that's the current wisdom, the currently accepted version of the idea. And of course, I'm certainly not claiming that there is no progress in those areas. There's clearly

[00:54:32]

a sense in which we have made progress in that area, but we haven't had a definitive answer that lasted more than, you know, sometimes decades, sometimes a couple of centuries.

[00:54:41]

So those are the kinds of scientific beliefs that are still open to revision, because the posteriors never reach zero or 100 percent. In mathematics,

[00:54:55]

things are different, although even in mathematics there are some conjectures that are not in fact demonstrated by proof, and yet that most mathematicians are inclined to believe: the Goldbach conjecture, for instance, about the properties of prime numbers.

[00:55:09]

And the idea there is that there's no proof of the conjecture, but the consensus among mathematicians seems to be that the conjecture is in fact true. So there is a high degree of belief, but not certainty. I think we've been pretty esoteric today in our Q&A, so maybe we should quickly wrap up with a much more practical question. This one is also from Eden Park, who asks: do either of you have any useful anti-akrasia strategies or insights?

[00:55:35]

First, explain what akrasia is. OK, so akrasia is a word from ancient Greek that basically means weakness of the will, or acting against what you consider to be your own best interest. So if you believe you'll be happier in the long run if you stick to your diet, but nevertheless you don't, then that's akrasia. Right.

[00:55:51]

Do we have any strategies? I have at least one.

[00:55:53]

Yeah, well, it's pretty obvious, actually; it comes out of basic cognitive science. So let's take your own example. Suppose that I know that eating chocolate ice cream every night is not a good idea for me, in terms of long-term health, although it gives me immediate pleasure, of course, which is why there is the akratic tendency.

[00:56:17]

Well, as a matter of strategy, it's much easier to resist temptation once in a while than to resist it every day, or every minute. So what that means is that if you go to the supermarket once a week, for instance, you should make the decision then and there not to buy the ice cream, because then you only need to exercise your willpower once.

[00:56:38]

And you should shop when you are not hungry. If, on the other hand, you make the purchase on the understanding with yourself that, well, you know, I'm going to be good about this, I'm going to resist, it's going to be in the refrigerator but I'm not going to touch it, I guarantee you that your akrasia eventually will take over, and you're going to eat significantly more ice cream than you probably want to.

[00:56:57]

So there are actually ways of constraining oneself in terms of choices, or, for instance, asking friends and colleagues to reinforce certain behaviors. You know, that's where the whole idea of having buddies when you go on diets or exercise programs comes from. Right.

[00:57:17]

The thing is, you shouldn't trust your willpower: your willpower is limited. And this is not just armchair theorizing; there is pretty good evidence from the cognitive sciences that willpower is limited. For instance, if I tempted you right now with a cookie, and you said, no, I'd better not have the cookie, but at the same time I asked you to do a complex mathematical operation in your head, you'd be much more likely to eat the cookie.

[00:57:42]

Yeah, this is how my cookie jar ends up empty by the time we've got the same goal. Yeah, exactly. Exactly.

[00:57:49]

So I had an insight recently. I realized that my big weakness, and I suspect this is true of many people, is the slippery slope. Any time I can keep procrastinating just a little bit more, or eating just a little bit more, and the consequence is that I'm just a little bit worse off, I'm tempted to do it, because the consequence is small.

[00:58:10]

And so then I end up like the frog in the pot of water: the temperature goes up in little increments, so he stays in, and then he gets boiled. So what I need, I realized, in an anti-akrasia system is a way to transform that continuous relationship between my slacking and the consequences into a discrete relationship.

[00:58:29]

So instead of each additional little bit of slacking resulting in a little consequence, I need each additional little bit of slacking to result in a big consequence, so that I'm not tempted to do it. So I've started entering into bets with my friends, sort of like you were saying, except a little more adversarial, of course, to enforce the big consequences. And I'll set a rule for myself, not something like "I will finish this project by the end of the week,"

[00:58:53]

but rules where there's that discrete relationship. The rule might be something like "I will work continuously for the next two hours without checking my email." And then if I do slip up even once and check my email, I've lost the bet for the day. So then I'm no longer saying to myself, oh, I'll just check my email, it'll only take five minutes, what's the harm? Because now, if I do that, I've instantly lost the bet, and it's a big consequence.

[00:59:15]

The key, though, is to find friends who are real hard-asses, because some of my friends were too nice. I tried it with them, and then I would tell them that I slipped up and they'd say, "Oh, well, you tried." I'll volunteer, always.

[00:59:25]

OK, I don't trust you. OK. Now, the general idea is a very good one: to think in terms of consequences, maximizing consequences, drawing sharper lines than might seem necessary. Let's go back, for instance, to an example with food. You know, you go out to a restaurant in most places in the United States and you're overwhelmed by huge amounts of food. You order a dish of pasta, for instance, and they bring you the equivalent of four or five portions instead of one or two.

[00:59:56]

Well, there's a couple of things you can do at that point. You can say, all right, I'm going to take a look at the amount of, say, pasta that is on my plate and I'm going to eat about a third of it or something, and then I'm going to leave the rest. That's not going to work, because you keep looking at the thing. You're having a conversation, maybe you're having a nice glass of wine, and you slip easily past the one third.

[01:00:18]

But if, right at the beginning of the meal, you actually separate out the third that you want to eat and set aside the two thirds on a different plate, that's much easier, because now, once you're done with the first third, you have to make an extra effort, an effort which is visible to everybody else at the table.

[01:00:37]

Right. So that helps. The research in cognitive science is actually rich with these kinds of little suggestions on how to make your life either better or more miserable, depending on your point of view.

[01:00:50]

Yeah, actually, I should amend my advice to say that you don't necessarily want the consequence to be too horrible, because then you're going to be tempted to lie about it. I mean, there's some honor system involved, right? Like, my friend can't stand over me all the time. So the consequence that I had set up with this one friend of mine was that if I lost the bet for that day, he got to choose my Facebook status for the next day.

[01:01:12]

And he's really fiendish. So when I lost one time, he picked this Facebook status for me.

[01:01:21]

I had to quote The Secret, OK, The Secret, on my Facebook page, for all of my twelve hundred some friends. That ruined your reputation?

[01:01:30]

Yeah, I had this whole thread going of people being like, what? What is this? I don't understand. Why are you quoting this? Oh, and I had to misattribute it to Feynman; that was the other embarrassing thing that I did. Yes.

[01:01:42]

Oh, and I wasn't allowed to tell anyone that it was a bet.

[01:01:44]

So maybe I'm breaking that rule now. All right. So that, I suppose, concludes Massimo and Julia's anti-akrasia guide. And we're all out of time.

[01:01:54]

So that also concludes the rationally speaking podcast. Join us next time for more explorations on the borderlands between reason and nonsense.

[01:02:09]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Carlin and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.