[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Hello and welcome to the Rationally Speaking podcast, where we explore the borderlands between reason and nonsense. I'm your host, Massimo Pigliucci, and here is my co-host, Julia Galef. Julia, what are we talking about today?

[00:00:46]

Massimo, we are back again with our live audience at the Jefferson Market Library in the heart of Greenwich Village.

[00:00:55]

Oh, yeah. Thank you. Thank you. So we're back for our second live episode.

[00:01:04]

And this time we're going to do a full 45-minute episode of just Q&A with this audience.

[00:01:11]

Very, very smart — especially the cute ones. I was thinking especially the smart ones, but — well, that's the split. It's an even split.

[00:01:18]

So usually on the podcast we solicit questions from the commenters on the Rationally Speaking blog, and invite listeners to send us their questions, and then we read them aloud and answer them on the air. So this is going to be the first time that we actually have the questions asked live. I hope that you've all brought yours. You mean we don't read their minds on the air?

[00:01:44]

No, that study was debunked. Sorry. So, if you haven't listened to the show: the questions can really be about anything, but usually people ask us about rationality or skepticism or science — or, really, anything that tickles your fancy, and we'll find some way to apply rationality to it.

[00:02:06]

So, first question. I just wanted to ask the panel what they thought of the scientific credentials of economics. It's often referred to as the dismal science, and particularly at the level of macroeconomic policy it seems like there's no real consensus about the appropriate policies that should be pursued — Europe and America are now taking very different courses. Yet, on the other hand, it does seem to be a very theoretical discipline. So I just wondered if the panel had anything to say about the hardness or softness of economics as a science.

[00:02:33]

I don't think we put economics on our hard-to-soft spectrum, but it would have been somewhere to the left of psychology, though not off the charts. But it's a good question, because economics is certainly a social science and, as such, definitely a soft science. Now, some people would argue it's not a science at all — I think that's going too far. As you probably know, there are two major approaches to economics research and theory these days.

[00:03:01]

One is the standard classical approach, where people produce mathematical models of how economic systems should work. Those models are based on a certain number of assumptions, one of the most important of which is that we are all rational agents with essentially perfect understanding of, and access to, knowledge, and that sort of stuff. Now, when was the last time you saw a perfectly rational agent with perfect access to knowledge? Let me know. Which may explain a lot about why econometric models tend not to work particularly well.

[00:03:33]

I mean, the math is fine; it's just that when you apply it to real-world situations you get the problems. And of course the whole point of economics is that we want to apply it to real-world situations — it is not a purely theoretical exercise. Now, the second way of doing economics — which, as far as I understand, is still a minority position within the field but is gaining popularity — is essentially a behavioral version of economics, where economists borrow from other social sciences, particularly from psychology, and try to introduce more variable, more human-like behavior into their models and see what happens under those conditions.

[00:04:17]

Now, the problem with that — and I am not an expert in the area, so I'm not going to go much further than this — is that the models become more realistic and, at the same time, the margins of error skyrocket. In other words, the model turns into something like any other model in psychology or social science, or even in biology: once you build much more realistic behaviors into your agents, the behavior itself becomes much less predictable and the margins of error get large.

[00:04:51]

So it's a tradeoff. You can have perfectly fine models that never work because they make completely unrealistic assumptions, or you can have much more realistic models that, however, only work in a sort of statistical framework, over a broad range of parameters.
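(A toy sketch of that tradeoff, with invented numbers — not from the episode: the same simple model yields one exact prediction under perfect rationality, but a whole spread of predictions once human-like noise is injected.)

```python
import random

random.seed(0)

def equilibrium_price(demand, supply):
    """Toy linear model: the price the theory predicts exactly."""
    return 10 + (demand - supply) / 2

def behavioral_price(demand, supply, noise=2.0):
    """Same model, but agents misperceive market conditions."""
    return equilibrium_price(demand + random.gauss(0, noise),
                             supply + random.gauss(0, noise))

rational = equilibrium_price(40, 30)                       # one exact number
noisy = [behavioral_price(40, 30) for _ in range(10_000)]  # a distribution
print(f"rational prediction: {rational:.2f}")
print(f"behavioral range:    {min(noisy):.2f} to {max(noisy):.2f}")
```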

[00:05:06]

Yeah, I would pretty much agree with all of that. And I do think it's important to make the distinction between the different kinds of economics — I guess you didn't speak directly to macro versus micro, say. But macroeconomics, I'd say, is especially problematic, because you can't really do experiments: your objects of study are entire economies and countries, so you're basically just looking at the data that already exists about unemployment or GDP or what have you, and trying to find patterns and tease out possible causal relationships.

[00:05:45]

But of course it's really hard to make any conclusive statements about causality when you're just looking at correlations. So that's one big problem with macroeconomics. The other problem is that because your objects of analysis are countries or states — large entities — you don't have a lot of data, as opposed to, say, studying companies or people, where you could have datasets that run into the thousands or tens of thousands or hundreds of thousands.

[00:06:16]

And of course, the smaller your samples, the less reliable your conclusions are going to be. And then the other problem is that the underlying phenomena you're interested in may very well change over time. So if you're looking at the effect of, I don't know, wages on unemployment, there may be a very real relationship there, but the nature of that relationship may change over the years.

[00:06:44]

So you could limit your dataset to a range of just a few years to try to cut down on that problem, but then you have very little data. Or you can expand your dataset, so you have a lot of data — but now you're looking at a relationship that may fluctuate over time, and so it's going to look just like noise. You have no way of distinguishing what the relationship between your two variables of interest was at any given point in the dataset.
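(A toy illustration of that dilemma, with made-up numbers: a relationship whose true slope flips sign halfway through the sample fits cleanly in each short window but averages out to roughly zero — pure noise — over the full window.)

```python
import random

random.seed(1)

# One observation per "year"; the true slope flips sign halfway through.
xs, ys = [], []
for year in range(40):
    slope = 1.0 if year < 20 else -1.0
    x = random.uniform(0, 10)
    xs.append(x)
    ys.append(slope * x + random.gauss(0, 1))

def ols_slope(xs, ys):
    """Ordinary least squares slope, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

print("first 20 years:", round(ols_slope(xs[:20], ys[:20]), 2))  # close to +1
print("last 20 years: ", round(ols_slope(xs[20:], ys[20:]), 2))  # close to -1
print("all 40 years:  ", round(ols_slope(xs, ys), 2))            # near 0: looks like noise
```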

[00:07:10]

So macro is especially tricky. Not to mention, finally, that the level of complexity is huge when you're talking about countries instead of companies. There are just so many factors at work that even if you wanted to control for everything in your regression — to factor out confounding variables that might be getting in the way of the relationship you're interested in — there's almost no hope of controlling for every possible confounding variable, even if you had data on it, which you often don't.

[00:07:46]

So I guess I'm pretty pessimistic about macroeconomics. A lot of people are. There's one other thing to note: the problem is not necessarily the observational nature of the data per se. I mean, that is a problem, because ideally, as you said, we would like to do experiments. But it's not like there aren't sciences that are successful based on observation — astronomy is an observational science, for instance.

[00:08:14]

And the relationships between the planets are not as complicated; they certainly don't change over time as much as economic systems do. The other way to do it, of course, is by using natural experiments. We were talking about that last time, when we discussed how ecologists take advantage of natural experiments, and certainly human societies do present you with natural experiments. But the thing that makes it complicated, again going back to what Julia said, is that those natural experiments are much more variable and much more complex than the kinds of natural experiments that ecologists take advantage of.

[00:08:49]

So the problem is not necessarily, per se, the observational nature or the lack of experiments; the problem is that those problems are compounded by the fact that the system is so causally complex and variable over time. And of course, it's also capable of affecting itself, because we're talking about human beings: once you tell human beings something about the way they work, they take that into account and change their behavior accordingly, at least some of the time.

[00:09:16]

Yeah, I think that's probably especially a problem in financial models, because there the objects of interest — you know, people trading stocks — are paying attention to the research and care a lot about changing their behavior to optimize their returns. So you're going to get a lot of feedback there, which would make it hard for any model to stay relevant over time.

[00:09:40]

Next question. I'd like to ask about some theories, which I think fall into the realm of evolutionary psychology, that I've come across. One is — I believe there are some evolutionary psychologists who say things like: human beings were never meant to operate in groups of more than five people, and therefore all the mental illnesses we experience now, and all our problems and hatreds and conflicts and so on, come from ignoring that simple fact about an earlier stage in our evolution.

[00:10:19]

And then — I mean, that seems pretty easy to dismiss, even though I believe I did read it in a peer-reviewed journal. But something else is rather more serious. There's a recent finding — I actually read it in The New York Times science section, and I hope I'm saying this more or less correctly — that people experience a rise in dopamine when perceiving people who are like them, but have less of those feel-good chemicals in their brains when they see people unlike them.

[00:10:57]

And I've been around people who are trying to kind of dismantle our standard notion of racism as a bad thing. People working in anthropology and other sorts of quasi-science suggest that there may actually be some biological basis for racist reactions. And it seems to me all these things are very serious and dangerous, since they seem almost to be giving a scientific basis to some of these theories. Now, that's an excellent question.

[00:11:32]

And there are several interesting layers in there. First of all, of course, we need to make a distinction between whether this is good science or not, and then what kind of consequences it has for our understanding of society and how we behave in society. If you remember the last episode — which you guys were here for, five minutes ago — we talked about evolutionary theories of rape, for instance, and we made the same distinction there.

[00:12:04]

Right. It's one thing to ask: is a behavior like rape, or a behavior like racism, a result of the biology of being a human being? That's a scientific question. Then there is a separate question, which is: well, what do we do with that answer? Suppose the answer is yes, there is a biological basis to racism, or to this or that other kind of behavior we don't like — then what are we going to do about it?

[00:12:28]

Does that lend support, in some sense — comfort, say — to racist ideology, because we find out that racism does, in fact, have a biological basis? Now, I would like to say that there is no relationship between the first and the second question, but of course that would be naive. In theory there is no relationship, meaning that just because you find that something is the result of a natural process, that doesn't mean that something is ethically acceptable, or something we should be doing.

[00:12:54]

Philosophers have a term for that: the naturalistic fallacy. You should not equate what is natural with what is good — the obvious example being that poisonous mushrooms are perfectly natural, but I don't think that's a good reason to eat them. So clearly there is no logical connection between the two. But of course there is a societal and cultural connection. I mean, there's a long history of racist ideologies trying to use and co-opt science to support their ideas.

[00:13:24]

So I think it is naive of scientists involved in those kinds of studies to say, well, I'm only asking the biological question; I'm not concerned with the societal consequences. Well, you're a member of human society — you should be concerned with the societal consequences.

[00:13:38]

That said, going back to the other side, I still think it's an interesting question. And to address your specific example — the one about dopamine that came out recently in The New York Times; I read the same article — it's interesting, because dopamine is a hormone that is normally associated with feelings of warmth for your mate, for instance; the kind of thing you feel for friends, for people you know very closely.

[00:14:05]

Right. But it turns out that, as you mentioned, its level actually goes down significantly when you're presented with people from an outgroup — people you don't know, people who look different from you. So there does seem to be a biological reaction there. But the first distinction we need to make is between a biological reaction and an evolutionary basis for that reaction. I mean, anytime we react to anything, it's going to go through our brain.

[00:14:31]

Anything that I do or think or imagine is going to have some neural correlate, because that's how I think, imagine, and do things. Just because you find the neural correlate of X, it doesn't mean that X has an evolutionary basis. It certainly has a biological basis — but would you expect it not to? If some kind of behavior, thought, or emotion that we have did not have a biological basis, then I would start thinking that you believe in magic.

[00:14:57]

So, since I don't believe in magic, clearly there has to be a biological correlate. The question "is there a biological correlate?" — the answer is certainly going to be yes; we need to figure out which one, and under what conditions, and all that. And that's different from the separate question of whether that biological correlate arose as a result of evolution or not. As we said when we talked about evolutionary psychology, that latter question is much more difficult to answer.

[00:15:25]

It is plausible — I mean, it is a plausible story. But that is the problem with evolutionary psychology: a lot of it is "just so" stories, as the detractors of evolutionary psychology put it. It's an interesting scenario; it's perfectly possible. We can imagine natural selection favoring a stronger bond with members of your own group and some kind of negative reaction to members of the outgroup, because, after all, for most of our evolutionary history the outgroup probably did mean trouble.

[00:16:00]

Whether that happened or not, we don't know; it's much more difficult to test. And of course, even if it did happen, it doesn't lend any logical support to any racist theory, for the simple reason that to do so would be committing the naturalistic fallacy. What that means, however, is that evolutionary psychologists in particular really ought to be careful about explaining what they're saying, and to what extent what they're saying is actually backed up by experimental evidence. It's fine to propose an interesting hypothesis.

[00:16:34]

But if you start behaving as if that hypothesis were a matter of fact — as if it were certain — then you are overstepping your boundaries as a scientist, and you are possibly responsible for lending the appearance of scientific support to questionable ideologies. And that is very questionable behavior.

[00:16:55]

I think that was an excellent answer; I don't have a lot to add to it. Well, first off, it seems to me that even if you found an innate tendency to react negatively to people of different races, or to outgroup members, you could really spin that either way.

[00:17:13]

Like, I think a lot of the people who don't take racism seriously, and who don't support affirmative action or any sort of recognition of race, deny that racism is a problem. And so it seems like this could very well be taken as evidence to say: look, this actually is a problem — an innate problem — and so it's something that society has to pay attention to and work on. I don't know.

[00:17:39]

It seems like it, at least a little. Yeah. The other thing I wanted to mention — I'm curious what you would say about the evidentiary potential of doing natural experiments about the innateness of racism. It seems like you could look across history, across the world, at different societies that for one random reason or another — shifting geopolitical situations or, I don't know, changing borders — ended up with higher or lower concentrations of people of another race, and look at levels of tension or people's attitudes.

[00:18:19]

Although I guess what I wasn't sure about is whether, even if you found a causal relationship there through the natural-experiment method, that would tell you anything about the evolutionary basis. Right — that's exactly the problem. First of all, it's a very difficult kind of research to do. Qualitatively, you could do it.

[00:18:40]

But to come up with quantitative assessments — quantifying racism as one of many factors, when a bunch of other social factors might go into whatever behavior you're quantifying — is going to be difficult. Even assuming you can do it, as you say, by exploiting a natural experiment, at best what you're going to establish is that there is a correlation, or a correlation plus a potential causal nexus, between the two. Whether that causal nexus is in fact the result of evolution by natural selection, or a byproduct of other human behaviors, or whether it evolved by chance, is another question.

[00:19:19]

It might just have happened that things went one way or another, as an entirely stochastic matter — it would be really difficult to tell. Now, evolutionary psychologists do tend to favor the natural selection hypothesis by default. But that is a problem, because treating natural selection as the default winner inherently biases the statistical tests that you use for these kinds of hypotheses. Even in standard evolutionary biology, outside of evolutionary psychology, natural selection should be considered as one of a number of alternative hypotheses.

[00:19:53]

And once you do that, it becomes much more difficult, in the case of human societies, to really reasonably exclude some of the alternatives.

[00:20:02]

And that's another nice thing about Bayesianism as opposed to frequentism, right? You don't have a default hypothesis that you're testing against; you just have degrees of belief in alternative hypotheses. Next question.

[00:20:14]

There have been a lot of studies showing that placebos are just as effective as certain medications, even when the patient knows they are taking a placebo. I was wondering what the science behind that is, if there is any. Yeah, so — interestingly, the placebo effect, I recently learned, is not as strong as the common wisdom takes it to be. The common belief about the effectiveness of placebos comes from some studies done a few decades ago, I think in the 50s.

[00:20:55]

And they've been seemingly confirmed again and again since then. But usually, when you read about the effect of a placebo, it comes from a study with two groups: a group that is taking the medication and knows it, and a group that's being given the placebo and doesn't know whether they're taking the medication or not. And so they find a placebo effect from that. But what they don't have is a group that isn't taking anything at all.

[00:21:27]

So a lot of what seems to be the placebo effect is just the result of some people naturally getting better over time, which is going to happen anyway. That effect was getting lumped in as the placebo effect, and it was artificially inflating what we thought the placebo effect was. If you do a meta-analysis and look just at the studies that had all three groups, so that you can compare the placebo group to the no-treatment control group, the placebo effect is not actually nearly as strong — although the areas in which it is strongest are the more subjective outcome measures, like pain, as opposed to actual measurable improvement of, say, a condition.
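(A toy three-arm simulation of that point — all the rates are invented — showing how, without a no-treatment arm, natural recovery gets lumped into the apparent placebo effect.)

```python
import random

random.seed(2)

def improves(drug=0.0, placebo=0.0, natural_recovery=0.30):
    """Returns 1 if the patient improves; everyone shares the same recovery rate."""
    return 1 if random.random() < natural_recovery + placebo + drug else 0

n = 10_000
no_treatment = sum(improves() for _ in range(n)) / n
placebo_arm  = sum(improves(placebo=0.05) for _ in range(n)) / n
drug_arm     = sum(improves(placebo=0.05, drug=0.15) for _ in range(n)) / n

# With only two arms, the whole ~35% improvement in the placebo group looks
# like a "placebo effect"; the third arm reveals ~30% of it is natural recovery.
print(f"no treatment: {no_treatment:.1%}")
print(f"placebo:      {placebo_arm:.1%}  (true placebo effect ~ {placebo_arm - no_treatment:.1%})")
print(f"drug:         {drug_arm:.1%}")
```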

[00:22:01]

Right. So what is missing, as you say, is a double control: you should have a control for the placebo effect itself, and a lot of the classical studies do not have that double control. So they cannot separate out the normal reactions of the human body — which are in fact very powerful; the human body has an incredible, probably naturally selected, ability to heal itself from a lot of things — and you have to separate that from the placebo effect. You're right.

[00:22:27]

The recent studies that do separate the two make it clear that the placebo effect is much smaller than people thought. It also, as you say, applies to certain conditions more than others — the more subjective ones. You're not going to cure yourself of cancer by the placebo effect. But it does raise an interesting question in medical ethics: if a doctor knows that a placebo is going to make a significant difference in the treatment of a patient, should the doctor lie to the patient? Because for the placebo to be effective, the patient is presumably not supposed to know that he's not being given real treatment.

[00:23:05]

Well, actually, the research mentioned in the question shows that it can work even if you know it's a placebo. It does work in part even if you know, presumably because you don't quite believe that you're getting a placebo. Actually, I wonder whether the placebo effect working even when you know it's a placebo works only if you believe that the placebo effect works even when you know. If you followed that, congratulations.

[00:23:31]

But there certainly is evidence, however, that the placebo effect works better if the patient doesn't know. So that raises the question: if you're a doctor, is it ethical for you to essentially lie on purpose to your patient? I don't have an easy answer for that either way. What makes it perhaps less dramatic than some people think is precisely this idea that the largest placebo effects are not found in hard physical conditions; they are found for things like pain and psychological conditions, much less for things like heart attacks or cancer.

[00:24:11]

So it's unlikely that your doctor is going to have to lie to you about that sort of thing. Next question. In last week's episode, you guys used the word "falsifiable" a lot, in the common-parlance way where it's kind of this gold standard. Yet, Massimo, in your book you actually come down pretty hard on the concept of falsifiability. So I was wondering if you guys could maybe elaborate a little on the idea of falsifiability and its limitations.

[00:24:41]

Yeah, a great question. The idea of falsifiability was proposed by Karl Popper, who was a very influential philosopher of science in the first part of the 20th century. Popper was concerned precisely with the distinction between science and pseudoscience — what is called the demarcation problem: how to separate the two. And he figured that the best way to separate them was to say that a hypothesis or a theory is scientific

[00:25:14]

if it can be falsified — meaning that there is a way, at least in principle, to show that it is false, assuming it is false. If a theory cannot be falsified, according to Popper, then it may even be true, but it's not science, because literally any piece of information, any new data, is compatible with it; there's no way to disprove the theory. His classic examples were Freud and psychoanalysis on the one hand, and Einstein's general theory of relativity on the other.

[00:25:44]

Popper thought: well, look, essentially any human behavior is compatible with psychoanalytic theory. No matter what you present to the psychoanalyst, he will say that in one way or another it fits the theory. And Popper had examples of this. Now, if that is the case, then psychoanalysis could even be true — he wasn't about to dismiss the whole thing; he said, Freud could be right.

[00:26:11]

But there's no way to know, because there's no way, even in principle, to do the experiment or the observations that would really seriously test the theory. Compare that with the theory of relativity. When Popper was writing, the general theory of relativity had just been spectacularly confirmed by observations made during a total eclipse of the sun. Einstein had predicted that light should be slightly bent when it passes close to a very massive body, because of gravitational effects that are evident only at relativistic scales.

[00:26:46]

The theory was tested during an eclipse because the sun is such a massive body, and people looked at the stars that were being eclipsed by the sun. The prediction was that the stars should reappear at slightly different positions as the sun passed by, because their light was bent — you would basically be able to look a little behind the sun. Sure enough, people made the observations, and the observations spectacularly confirmed Einstein's prediction. That made Einstein an overnight celebrity.

[00:27:17]

He was literally on the front pages of every newspaper in the country and, in fact, in the world. And Popper said: you know, that's the way you do science. You stick your neck out with a hypothesis, to the point where it can easily be chopped off — and if your neck survives the chopping, then your theory survives another day. Now, the interesting thing about Popper's idea is that all that shows is not that the theory of relativity is true.

[00:27:43]

It only shows that it's a scientific theory, because it's testable, and that in that particular example it was not falsified. Popper thought that science proceeds by eliminating bad ideas, but that you're never sure whether the current theory is correct, because there could always be another test, and the theory could fail the next test. And sure enough, that's pretty much the way a lot of scientists think science proceeds. Now, what's the problem with that? It sounds like a really beautiful idea.

[00:28:10]

I mean, I think it counts as one of the best ideas of the 20th century in philosophy, for sure, and to some extent in science. There are several problems with it, however. One of them was pointed out by a physicist, actually, who had an interest in the philosophy of science: Pierre Duhem. The problem with the idea of falsification is that you can always modify the theory — or some ancillary hypotheses that go into it, or some of the assumptions that go into it — and rescue it if the data don't fit.

[00:28:45]

And in fact, philosophers after Popper showed that that's exactly what scientists do. In my book I go into one of these examples: the case of the discovery of the planet Neptune, a classic textbook case in philosophy of science. At the time, the last planet known in the solar system was Uranus, and if you applied Newtonian mechanics to predict the orbit of Uranus, you would regularly be off by just a little bit — which suggested there was a problem with Newtonian mechanics.

[00:29:20]

This was a regular problem: every time the astronomers made the predictions using Newtonian mechanics, the predictions came out a little off. So that concerned a lot of people. Now, that seems like a classic example of falsification: you make your predictions and they regularly come out wrong, so clearly you should discard the theory. But the astronomers did nothing of the sort.

[00:29:42]

They didn't discard Newtonian mechanics. What they thought instead was: well, maybe there is another planet — Neptune — causing these gravitational anomalies; let's see if we can rescue Newtonian mechanics by postulating the existence of this planet. Sure enough, they made the calculations and they found the planet. Now, fast forward a few decades — actually, more than a century from the start of the story — and the same exact problem happened with the orbit of Mercury. Mercury was also drifting off track if you did the calculations based on Newtonian dynamics.

[00:30:16]

And so at that point, of course, the astronomers said: aha, we've seen this happen before. Clearly we're not going to throw out Newtonian mechanics; what we're going to do is postulate the existence of a planet that we haven't seen yet — in this case, it would have been a planet closer to the sun. They were already so sure the planet existed that they even gave it a name, Vulcan — no problem, because a lot of them were Trekkies — and they calculated where Vulcan should be.

[00:30:43]

And then they looked, and Vulcan wasn't there. This went on for a number of years, and people were convinced the planet was there, just elusive because it was too close to the sun. But however often they recalculated the position of Vulcan, Vulcan never showed up. It turns out that in that case Newtonian mechanics was in fact wrong and had to be thrown away: Mercury is close enough to the sun that relativistic effects are detectable. What was causing the anomaly in Mercury's orbit was the fact that Newtonian mechanics is wrong and needed to be discarded.

[00:31:16]

That little example shows you how difficult it actually is to apply falsification, because at any particular time in the history of science you're never sure whether a theory that has just apparently been falsified can reasonably be rescued — and will live on for another battle, as Popper might say — or whether that's in fact the end of it and it needs to be discarded in favor of something else. That's why a lot of philosophers of science these days think that falsificationism was a great idea and a great starting point for thinking about how science works.

[00:31:48]

But it's a little too simple; there's a lot more going on in the actual workings of science.

[00:31:54]

In the last episode we were talking a little bit about Bayesianism, by which you assign degrees of confidence to the alternative hypotheses, and that seems to me to make a lot more sense. I actually never understood why people talked about falsifying or confirming a theory, because, as Massimo was saying, there are always alternative ways you could explain something. You could say: well, maybe it was the instruments I was using — maybe my telescope was miscalibrated.

[00:32:22]

And so the logical thing, it seems to me, is to just compare the relative likelihoods of those different hypotheses: how likely do you think it is that your telescope is off, versus how likely do you think it is that general relativity was disproven? I guess in practice it's probably hard to assign degrees of confidence to every possible alternative hypothesis in a way that would be rigorous enough to make this a workable practice. But that seems to be how you should do it.
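(A minimal sketch of the comparison Julia describes, with invented priors and likelihoods, restricted for simplicity to just two candidate explanations of a single anomalous observation.)

```python
# Two competing explanations for an anomalous telescope reading.
# All numbers are invented for illustration.
prior = {"telescope miscalibrated": 0.999, "relativity wrong": 0.001}
likelihood = {"telescope miscalibrated": 0.50,  # P(anomaly | miscalibrated)
              "relativity wrong": 0.90}         # P(anomaly | theory wrong)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
for h, p in posterior.items():
    print(f"P({h} | anomaly) = {p:.4f}")
# The mundane hypothesis still dominates; it takes repeated, independent
# anomalies to move serious probability onto "relativity wrong".
```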

[00:32:51]

And that is an excellent point — which is why, incidentally, I tend to favor what in philosophy is called Bayesianism, a philosophy of science based on Bayesian analysis, when it comes to comparing scientific theories. But the point you bring up is very interesting because, as it turns out, historically Popper was writing about falsification at the same time that Ronald Fisher was putting together frequentist statistical theory. And Fisher was, in fact, inspired by the same kind of thinking, the same kind of ideas, that Popper was talking about.

[00:33:23]

We don't actually know whether they corresponded — there's no hard evidence that they did — but they were writing at the same time, and those ideas were being thrown around in the literature. And in fact, you can see frequentist statistics, with its emphasis on a null hypothesis that needs to be rejected or retained, as essentially the statistical equivalent of falsificationism. They're really the same idea: one is put in qualitative, philosophical terms,

[00:33:51]

the other in quantitative, statistical terms. But it's the same idea. Interesting. And more and more statisticians these days, I think, are leaning in favor of tossing out the falsificationist methods — the frequentist methods — in favor of Bayesian methods when possible. Yeah. Next question.

[00:34:10]

Hi. How widespread do you think it is that scientific results are the product of corporate greed and influence?

[00:34:20]

That's actually a very serious question. Well, we have data — the good thing about science is that you can do studies on how science itself works. One of the pertinent ones came out, I think, a couple of years ago, in which people compared studies of the efficacy of drugs, separating studies where the financial support came from a government agency like the NIH or the NSF from studies where the support for the research came from the pharmaceutical industry.

[00:34:58]

Now, the studies supported by the pharmaceutical industry were done independently — these were not studies done by the pharmaceutical industry itself; they were done by independent researchers at universities or medical laboratories, but funded by the pharmaceutical industry. And it turned out that when the funding came from the pharmaceutical industry, the studies were about three times as likely to find a significant effect of the same drug as when the studies were funded independently of the pharmaceutical industry.

[00:35:25]

Now, the implication here is not — at least most of the time — that scientists are engaging in fraud: "I know where my money comes from, so I'm going to tweak the data to make the pharmaceutical industry happy." That is not the implication. Of course that happens — we have documented cases of fraud — but that's a whole different discussion. Most of the time, according to this particular paper, what happens is simply an unconscious bias, which is very difficult to eliminate, because the only way to eliminate it is to conduct the research using the highest possible standard, which is a double-blind

[00:36:04]

experimental protocol: a situation where not only does the patient not know whether he or she is getting the treatment, but the person analyzing the data doesn't know which patients got the treatment either. Obviously, somebody has to keep track of this information, but it's all barcoded: when you do the analysis, you have no idea which experimental group you're looking at. When you do things that way, the bias goes down dramatically.
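(A minimal sketch of that kind of coded bookkeeping — hypothetical patients, codes, and outcomes, just to show the shape of the protocol.)

```python
import random

random.seed(4)

patients = [f"patient_{i}" for i in range(8)]

# A third party assigns arms and issues opaque codes; it keeps both tables.
arm = {p: random.choice(["drug", "placebo"]) for p in patients}
code = {p: f"{random.randrange(16**8):08x}" for p in patients}

def measured_outcome(p):
    """Invented outcome: a baseline measurement plus a made-up drug effect."""
    return random.gauss(50, 5) + (3 if arm[p] == "drug" else 0)

# The analyst receives only (code, outcome) pairs -- no arm labels.
blinded = {code[p]: measured_outcome(p) for p in patients}
print(blinded)
# Only after the analysis is locked does the third party reveal code -> arm.
```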

[00:36:36]

The problem is that large-scale double-blind experiments are exceedingly expensive, so it's very difficult to do them; they cost a lot of money. But in general, I think the answer to your question is yes, there is bias out there. We have to remember that science is done by human beings, and as much as scientists often talk and think as if they had a God's-eye view of things and a completely objective way of looking at them, they don't. They're human beings, just like anybody else.

[00:37:07]

There are countless psychological experiments showing biases of that sort — again, often unconscious bias, which is all the more problematic because you can't control it by firing somebody: if a person commits outright fraud, you can just fire him, and that solves the problem. There are other biases as well. In scientific research there is a file drawer effect, for instance, which has also been studied quantitatively.

[00:37:35]

This is the idea that a lot of scientists do research and then publish only if they have positive results, because negative results aren't interesting: if you don't find a connection between A and B, it's not interesting. And there are two levels at which this bias happens. First, the scientists themselves — and I know because I have a lot of colleagues who do this — most scientists have large piles of data that are unpublished for the simple reason that they never even tried to publish them, because they themselves made the assessment that, well, there's nothing here.

[00:38:10]

It's not interesting. And then, even if they do think that a negative result is interesting, the next level is to convince the editor and reviewers of a journal that the result is in fact interesting — and a lot of editors and reviewers do not publish negative results, for the simple reason that journal space is precious: it costs a lot of money to publish in a scientific journal, and therefore they don't want to, quote-unquote, waste their space on negative results.

[00:38:37]

What's the outcome? Occasionally you see these claims that scientist so-and-so has found a positive connection between whatever it is and whatever else. It turns out that the connection appears because there may have been twenty studies showing no connection between A and B, and only one — by chance — that did show the connection. The only one that gets published is the one that shows the connection, because that's the only one that's interesting.
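(A toy simulation of that arithmetic, with made-up numbers: run batches of twenty studies of a truly null effect, each tested at the conventional 5% level, and count how often at least one comes out "significant".)

```python
import random
import statistics

random.seed(3)

def significant_study(n=30):
    """One study of a truly null effect; True if 'significant' (crude z-test)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

batches = 2_000
hits = sum(any(significant_study() for _ in range(20)) for _ in range(batches))
print(f"P(>=1 'significant' result in 20 null studies) ~ {hits / batches:.2f}")
# ~0.64 -- if only the hits reach the journals, the literature looks positive.
```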

[00:39:03]

Now, these things are well known; they can be studied and quantified — there are ways to quantify the file drawer effect — and people are aware of them. So, as a general problem, we are aware of it, and people can do things to counteract it. The problem is that it's very difficult to tell, in the case of a specific finding, whether that finding is the result of a bias of some sort or is in fact genuine.

[00:39:28]

Which is why I would trust things only when they have, in fact, been repeated over and over and confirmed independently, over a period of time, by different laboratories. To give a specific example: one of these things that came out recently is the question of whether vitamin supplements have a positive effect on your health or not. For some time there was a good number of studies showing that there is, in fact, a positive effect of taking multivitamins.

[00:39:56]

But the latest seems to be that there actually isn't. What happened was that only the few studies where there was a spurious correlation — a spurious effect — were published, and a bunch of other studies that showed no connection were not published. The result is that once you account for those, it turns out that taking multivitamins — from what I understand of the current medical literature — the only thing it does is cause you to pee very expensively.

[00:40:21]

And that's about it.

[00:40:24]

I was going to bring up the file drawer effect myself, because — with the difference you mentioned between the results of the drug-company-funded studies and the non-drug-company-funded studies — well, I wouldn't be surprised if unconscious bias on the part of the scientists was a big factor explaining that difference, but I also wouldn't be surprised if the drug companies not publishing negative results was a huge component of it. I mean, the companies aren't going to submit studies for publication that show their drug doesn't work; they're just going to keep trying with other drugs or other studies or other outcome measures.

[00:40:59]

So part of that could be the explanation. Although the good news is that — I guess it was four or five years ago — the US government established a national registry of drug trials to try to combat the file drawer problem. You have to register your study before you even conduct it now. You still don't have to publish it if it turns out negative, but there's at least a record that it was conducted, so that when we read the studies that are published, we can interpret them in the context of all the studies that were actually performed.

[00:41:34]

Obviously, a study that shows a positive result is going to be much more meaningful if it was the only such study performed, or one of two, rather than the one study out of thirty that happened to show a positive effect — in which case we're looking at the file drawer problem.

[00:41:52]

Right. There's one more thing to say, unfortunately, about private funding of research — which is necessary: unless this country decides all of a sudden to increase public funding for science by ten orders of magnitude, which is not going to happen (in fact, the latest I hear from Congress at the moment is a cut at the NSF, not an increase), we need to rely at least partially on private funding, especially for medical research.

[00:42:20]

But there is another problem with that, which has also been documented over the last several years: a lot of pharmaceutical companies bind researchers with proprietary agreements, so that the researcher has to get permission from the company in order to publish the results of the research. And it has happened in several cases that researchers actually had to go to court to get the company to drop that clause from the contract.

[00:42:47]

And that is pernicious because, of course, the idea of scientific research — no matter where the funding comes from — is that the results ought to be open; open access is the whole point of doing science. If you start saying, no, I'm going to publish only things that are positive for my bottom line, and I'm not going to publish things that could hurt my bottom line, then that's it. That's the end of the game.

[00:43:08]

The entire exercise becomes just a big publicity campaign for the company. And unfortunately, that has happened several times in the last few years. There's the famous case of a researcher in Canada who was not even backed up by her own medical institution — a research university hospital. She had to fight personally for several years before she eventually got her day in court. And that, of course, can be very expensive: we're talking about a single individual fighting a large, well-funded pharmaceutical company, and you know what that means in terms of odds.

[00:43:39]

And you know what that means in terms of odds. So those are things that are that we are aware of and there are very pernicious and they ought to be fought against it as a societal level. I mean, I think I think that pharmaceutical companies or any other private source of funding for scientific research should agree upfront that whatever the results are, they're going to be open access. Otherwise you can keep your money because it's useless. Next question. I'm interested in hearing from you regarding how some intelligent design proponents seem to be able to have a fairly good disconnect in having a productive scientific career, as long as long as it does not entail any aspect of intelligent design.

[00:44:24]

I'm interested in your speculation. That's a good question. There are pretty obvious examples; my favorite one is probably Michael Behe, who is a biochemist at Lehigh University. Behe has published legitimate research in biochemistry, which is his discipline; it's been published in major journals in the field, and it's been peer reviewed. As far as I can tell — I'm not a biochemist, but I play one on TV —

[00:44:53]

as far as I can tell, there's nothing wrong with that research. And then, of course, he veers off and publishes books for the general public supporting intelligent design, denying evolution, and that sort of thing. Now, how can somebody do that? My guess is that these are people who have a high threshold for cognitive dissonance — people who literally have no trouble compartmentalizing their brains, or leaving their brain at the door when they go to church, or whatever metaphor you want to use.

[00:45:28]

Now, to some extent we all have a certain capacity for cognitive dissonance — if you try to live your life in a completely logical and logically consistent manner, good luck. My favorite example of that is Gödel, who was a famous logician in the first part of the 20th century, the author of the incompleteness theorems that undermined the program of coming up with an entirely self-consistent mathematics and logic. A big, big shot in the community.

[00:46:00]

He was an immigrant to the United States in the nineteen thirties. He was at Princeton, and Princeton told him it would be better if he went through the citizenship process, because it would make things much easier for Princeton. And he refused to do so, apparently for a number of years, on the grounds that he had found logical inconsistencies in the American Constitution — and as a logician, he couldn't possibly sign on to something that was logically inconsistent.

[00:46:25]

That is an example where a little bit of cognitive dissonance is fine; it allows you to live, and you just have to deal with it. Interestingly, the ability to withstand cognitive dissonance varies across the human population, and it can be studied, of course, by cognitive scientists and psychologists. So we know that there's variation in the general population. We know, in fact, that you can alter it chemically.

[00:46:54]

You can alter the proportion of certain peptides in the brain and all of a sudden turn a skeptic into a true believer, and vice versa. I mean, you can do experiments where people are shown a fuzzy screen on which there is actually no image, and they're asked: what do you see? People who are generally gullible will see something, and people who are skeptical won't see anything. But then you give them a neuropeptide, and the skeptics all of a sudden start seeing patterns.

[00:47:25]

So you can actually study this from a biological perspective; it's fascinating. But my hunch is that people like Behe and other proponents of intelligent design who also have a background in science or in philosophy and mathematics — another example is Bill Dembski — are smart people. These are unquestionably smart people. The idea that only idiots believe certain things about the world is something we should really stay away from, because it's quite obvious that plenty of smart people believe all sorts of bizarre things.

[00:48:02]

Francis Collins, the current director of the NIH, is a fundamentalist Christian with a fairly strict belief in certain dictates of the Bible, and he's one of the best scientists in the world. Can we argue that Francis Collins is an idiot? I don't think so — that would be a very strange argument. Can we argue that Francis Collins has an incredible ability to deal with cognitive dissonance? Definitely, yes.

[00:48:28]

I know we've had discussions on the blog and in other episodes about this issue — about compartmentalizing rationality in general. Part of this came up in the context of talking to Eugenie Scott of the National Center for Science Education, because they are trying to promote acceptance of the idea that you can have religious beliefs and still be a perfectly brilliant scientist, and that the two don't have to conflict. And that position has caused a lot of controversy.

[00:49:06]

I mean, I think clearly, as Massimo explained quite thoroughly, there is plenty of empirical evidence that there are very brilliant scientists who have one or two compartments in their brain to which they just don't apply the scientific method or rationality. And there's a good number of them.

[00:49:23]

And frankly — in this context I'm just saying that clearly this is possible. But personally — and I haven't empirically tested this; it's just sort of my theory — I would have a hard time fully trusting anyone who compartmentalizes to a large degree. Because what that means is that when they get a scientific result, or when they believe something, they don't just believe it because the evidence points to it.

[00:50:00]

They believe it because the evidence points to it and they like it. And we know that's the reason, because when they don't like it, they don't follow the evidence. So I guess I put the most trust and the most respect in people who don't compartmentalize at all, who don't try to compartmentalize — because then I have confidence that when they get a result, or when they get evidence that they don't like, they're still going to be able to keep an open mind and consider it.

[00:50:31]

What they prioritize most of all is intellectual honesty, and they're going to go wherever the evidence leads them. So I don't know how much we can extrapolate from the fact that someone does not apply the scientific method to their religious beliefs, but I have this hunch that it would make them more likely to rationalize away, or explain away, evidence that they didn't like in other contexts, too. But I don't have evidence for that.

[00:50:56]

Well, this is an interesting question. By the way, in these last two episodes we haven't had a single instance of the two of us disagreeing, so it was about time we brought one up. I understand what you're saying, but my disagreement is this: I don't think there is any such thing as a human being who doesn't compartmentalize. Right, exactly — it's a matter of degree. And so we all come to whatever it is we're approaching — be it science or metaphysical beliefs or whatever — with biases.

[00:51:26]

Again, nobody has a God's-eye view of things, no matter how much they think they do. The question is how much bias there is, and whether there is evidence for that bias — that, one can discuss; the evidence can always be discussed. But I am wary because, you know, it's pretty obvious what the bias of Francis Collins is. It may be less evident to us, as a largely secular community, what the bias is on the other side.

[00:51:52]

But there is a bias on the other side as well. I don't want to mention names, but let's say Jerry Coyne — just without mentioning names. He's an evolutionary biologist who has been prominent in criticizing the National Center for Science Education that you were referring to earlier. Now, in that particular case, I think we can make a distinction, and I have a hard time seeing why so many people disagree with it.

[00:52:19]

The distinction is a logical one. I think that the NCSE and Eugenie Scott are perfectly right in pointing out that very smart people can be religious — again, Francis Collins being the example — and therefore that there is a way in which human beings can be scientists, or accept science, and be religious at the same time. Where I think they cross the line — and occasionally they do cross the line — is in supporting a stronger kind of position.

[00:52:53]

I mean, the NCSE has, in fact, supported or organized events promoting this fuzzy idea that you can logically and consistently hold both sets of beliefs together at the same time.

[00:53:07]

I understand why they make that move. I guess, just to close, I would say that the more someone demonstrates to me that they're willing to endorse a belief even though they have an emotional aversion to it, or an emotional bias against it, the more I would trust and respect that person's conclusions on other issues as well. That sounds good.

[00:53:33]

We have time for probably one more question. It relates a little bit to an issue that came up on your blog recently with regard to claims made by animal rights groups and vegetarians. I recently got into a rather heated debate with a passionate friend of mine who is a vegan and a big fan of PETA, for instance. Yes. And I was interested in hearing you address some of the scientific and moral issues that often come up around animal rights.

[00:54:12]

That's an easy question! OK. First, let me say I think there should be a ban on Nazi comparisons in American discourse in general; it's just getting out of control.

[00:54:30]

That said, it's a really interesting question for a variety of reasons. One of them is that, as you pointed out, this is an intersection between philosophy — particularly ethics, of course — and science. There are certain claims made by vegetarians or vegans, or on the other hand by omnivores, that can be investigated scientifically — you know, does an animal feel pain or not? Well, we know something about pain. Strictly speaking, it's impossible to know whether even another human being really feels pain, or just acts like it.

[00:54:59]

Right. But there are reasonable inferences you can make. The physiology of pain is actually fairly advanced, and we know, as far as we can tell, what kind of neurological system you need in order to feel pain. And there are certain kinds of organisms that clearly — at least in most people's opinion — don't feel pain, like plants or bacteria. Why? Because they don't have a nervous system at all. The interesting part comes when you have animals that do have a nervous system, but one that most biologists feel is not

[00:55:30]

complex enough, and doesn't have the specific fibers that carry the sensation of pain. A lot of invertebrates fall into that category, although not all invertebrates; that's questionable. Right. And at that point, you have to make a judgment call as to what counts as pain. And of course, the discussion is complicated because pain is not the only criterion that vegetarians bring up. There are also a bunch of other issues involved in vegetarianism, for instance, environmental impact.

[00:56:00]

Right. But let's focus on pain, just to fix our thinking here for a minute. If your primary concern is pain, you know, "I don't eat anything that feels pain," then you clearly can eat bacteria, mushrooms, plants and things like that. You clearly cannot eat anything that is a mammal, or probably even birds, for instance. But what do you do with things like fish or some arthropods?

[00:56:31]

Well, then it comes down to a judgment call. If you want to be conservative, you can say, well, I'm not going to eat anything that has a nervous system at all, because I don't know. If you want to be more liberal, you can say, well, I'm going to eat things like low-level invertebrates, because they probably don't feel pain, but I'm going to stay away from things like squid and octopus, for instance, because they do have a complex nervous system.

[00:56:52]

They probably can feel pain. I think with fish and sea creatures, just briefly, there's a danger there, too, because they're so alien-looking. It's really hard to see, like, facial expressions on them that we would recognize as corresponding to pain, whereas even with a mouse, we can see them sort of scrunch up their face, so it's easier for us viscerally to believe that they're feeling pain. So you have to sort of use your intellect instead of your emotions when thinking about the likelihood that fish and other sea creatures feel pain, and look at the sophistication of their nervous system, for example.

[00:57:30]

Right. So for the first part, the answer is yes, there is a science that bears on it; there is a science that bears on the question of pain. Clearly, you can make a similar argument about the environmental impact: you can study the environmental impact of different practices and compare them, if that's your criterion. What you need to do, however, at some point, is to also make the ethical decision.

[00:58:00]

That is, well, OK, why are these criteria important to me? Why is pain important? Why is environmental impact important? And what kind of environmental impact? Because any practice is going to have an environmental impact. Now, that's where the ethics comes in. And the ethics ought to be informed by the science, clearly, but in my opinion it's not determined by the science. You still have to make value judgments.

[00:58:23]

You have to first determine what is important for you, and to what degree, and then you ask the science, well, what is the best that science can tell me about that particular criterion? So the question, at bottom, is also inherently ethical. And I have to say, I thought about this for a while. As you mentioned, there have been several posts on the Rationally Speaking blog about this. Some of my friends are vegetarian or a variation thereof: vegan, pescatarian, or whatever.

[00:58:53]

I have to say, whenever I think about it carefully, it seems to me that at least the vegetarians definitely have the better ethical ground. I'm not so sure about the vegans; I'm not ready to concede the vegan part yet. That said, I still slip and eat some meat from time to time, and I have to live with that kind of consequence. But that goes back to my earlier point about logical consistency.

[00:59:17]

You know, some people are more logically consistent about their beliefs than others. But even so, it's not quite that easy, even for the vegetarian, because the impact of a vegetarian lifestyle itself is not that well worked out. I mean, there are environmental impacts there as well, and there are impacts in terms of human physiology and human health, and so on. It's not quite that simple; it is not a black and white situation.

[00:59:41]

But overall, I think both the science and the ethics are, in fact, on the side of at least the vegetarians, if not the vegans.

[00:59:50]

Just briefly, before I get into answering the question, in regard to your most recent point about the impact in general of any lifestyle, even a vegetarian one: I think it's really easy to say, well, what's the point of making this sacrifice, because there's still going to be all this other stuff that I'm not doing, or that I can't do anything about. Which, I mean, if people made that argument in other contexts, it would be clearly absurd.

[01:00:17]

It's a bit like, why bother saving this person? There are other people who are dying. I mean, you know, I could easily save him, but... that was meant to be the other way around. I'm sorry.

[01:00:30]

So, yeah, I think a lot of things get conflated together when people are talking about the ethical implications of eating animals or animal products. First, the questioner mentioned the question of whether animals can feel pain. Usually when people talk about this, they talk about how self-aware or how intelligent the animal is, and I guess the implication there is that you don't suffer if you're not above a certain level of intelligence or self-awareness.

[01:01:02]

And I think it probably is true that you suffer less if you're below that threshold. And I think it probably is true that your suffering is worse if you have that awareness, to be able to say, oh God, I wonder if I'm going to die, I miss my family, and this is horrible. That adds a whole other layer of suffering. But it doesn't mean that the pain itself is not incredibly unpleasant.

[01:01:30]

So I think that tends to be sort of a strawman that people set up. There is much to say about this, but I would also say that the question vegetarians tend to address is often the question of the animal's right to life, of whether it's OK to kill the animal, which is a very different question from the pain question. Because you can be a vegetarian and buy animal products, and even then, the animals that are being raised for animal products are being killed.

[01:02:04]

So you're still not avoiding killing the animal if you're a vegetarian. But even so, the animals' lives in the factory farms are pretty unpleasant. And so to me, it always seemed like the real ethical question was less about the killing of the animal and more about the suffering of the animal during its life. Because, you know, the animal lives, it varies, but anywhere from a few months to a few years,

[01:02:28]

and it's a pretty unpleasant life. So it seemed to me that the question of a few years of suffering is a bigger question than that of whether or not we kill the animal. But then again, you also have some people who will argue that, well, these animals wouldn't have existed if it weren't for us, and so really anything we do to them is fine, because any life is better than no life. Which I have a hard time believing that they really believe.

[01:02:53]

Like, it seems really patently clear to me that some existences are just worse than not existing. But I've met people who deny that. At the same time, I think they also probably don't believe that, because if you ask them whether it would be OK to breed a dog and keep the dog in slavery and beat the dog every day, they would probably say no. So it seems clear to me that that's a rationalization, too; if they tried to apply that same logic to other animals that they actually have an aversion to treating poorly, then they wouldn't see any merit to the whole

[01:03:27]

"well, it wouldn't exist if not for me" argument. On that note...

[01:03:31]

On that note, OK, I guess we're out of time. These were excellent questions. You guys have been a wonderful audience. Thank you so much for coming. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[01:03:52]

Oh, stop throwing flowers, stop throwing flowers, I appreciate it. Hey, what is that?

[01:04:00]

Oh, it's the schedule of speakers for NECSS. NECSS? Yeah, it's the Northeast Conference on Science and Skepticism.

[01:04:06]

Oh, are you going? Yeah. Nice. Tell me, what is NECSS? What's NECSS?

[01:04:10]

Oh, well, it's an educational conference held annually in New York City. NECSS explores the intersection of science, skepticism, the media and society for the purpose of promoting a more rational world.

[01:04:21]

How do you spell NECSS? N-E-C-S-S. N-E-C-S-S? How should that be pronounced, "neksss"? My English is not so good. But N-E-C-S-S doesn't spell nexus.

[01:04:33]

Well, OK, yeah. But just to give you a sense, what's wrong with "Necsusa"?

[01:04:39]

Yeah, "Necsusa." I have a brother-in-law, that's his name. No, you do not. But it just rolls off the tongue. Your tongue, maybe, but not mine.

[01:04:47]

So who are the speakers going to be at NECSS, nexus, whatever? Tell me. Well, scheduled to appear are Brooke Alcina Villa, Evan Bernstein, Steve Novella, John Allen Paulos, Julia Galef, Massimo Pigliucci, John Rennie, Thomas Gilovich, Todd Robbins, Jennifer Michael Hecht, Eugenie Scott, George Hrab, Rebecca Watson, Daniel Kahneman, Carl Zimmer, Bob Novella, and keynote speaker Phil Plait. Oh yes, the Death from the Skies man. How do you do that?

[01:05:16]

Do what? Make it sound so bouncy, bouncy like that. Well, I am a podcaster. Oh, that's very nice.

[01:05:23]

Tell me, what is a podcaster doing at NECSS? NECSS, the Northeast Conference on Science and Skepticism, April 9th and 10th in New York City. Go to necss.org, or check out our Facebook page and Twitter feed. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York.

[01:06:07]

Our theme, Truth by Todd Rundgren, is used by permission. Thank you for listening.