[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at NYCSkeptics.org. Welcome to the Rationally Speaking podcast, where we explore the borderlands between reason and nonsense. I'm your host, Massimo Pigliucci, and with me is my co-host, Julia Galef. So, Julia, what are we talking about today?

[00:00:46]

Massimo, this is another very special episode. This is our first live episode of Rationally Speaking. We're here with an audience at the Jefferson Market Library.

[00:00:57]

I don't hear them. ... Thank you. That's better.

[00:01:05]

Now, you listeners at home, hearing this recorded after the fact, since you can't see the multitudes, the adoring masses out there, you'll just have to take my word for it that they are indeed adoring and they are multitudinous. I think it never lies. So. No, never. Never. Yeah. You'll just have to trust me. There are a lot of handmade signs out there with "We love you, Julia and Massimo," though Julia does not like that one.

[00:01:31]

So. Right. So on this live episode of Rationally Speaking, we are going to be talking about Massimo's recently published smash hit book, Nonsense on Stilts.

[00:01:42]

I'm on a roll today. Go ahead. We're only ten minutes in. So it's called Nonsense on Stilts: How to Tell Science from Bunk, published by the University of Chicago Press. And I just reread it; I read it first when it had originally come out, and I read it again last week. That's more than I did. I hope you remember what you wrote, most of it, enough to be prompted. You know, it's a really excellent book.

[00:02:09]

I really enjoyed it.

[00:02:11]

I would be hard pressed to think of another book that has such a broad range, from abstract theoretical analysis of really tough questions like what is science, how can we trust science, and what makes science reliable, running the gamut all the way to social and cultural questions about the propagation of science and nonsense in our culture. Massimo analyzes factors like trends in academia and in politics and in society in general, and looks at how they've essentially given nonsense a boost in recent years, and what we can do about it.

[00:02:59]

It's a really enjoyable read and really well written. Thank you. So now the hard part: let's see, did I read it. So one of your central themes in the book is how reliable research is, when we can trust research, in what contexts and about what subjects. And one of the ways that you break that down, in the beginning of the book, is between the so-called hard and soft sciences. So the hard sciences being, say, physics, sort of the quintessential queen of the sciences.

[00:03:31]

And then maybe chemistry, followed, I suppose, later on by biology, and then, towards the other end, the softer end of the hard-soft spectrum, sciences like psychology.

[00:03:44]

And so as you discuss in the book, the hard sciences get, I guess, a little more intellectual cachet than the soft sciences, and are in some respects considered more serious or more reliable. Speaking in your objective capacity as a former biologist, do you think that this is warranted? To what degree can we trust the hard sciences more than the soft? Yes and no. How's that for a clear answer? So it depends on what you mean by trust.

[00:04:13]

The reason physics is often considered, especially by physicists, the queen of the sciences is because of a couple of reasons. One of them is historical: physics was the first of the modern sciences to make, in fact, the transition from a protoscience to a science, right? So, depending on how you count, you can go back all the way to Copernicus, if you count astronomy as a part of physics, which normally is the case today, or at the very least to Galileo and then Newton, those thinkers.

[00:04:45]

And they came just at the beginning of the scientific revolution, way before, say, Darwin, who marks the transition of biology from protoscience to science. So even simply from a historical point of view. And then, of course, psychology made the transition at the end of the nineteenth century, beginning of the twentieth century, with William James and to some extent Freud, as much as his contribution is debated today. So just historically speaking, you know, physics gets precedence because it came there first.

[00:05:16]

That wouldn't be a particularly good basis, however, for considering it the queen of the sciences. I mean, at best that would give physics the status of grandmother of the sciences, but not queen. There are reasons why, of course, physics is, quote unquote, more reliable, or results in it are more reliable than results in, say, biology or, in fact, in the social sciences, but there we need to be careful about what we mean by reliable.

[00:05:41]

So one thing that is not true, even though it is in fact widely believed, is that the results of experiments in physics are more repeatable, more consistent than the results of experiments in, say, biology or psychology. That's actually demonstrably not the case. People have actually looked at the consistency and repeatability of experiments in the social sciences and the physical sciences, and they're about the same. Now, when you talk about results here, I think I remember what you're referring to.

[00:06:12]

You're talking about measuring quantities in psychology versus measuring quantities in physics. You're not talking about confirming a theory, are you now? Correct. In fact, we're going to get to that point, because that's a crucial distinction. But yeah, if you're talking about a psychologist doing an experiment under certain conditions, whatever the psychologist wants to know, how people respond to certain social cues, or investigating a certain way of thinking about a particular issue in life, those results are highly repeatable.

[00:06:45]

You can do the experiment under similar conditions and you get pretty much the same results. But what makes a difference between, say, physics and psychology in particular is that, as you hinted at a minute ago, physics actually has overarching theories that physicists are trying to test. That's not necessarily the case in psychology. I mean, Freud did try; obviously psychoanalysis was supposed to be an overarching approach and sort of a paradigm for the psychological sciences.

[00:07:15]

It turns out, however, that currently there is really no viable candidate; there hasn't been a viable candidate for an overarching theory in psychology for a long time, with one possible exception that we might get to later, which is evolutionary psychology. But if we're talking about standard psychological research, the results are highly repeatable, but the theory is really very ad hoc; there's no overarching narrative. Now take biology. Biology falls somewhere in between. Right.

[00:07:43]

For a variety of reasons. First of all, because biology is partly a historical science. You know, biologists have to deal with, even when they do experiments, systems whose behavior is affected by history, which makes them inherently more complicated and more difficult to study. So, you know, if a physicist does an experiment by throwing a bunch of electrons at a target to split atoms, or whatever it is that physicists are doing at high velocity in cyclotrons, those electrons are going to be the same, behaving the same, no matter where they came from, no matter how long they've been around.

[00:08:23]

At least that's the theory. If you do an experiment in biology and you do it with a particular population of a particular species of animals, plants, bacteria, or whatever it is that you're experimenting on, your results will depend on the evolutionary history of that population, that species, that group of organisms. Which means that findings in biology are inherently more difficult to generalize than findings in physics. So that is why, it seems to me. Although biology does have an overarching theory, of course, and that's the theory of evolution, which is a modified descendant of the original Darwinian theory.

[00:09:01]

That's nice how that works out.

[00:09:02]

Yeah, but it's been modified by natural selection of ideas, I would say.

[00:09:09]

So there is an overarching theory, so biology is in better shape than most of the social sciences. But that overarching theory does not provide biologists with anywhere near the precision that something like quantum mechanical theory allows physicists. But I argue in the book that part of the reason for that is not that physicists are smarter than biologists, or that there is any intrinsic reason why it's easier to do physics, other than the fact that physics literally deals with the simplest things in the universe.

[00:09:40]

So it really would be surprising if they didn't get highly precise and detailed theories, right? When you get to complicated systems, then the theories necessarily become more general, less reliable and more difficult to test.

[00:09:55]

So it's a combination of biology being a historical science and physics not, and also biology studying more complex systems, where it's more difficult to isolate a causal relationship than it is in physics. Right. So physicists aren't smarter than biologists, but are biologists smarter than psychologists? Well, I don't know of any experimental evidence that that is actually the case.

[00:10:15]

What about psychologists? Where do they rank on this scale?

[00:10:18]

I'm not going to go there. I mean, the fact of the matter is, once you spend five, six, seven years of your life doing a Ph.D. in any particular discipline, and then possibly years as a postdoc, and then years and years as a researcher, it is not a question of being smart. Clearly, you are smart by any definition of smart that counts in science. And also, let's not forget that plenty of physicists, for instance, historically have in fact crossed over to the biological sciences, and occasionally they've done well.

[00:10:53]

Maybe they needed a break, just relaxing. Occasionally, like Francis Crick, who helped discover the structure of DNA. Right. But they haven't been able to provide biology with the equivalent of quantum mechanical theory. And I think it's not because they were bad physicists. It's just that it's not possible, at least not at the moment. We don't know how to do that.

[00:11:15]

Yeah, there was one study that you cited about the explanatory power in the harder versus the softer sciences. In the softer sciences, the figure you were citing, the correlation coefficient, the quantity that tells you what percentage of the phenomenon you're interested in you've actually managed to explain, sort of maxes out around 25 or 30 percent. And the rest of the variation in that phenomenon, we don't know whether that's random or it's variables that we haven't accounted for, haven't thought of yet, whereas something like physics could get up to the 90s or even 100 percent of the variation.

[00:11:53]

So it does seem like, whatever you want to say about the relative intelligence of the people in each field, which I don't. That's right. Which you don't. Very diplomatic. It does seem at least that we should be more confident in the results from the hard sciences.

[00:12:09]

Right. But we also need to understand why that difference exists. That difference that you mentioned: actually, 30 percent of explained variation is a lot, and it maxes out at about that level.

[00:12:21]

A lot of research in ecology, evolutionary biology and psychology and other social sciences actually tends to hover around five or seven percent explained variation in whatever phenomenon people are studying. In physics, that proportion goes up to 90, 95, 99 percent easily. That's one reason why, for instance, social scientists and biologists are obsessed with statistical analysis. They're really concerned about significance levels and statistical testing. And most physicists are not, because in the case of a lot of experiments in physics, the standard errors around an estimate are so tiny that there's just no point in doing a statistical test.
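To make the contrast concrete, here is a minimal sketch in Python, with made-up numbers rather than anything from the book or the study Julia cites: the same underlying linear relationship is measured once with tiny noise (lab-physics-like) and once with large noise (field-biology- or psychology-like), and the share of explained variation, R-squared, the squared correlation coefficient, falls accordingly.

```python
# Toy illustration (assumed numbers, not data from the book): the same linear
# signal y = 2x measured with small vs. large noise, and the resulting share
# of "explained" variation (R^2, the squared correlation coefficient).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)

def explained_variation(noise_sd):
    """Simulate y = 2x + noise and return the squared correlation with x."""
    y = 2.0 * x + rng.normal(0.0, noise_sd, size=x.size)
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

print("tight, lab-style measurement:  R^2 =", round(explained_variation(0.5), 2))   # about 0.99
print("noisy, field-style measurement: R^2 =", round(explained_variation(10.0), 2))  # roughly 0.2-0.3
```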

[00:13:03]

Right. You see the line, it's going in that direction, that's it. If you see a line going in whatever direction in the social sciences or in biology, you have to be worried about how large the standard error around that line is, because it may look like a line, but it may actually be flat. Now, the question again is, well, why? And I think that the answer is twofold. One, again, we have much more complex, and therefore inherently variable, systems in biology and the social sciences.

[00:13:30]

But also, physicists have much stricter control over the experimental design. When you control the conditions in physics so tightly, you essentially eliminate all of the extraneous variance, all of the extraneous variation in whatever phenomenon you're studying, and you get highly significant results, the kind of results for which you don't really need to bother with statistical significance. There is a price, however, for that. And the price to be paid is that when physics does move outside of the lab. Instead of thinking about quantum mechanics, let's think about non-equilibrium thermodynamics, for instance, which underlies atmospheric physics, which quickly brings us to things like climate change.

[00:14:11]

Well, that's where the phenomena become much more complicated. The control over the experimental conditions is much lower. And now physics is in about the same situation as biology and the social sciences: the standard errors become huge and the reliability of the estimates goes down dramatically. You guys must feel a little bit of schadenfreude at that. Just a little bit of it. So we've talked about hard and soft sciences. Let's talk about borderline sciences. You have a great chapter looking at three areas of research that are sort of hovering on the borderlands between science and non-science.

[00:14:49]

And you talk about why, the three being string theory, the search for extraterrestrial intelligence, and evolutionary psychology, which you mentioned earlier. So let's talk about evolutionary psychology. One of the things that makes all three of these areas of research questionably scientific is their lack of falsifiability. The search for extraterrestrial intelligence may never find evidence of intelligent aliens, but that's not a falsification of the idea that there might be some; you know, it could just be that they don't want to contact us or haven't been able to contact us.

[00:15:27]

We don't know. For string theory, the lack of confirmation is not taken as evidence that it's not true, just that we haven't found confirmation yet. And then also for evolutionary psychology, there are difficulties in testing theories about why our brains are configured the way they are, because, I suppose, there's just one human species. And as you were saying earlier, the theories are historically based, so we can't really rerun evolution.

[00:16:02]

Right. And so, again, it's difficult to falsify the theory.

[00:16:05]

So let's talk for a minute about the rerunning of the tape of life. For a long time, Stephen Jay Gould and Richard Dawkins argued against each other about what would happen if you reran the tape of life. Gould's position was that if you rerun the tape of life from the origin of life on Earth, you might get something completely different from what we have now. For all we know, you might get a planet inhabited entirely by bacteria that never progressed to multicellular life.

[00:16:33]

So he thinks that chance played a big role. Correct. Right. Now Dawkins, on the other hand, in the British tradition of evolutionary biology, favors natural selection, deterministic forces in evolution as opposed to stochastic forces in evolution. So, according to Dawkins, if you could rerun the tape of life, what you get is something about the same as what we already got. You might not have Homo sapiens, but you would have a highly intelligent bipedal species with a big brain and so on and so forth, because that's the kind of thing that natural selection favors.

[00:17:04]

And my question to both of them has always been: how the hell do you know? Because, of course, there is no way to rerun the tape of life. It is an interesting thought experiment, which tells you, however, mostly about the individual scientist's preferences in terms of stochastic versus deterministic events. That doesn't mean you can't make reasonable guesses, or at least you couldn't until recently. As it turns out, in a limited manner, over the last few years biologists have figured out a way to rerun the tape of life, unfortunately not from the beginning, only in a limited way.

[00:17:39]

What people like my colleague Richard Lenski at Michigan State University do is start with, say, a large number of bacterial colonies. Richard works on E. coli. They're all genetically identical, and then they evolve them under a variety of conditions for literally tens of thousands or hundreds of thousands of generations, because, you know, for E. coli it only takes a few months to get hundreds of thousands of generations anyway.

[00:18:07]

And you can rerun the tape of life, meaning that you can restart the experiment at any time with the exact same genetic background and see what happens the second time around. Or you can run it in parallel a large number of times and see what happens. And the result, perhaps not surprisingly, is that both Dawkins and Gould had a point, meaning that depending on how variable the experimental conditions are, stochastic events become more or less important than natural selection. So if the environment stays constant, there's not a lot of variation and the conditions are highly controlled.

[00:18:42]

Then it turns out that the Dawkins type of outcome is in fact what results: natural selection just straightforwardly evolves better adapted populations of E. coli, and it does so pretty much in the same way, with the same outcome, regardless of the starting point. The details are different; the kinds of mutations that lead to higher fitness in those E. coli will be different, but the final result is about the same. On the other hand, if you start modifying the environment and introducing environmental variability or environmental complexity into the mix, now you get a lot more stochasticity.

[00:19:13]

Now you find that natural selection sometimes does work things out, sometimes it gets stuck in suboptimal situations, and so on and so forth. So you can actually get it both ways. Interesting.

[00:19:23]

So I wasn't aware of that when I was reading the book. But you did talk about possible ways of, not testing theories conclusively, but at least getting sort of suggestive evidence for or against a theory without doing experiments at all, which is one of the parts of the book I thought was so fascinating. So you talked about natural experiments. Right. And in biology, that would take the form, I guess, of looking at cases of convergent evolution, so you can look at species that are genetically very different from each other, in different but similar environments, and see if they evolved similar phenotypes, similar life strategies.

[00:19:59]

That's right. That's correct. And in fact, that is testing the existing theory, and ecologists are particularly good at it. I want to go back, by the way, to evolutionary psychology in a minute; you raised the question, but I'm holding that thread. Ecologists are particularly good at taking advantage of natural experiments. So, for instance, after volcanic eruptions, where the fauna and flora of a small island is completely erased because the volcano just takes over the whole thing.

[00:20:27]

The lava covers everything. Those are ideal natural laboratories for ecologists to go and see what happens, which species colonize the newly vacated environment immediately after everything has been wiped out. And the question is, if you have occasion to repeat that several times, just as a natural experiment, then you can take a look at whether there are what ecologists call assembly rules for communities, for biological communities. So do things always happen in the same way?

[00:20:56]

Is there always a certain kind of organism that comes in first, followed by another one, and so on, in a particular sequence? Or is it a matter of chance, where whoever gets there first has an advantage and then determines the ecological evolution of the system? And again, the outcome is often somewhere in between, meaning that not just any species will do as a beginner in a new environment, because you have to have, for instance, species that start working the soil, that produce the soil, because basically you have to turn the volcanic ash into biologically fertile soil, and so on.

[00:21:32]

Only certain species can do that. But within that ecological category, a number of different species can play that role. So, again, it's a matter of a mix between contingency and determinism.

[00:21:44]

Right. So the more similarity that you find in cases like that, and the more examples of convergent evolution that you find, the more we tend to believe that evolution would have played itself out the same way if we were to rerun it. Correct. So, evolutionary psychology. Right.

[00:21:59]

So the problem with evolutionary psychology is not a matter of principle. It's a matter of the specific peculiarities of studying the evolution of behavior in human beings. We should probably step back for a second. Generally speaking, evolutionary psychology is the idea that basic evolutionary principles can explain human behavior, and therefore that human behavior evolved, number one, and, at least to some extent, evolved by natural selection. Now, that idea is entirely uncontroversial, except among creationists, which of course is about half of the US population.

[00:22:34]

But never mind that; among biologists, that's not controversial at all. The devil, as they say, is in the details, because the question is, well, right, but now how do I go beyond the general idea that human behavior, just like any other animal behavior, evolved, some of it probably by natural selection, to specific, testable hypotheses about specific behaviors? The problem is there, and the specific behaviors usually have something to do with sex. For some reason, evolutionary psychologists are just fixated on sex. Those labs get lonely, I guess.

[00:23:09]

Yeah, although of course one could come up with an evolutionary psychological explanation for why they're fixated on sex. But, you know, for instance, a famous book that came out a few years ago by an evolutionary psychologist and an anthropologist, Thornhill and Palmer are the authors of the book, asked whether rape in human beings is what biologists call a secondary sexual strategy. That is, if a male doesn't get access to a female the usual way, by buying dinner at a restaurant and getting a fancy sports car and so on and so forth, the typical kind of things you would do in the Pleistocene to attract a female.

[00:23:51]

If you fail to do that, perhaps coercion is your best chance, and perhaps natural selection sort of evolved that sort of behavior. Now, that's interesting.

[00:24:01]

I mean, the authors in question were, of course, immediately aware, to their credit, of the sort of social consequences of even raising that kind of question. Right. And they are certainly not (I talked to both authors several years ago when the book came out), they are certainly not about to suggest anything like a naturalistic fallacy, that since it happens naturally, therefore it's morally acceptable, or anything like that. That's not what we're talking about. But the question is, well, how do you test that sort of hypothesis?

[00:24:30]

Well, you can test it in some animal species. So their best example is water striders, little insects that float around little ponds. Now, water striders also engage in, quote unquote, rape, or, biologically speaking, sexual coercion of the female on the part of the male. And so you can, in fact, study that, because several species of water striders engage in the same kind of behaviour. We have a large number of species that we can compare.

[00:25:00]

Some of them do and some of them don't. We know how they are genetically related to each other, and you know what the family tree of water striders basically is. And so you can test hypotheses about what kind of environments favor the evolution of that sort of behaviour. But notice one thing about water striders. First of all, water striders are very far, evolutionarily speaking, from human beings. So, you know, you need to take it with a grain of salt, in fact a large grain of salt, when you compare insects with human beings.

[00:25:30]

But that said, the fact is, first of all, in water striders it's not a facultative strategy, it's the main strategy; it always works that way. And as it turns out, biologists figured out why that happens, and it's really not right to characterize it as rape. What it is, is a test that the female puts the male through. The female in water striders is much larger than the male, and she can easily shake the male off her back.

[00:26:02]

And so the idea is that females just hang around in certain parts of the pond, and the males try to attach to them. And the female just shakes her body a little bit and throws the male off, until she finds a male that is strong enough to hang on. The idea being that, well, if this guy is strong enough to hang on for a few seconds at least, perhaps his genes are good enough for my progeny. So it's really a way for the female to sort of screen out the males, which, of course, is a completely different situation from what we're thinking about in terms of humans.

[00:26:35]

I'd heard a theory about why it might be adaptive for human females to not resist rape too strongly: that, like, maybe this wouldn't have been the ideal mate that they would have chosen if they were not being coerced, but if he is strong enough to overcome the resistance, then... Right. Then those are good genes, so to speak. All right. Well, yeah.

[00:26:56]

Now the question is more, how do you test that? I mean, that's possible, right? Logically, there's nothing incoherent about that idea. So it's a matter of testability. And that's where we come to the real problem with evolutionary psychology. First of all, we should make a distinction: evolutionary psychology can be done in different ways. The part that is problematic, and it's the one that I'm going to focus on for the next couple of minutes, is when evolutionary psychologists want to study things that are human universals and uniquely human.

[00:27:28]

So traits that essentially, as they put it, define human nature: a trait that is only human and is universally found among humans. For instance, religion: not in the sense that every single human being is religious, but in the sense that every human group that we know of, every human population we know of, has the trait. So they're interested largely, not exclusively, in those traits. And a point that I made with my colleague Jonathan Kaplan a few years ago, in a paper that we published in Philosophy of Science, was that that's exactly the worst possible case study for human evolutionary psychology.

[00:28:07]

And the reason for that is this: essentially, you're confining yourself to a data point of one. You have no comparisons. Now, right there, that's problematic, because a lot of evolutionary hypotheses are tested by comparative analysis. You have to have variation across a large number of species to determine under what ecological conditions certain traits evolve and under what ecological conditions they don't evolve. In the case of the traits we're talking about, we have a sample size of one. There's only one human species. There used to be several, but we probably clobbered them to death over a period of hundreds of thousands of years.

[00:28:45]

So there's only one left. The other strategies that we have available are to look at the fossil record. Well, good luck looking at the fossil record of behaviors like the one we're talking about. I mean, there are some human behaviors that do leave a fossil record. So, for instance, you know, for a long time anthropologists and biologists were discussing whether humans evolved a large brain first or the ability to walk erect. Well, now we know the answer.

[00:29:09]

We found fossils of Australopithecus that were clearly walking erect, they were clearly bipedal, but they had a small brain. So that's the answer. So the fossils do occasionally give good answers to those kinds of questions, but not to questions of subtle nuances like rape, if you want to consider that a subtle nuance. That's difficult, because you can't determine that from the fossil record.

[00:29:30]

The only thing is that phrenology turned out not to be true; that would have been so convenient. We could have just checked the skulls. Yes, indeed. And the last way you can do it, the way evolutionary biologists normally test their hypotheses, is if you can compare close relatives. So if there's not enough variation within a species, you can compare a number of close relatives. Human beings don't have close relatives. We only have the two species of chimpanzees and the gorillas.

[00:29:59]

They are very distantly related to us from an evolutionary perspective, we're talking about millions of years now, not hundreds of thousands, and they're very different. The closest species to us are the two species of chimpanzees, the standard chimpanzee and the bonobo. And the problem is that they have radically different behaviors, and they're equally distant from us. So depending on which species you pick, you can show that, for instance, human beings are a very aggressive species, just like the chimpanzees, or that they want sex every minute, just like the bonobos.

[00:30:31]

And so you just can't do it, because there's simply not enough information. There is a way out of this for evolutionary psychology, which is what has actually happened over the last several years: to focus on those traits that are variable within human populations. Variable, yes. If you can do that, then you can apply the principles of population genetics and look at why it is that some human populations may have a trait and others don't.

[00:30:57]

For instance, if you're talking about non-behavioral traits, you know, the genes that cause sickle cell anemia in certain African populations, we can study those. Those are variable within human populations. There is a clear association: if you have that gene, you get sickle cell anemia, but it also provides a defense against malaria. So the gene is, in fact, found in high frequency only in areas where there is malaria. So if we study the equivalent type of human behavior, behavior that is variable within the human species, then you can use the tools of population genetics to test hypotheses. Or you can go the other way and expand to behaviors that are, in fact, common across a large number of species, just like Darwin did.

[00:31:41]

Darwin wrote an entire book on the evolution of emotions in animals, but he focused, smartly, because he was a smart guy, on things that are in fact common to a large number of animals, like fear. A fearful reaction is in fact common among mammals, so you can actually study a large number of species and compare what happens in a fairly large sample size. So those are the two ways in which evolutionary psychology can, in fact, work and move definitively away from pseudoscience.

[00:32:10]

Interesting. OK, so we've talked about hard science, soft science, almost-science. Now let's go to pseudoscience. So you have a great case study in the book of a lab at Princeton, the PEAR lab, which was recently shut down. But while they were operating, they were testing various paranormal theories of ESP and precognition. And there are a number of studies that they published with large sample sizes in which they found that people could affect the output of a random number generator just with their minds, at a rate slightly better than chance.

[00:32:56]

But this was a small effect, though it was statistically significant. And it's an interesting case study, and especially timely now, because of the recent publication in the most prestigious psychology journal, the Journal of Personality and Social Psychology. Last month they published this study, a collection of studies, actually, that purported to show that people are able to predict at a slightly better rate than chance what the images are

[00:33:26]

that they're going to be shown. The journal is prestigious.

[00:33:30]

And its level just went down after the publication of that paper. Yeah. So, I mean, the journal did issue caveats and said, you know, we just think this is worth looking into, we're not endorsing it, but nevertheless, they did publish it. And so there are a couple of interesting questions with both of these cases, which you raise in the book, one of which is: why would we find a significant effect size in studies like this?

[00:33:57]

Is there any way to explain this other than, I guess, we have precognition? And also, what should our sort of collective reaction be when someone presents us with a study that has a statistically significant result and seems to have sound methodology, and yet contradicts everything that we know about science up to this point? Yeah, before we address that, I noticed that you're being uncharacteristically shy today. You're shying away from the main reason the new study was interesting.

[00:34:27]

It turned out that people were able to predict the next image that's coming up, what's being presented randomly by the computer, not if the image was neutral, but if it had erotic content. Right.

[00:34:39]

Well, we'd already covered water strider rape, and so I just didn't want it to come off wrong. We're going to start connecting the dots of this episode. So, there you go. This is going to be a big number of downloads. OK, so there are two questions. I want to start from the second one, actually, which is, you know, should these things be published or not. And I think the answer is yes.

[00:35:07]

If the study was submitted to a legitimate peer-reviewed journal, and it was reviewed by a certain number of people whose expertise is pertinent to the claims that are made, it should be published, no matter what the findings are. I mean, that's the way peer review and science in general are supposed to work, as long as we understand that the peer review process in science doesn't stop there. It's just the beginning.

[00:35:36]

So the way the thing works is that, as an editor (you know, I'm an editor, for instance, of a journal called Philosophy and Theory in Biology), whenever we get a submission, the first thing I do as an editor is I look at it. And if it's from a crackpot, I just reject it out of hand. I consult with a couple of other editors; I never make the decision on my own. But, you know, from time to time we get submissions like, I have a new theory about life, the universe and everything.

[00:36:02]

And of course, you go through the paper, and it doesn't say anything; it's a bunch of nonsense. So you reject it out of hand. That is the editor's prerogative. That's the first line of defense in peer review. If the paper looks legitimate, and it's well written, it's understandable, and it's based on whatever data or arguments, depending on whether it's science or philosophy, then you send it out for review.

[00:36:25]

Now, that means that you have two, three, sometimes more people in the field who look at the paper and give comments, often anonymous, not always, back to the editor, and the editor makes a decision whether to publish or not, depending on the peer reviews. But it doesn't stop there. You know, a lot of pseudoscience enthusiasts, and particularly, for instance, intelligent design proponents, seem to think, or want the public to think, that that's the end of the story.

[00:36:54]

Once it gets into an actual peer-reviewed journal, it's absolute truth with a capital T, and it should never be questioned again. On the contrary, that's where the really interesting stuff gets started. For one thing, you have to understand that most papers published in the scientific literature are never, ever cited, period. They just disappear into limbo. You can think of that as a sort of natural peer review. People were simply not interested enough.

[00:37:23]

You know, it got out, but nobody really paid attention. And so a lot of these things are never, in fact, cited again. As for the ones that are cited, because they did rise to the level of being interesting to the relevant scientific community, the first thing people do, especially if it is an important finding, and the ones we're talking about, you know, discoveries like these in the first case and the discovery of precognition in the second case, those are big deals.

[00:37:54]

This is not like, oh, well, here's another study with a bunch of psychology undergraduates who, you know, we ask silly questions about what they think about sex, just to pick an example. Really.

[00:38:08]

And, you know, that's been done hundreds of times, and who cares, really? But what about the discovery of psychokinesis or precognition? Then people will jump on it, and they will immediately try to replicate the results. And that's where the really interesting stuff comes up. This happens over and over again. I mean, a few years ago, you may remember the brouhaha with cold fusion, this idea that you could get nuclear fusion on a desktop at low temperatures, which would have produced an essentially infinite amount of energy at very, very cheap prices.

[00:38:45]

Well, that was a big deal. That paper did get published in a major journal. It was peer reviewed. And then, because it was a big deal, people immediately tried to replicate it. And what happened? They couldn't replicate it. So they asked the original authors for specific information on the experimental protocol. They got it. They tried to replicate it. They couldn't. They went back to the original lab. They couldn't replicate it.

[00:39:06]

Now, cold fusion is essentially dead, except for a small number of people who keep having cold fusion conferences every year, which means that cold fusion has moved from potential science into pseudoscience. The same thing happens here. Now, in those two cases you mentioned, the problem is the same, and that goes back to your first question. The problem, from what I understand, with both studies is in the statistical analysis. First of all, from what I understand, the new study, the one about having precognition about erotic images.

[00:39:40]

The problem there is that none of the peer reviewers actually had a background in statistics. They were actually psychologists, psychology researchers, but none of them had a background in statistics, which is a problem. It's kind of a peculiar thing for an editor to see a paper that is obviously heavily relying on a certain statistical analysis and not send it out to at least one, possibly more, people who actually know what the statistics are about.

[00:40:06]

But apparently, from what I read of the criticisms that have already been published about that particular paper (and the same goes for the findings of the PEAR lab at Princeton), the problem is with the statistical analysis. Now, we don't want to get too technical here, but the basic point is that there are two major ways of doing statistical analysis. One is the standard classical approach, which is called frequentism. This is where you get the statistical

[00:40:32]

significance that we were talking about earlier. This is an approach that essentially gives you a number for a particular test, and that number tells you what the chances are that that result, or a more extreme result, is the outcome of a chance event as opposed to an actual systematic difference. Or: if there is no effect, what are the chances that we would get results this extreme or more extreme? Correct. Now, the other approach is called Bayesian analysis, and it works in a completely different way.
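In symbols, and just as a sketch rather than anything quoted from the book, the frequentist number being described here, the p-value, is the probability, computed assuming the null hypothesis of no effect, of a test statistic at least as extreme as the one actually observed:

```latex
% p-value: the probability, under the null hypothesis H_0 (no effect),
% of a test statistic T at least as extreme as the observed value t_obs
p = \Pr\left( T \ge t_{\mathrm{obs}} \mid H_0 \right)
```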

[00:41:02]

We don't have time to get into it, but it's based on a very well accepted theorem published by Thomas Bayes a couple of centuries ago. And more and more, the Bayesian approach is the one that is being used in the social sciences and in biology. And the reason for that is because it handles much better precisely situations like these, where you have huge sample sizes and there is a pretty good chance you're going to find spurious results. You're going to find spurious statistical significance just because you looked at this incredibly large number of cases.

[00:41:39]

Now, is it really spurious, though? Or does it mean that there actually is a very, very slight effect, it's just not practically significant?

[00:41:46]

Well, no, it's actually spurious in this case. The way I used to teach this to my students when I was a practicing biologist was this: you can simulate the experimental design in a computer by using a random number generator. Right. And if, instead of simulating an experiment of the typical size of a biological experiment, which is pretty small, you actually simulate millions of data points, which is what happened with the psychokinesis experiments...

[00:42:16]

You will find significant results some of the time, even though you're actually using a random number generator and there's nothing going on there. You know there's nothing going on there, because you're generating the numbers through a computer randomly. But you will find the occasional positive result. And the reason for that is because frequentist statistics actually tends to give you these biased results when the sample sizes are so large, which is why, every time you see these results about parapsychological phenomena, you will notice that they're always based either on very large sample sizes or on very, very small effect sizes.
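Here is a minimal sketch of the classroom simulation Massimo describes, with the design details (two groups, the group sizes, the number of repeats) assumed rather than taken from the episode: pure random numbers are run through a standard frequentist t-test many times, roughly five percent of the runs come out "significant" anyway, and the "effects" behind them are vanishingly small, the kind only an enormous sample could flag.

```python
# Sketch of the demo described above: repeatedly compare two groups of pure
# noise with a frequentist t-test. About 5% of runs come out "significant"
# even though nothing is going on, and the "effects" found are tiny.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_runs, n_per_group = 200, 100_000   # many repeats, very large samples

spurious_effects = []
for _ in range(n_runs):
    a = rng.normal(size=n_per_group)   # "subjects" group: random numbers only
    b = rng.normal(size=n_per_group)   # "control" group: random numbers only
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        spurious_effects.append(abs(a.mean() - b.mean()))

print(f"'significant' runs: {len(spurious_effects)} out of {n_runs}")
if spurious_effects:
    # with 100,000 per group, even a ~0.01 standard-deviation difference
    # crosses the p < 0.05 threshold
    print(f"largest spurious 'effect': {max(spurious_effects):.4f} sd units")
```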

[00:42:53]

And that's because those are the two areas where frequentist statistics really begins to give you sort of unreliable results, and where Bayesian analysis is more reliable. Sure enough, if you analyze these data in a Bayesian framework, it turns out that neither one of those experiments actually had any significant results.

[00:43:10]

So it sounds like there are two problems there. One is that if you get a large enough sample size, you're going to find some slight distinction between the groups just because the sample is large enough, and the other being, if you try an experiment enough times, just by chance one of them is going to look significant.

[00:43:29]

All right. So you mentioned Bayesianism earlier, and I wanted to ask whether that framework could be helpful in analyzing some of the almost-sciences that you brought up in your book. So, like string theory and the search for extraterrestrial intelligence: you talked about them in the framework of falsification, that they can't be falsified and therefore they're not science, or that, you know, that throws some doubt on whether they could be. Right. Right. But the whole point of Bayesianism is that you sort of gradually, incrementally update the degree of belief that you have in a hypothesis as you search for more evidence and either find it or don't.

[00:44:09]

So maybe we should look at string theory and SETI in that framework and just say that the more we look for evidence, the more we don't find any. That doesn't falsify the theory, but from a Bayesian perspective, it would make us less confident. Yeah, that's an interesting way of looking at it. And it would be different in the two cases. In the case of... Well, the way Bayesian analysis works is that you start out with something that's called the prior probability of a hypothesis being correct.

[00:44:35]

This is your, well, it could be your hunch: because, you know, I think this drug is going to work for curing this disease, I think I have good reasons, based on my knowledge of the chemistry of the drug and the physiology of human beings, to think that there's a good chance that this thing is going to work. So your prior is somewhere between zero and 100 percent, but it's not zero or one. Now, this sounds very subjective.

[00:44:56]

And it is. But the way this is supposed to work is that then you start collecting data. The data will constantly update your prior, turning it into what is called a posterior probability, that is, the probability of that hypothesis being correct after you look at the new batch of data. And the idea is that this system is supposed to go on on a recursive basis, and you keep updating your priors every time new information comes in. It makes perfect sense, right?
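A minimal sketch of that recursive updating, with made-up likelihoods rather than anything from the episode: the posterior after each batch of data becomes the prior for the next, so repeated negative results steadily pull the probability down from an initial 50 percent.

```python
# Toy version of the prior-to-posterior loop described above, using Bayes'
# theorem with assumed (hypothetical) likelihoods for a single "null result".
def update(prior, p_data_if_true, p_data_if_false):
    """One Bayesian update: returns P(hypothesis | data) via Bayes' theorem."""
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1.0 - prior))

prior = 0.5  # start from "I have no idea": 50 percent
for batch in range(1, 6):
    # assume each negative search result is twice as likely if the hypothesis
    # is false (0.5) as if it is true (0.25)
    prior = update(prior, p_data_if_true=0.25, p_data_if_false=0.5)
    print(f"after negative result {batch}: probability of hypothesis = {prior:.3f}")
```

Read forward, this is roughly the shape of the SETI case discussed next: each stretch of silence nudges the probability down a little, while a single confirmed detection would push it close to one.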

[00:45:23]

I mean, this is the way we normally reason. You may start out with certain ideas about whatever the topic is, but then you have more and more information, and you update your beliefs, essentially, depending on which way the information goes. Now, this works very well, except that if we apply it to the two cases you mentioned, here's what we get. My guess would be, in the case of SETI, the priors are getting worse and worse, because the more we listen and the more we don't find anything, the more the prior probability of the existence of a technological civilization capable of communication, I would think, goes down.

[00:46:02]

Of course, all it takes is one data point in the other direction, and all of a sudden your priors go, you know, to essentially 100 percent, because now you know that there is a civilization out there. But right now, if I were to use a Bayesian approach: we've been listening to potential extraterrestrial messages since the '50s, it's been 60 years, and where the heck are they? Sure. So the priors are going down.

[00:46:23]

In the case of string theory, however, I don't think even a Bayesian approach will work at the moment, because there simply is no data that is pertinent to string theory at the moment. String theorists, of course, most recently Brian Greene, who was on the Colbert show just yesterday, as a matter of fact, keep telling us that eventually we will be able to test the theory. But right now we can't. Right now, the theory is simply predicting what every other dominant current theory in physics predicts.

[00:46:52]

So there's no new prediction; that's the problem. The data are essentially irrelevant at the moment, there is no data, and therefore I would have no way to update my priors about string theory at this time. That doesn't mean that string theory is wrong. It just means that it's whatever priors you have. Brian Greene's priors are very high; he's a string theorist. Lee Smolin is another theoretical physicist; his priors are very low. He's also worked with the theory, and he thinks that it doesn't have much of a chance of being correct at the moment.

[00:47:23]

My personal prior, at the moment, is just 50 percent. I have no idea. I like that.

[00:47:29]

The nice thing about Bayesianism is that people who start out with different priors can over time converge to the same degree of belief in a theory after looking at evidence. But the problem is you do need evidence. That's the wrench in the whole works.

[00:47:43]

I have so many other questions I want to ask you, but we're already over time. This went by really quickly. So we're going to have to wrap up this first live episode of the Rationally Speaking podcast. You all have been a great audience.

[00:47:56]

Thank you so much. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth" by Todd Rundgren, is used by permission. Thank you for listening.