[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo, and with me, as always, is my co-host, Julia Galef. Julia, what's our topic today?

[00:00:47]

Massimo, for this episode we're going to be looking at the extent to which values influence science. That's pretty broad, so let me break that down into two topics for us. First, we're going to discuss how scientists' own values affect the way they practice science, whether those values come from their culture, their race, their gender, their worldview, or anything else. And then we're also going to talk about the value judgments involved in deciding, as a society, what science is actually worth doing.

[00:01:15]

At the very least, I think we have another two or three podcasts' worth of material there. So where are we going to start?

[00:01:19]

Well, to start with the first question, of scientists' own values affecting the practice of science: a lot of the comment discussion on the podcast teaser revolved around the question of whether a more diverse body of scientists (diverse in terms of race and gender, I think those were the terms we were discussing) would be likely to reduce biases.

[00:01:40]

So I believe you were arguing that it would, and that the evidence very definitely supports that. Let's hear some of that evidence.

[00:01:48]

So a lot of historians, sociologists and philosophers of science have put together, over the last two or three decades, evidence that that is definitely the case. Now, let's be clear, first of all, about what we are not going to talk about. I am certainly not going to be defending what is sometimes referred to as the "strong program" in the sociology of science. This was based, and probably still is, at the University of Edinburgh. These are sociologists who are heavily influenced by extreme postmodernist positions.

[00:02:23]

And they maintain that essentially everything about science is socially constructed, that it's all a matter of power relations and ideologies.

[00:02:32]

Quite obviously, I don't subscribe to anything like that. In fact, the majority of historians and philosophers of science, at least the ones that I know, certainly don't subscribe to that kind of position. However, the problem is that too often we have a tendency to throw the postmodernist baby out with the bathwater, so to speak. I mean, after all, the postmodernists, or the sociologists of science, even the strong program in the sociology of science, do have an important point at their core, which is: let's not forget that science is a human activity.

[00:03:04]

And as such, it's going to be affected by all the foibles that affect every human activity. You know, scientists, just like everybody else, care about the same kinds of things as individuals. They want fame and glory, you know, and money, not necessarily in that order. And therefore they will do a certain number of things to achieve those ends. Fame is a value in science, and it is achieved if you can do certain things that are in turn also valued by scientists.

[00:03:36]

For instance, novelty: a novel discovery or piece of research is considered more important than repeating, even repeating very well, or confirming something that somebody else has found. But to go back to your original question, the idea here is that there are several instances in the history of science where it's clear that there was a bias, either a gender bias or an ethnic bias, and this was corrected at great pain, with a lot of effort, and usually only after some other group had entered the fray.

[00:04:08]

A typical example is a lot of the research on gender-based differences in intelligence. At the turn of the 19th to the 20th century, there were decades during which scientists were absolutely convinced that women were less smart than men, and they had the data to prove it. How did they get their data? Well, the data came from filling the skulls of dead people with little balls of lead, so that you could measure the cranial capacity of these individuals.

[00:04:42]

And it was accepted knowledge, literally for decades, that all of this research unquestionably showed that women's brains are smaller than men's brains. And since we all know (at least, that was the idea at the time) that small brains are associated with less intelligence, then obviously women are less intelligent.

[00:05:01]

Yes, exactly. Terrible science. It is absolutely terrible science.

[00:05:04]

But that was the science at the time. Now, what did it take to correct that? Interestingly, the first corrections started coming out when a few women, initially literally two women, got into the field and started doing the experiments, starting with doing the measurements well.

[00:05:20]

But it sounds like the measurements weren't the issue. It was the assumption that brain size was a predictor of intelligence.

[00:05:25]

No, that's one of the issues, yes, certainly. But the other issue is, in fact, that the measurements themselves were wrong, because of the researchers' expectations. So their failure was overdetermined. Exactly, exactly, overdetermined, that's a good way to put it. Thank you. So the idea is, and this has been observed in a variety of other contexts, which is why, for instance, double blinding is the gold standard in scientific research, that if you know what you're measuring, unconscious biases may come in.

[00:05:58]

And sure enough, this was definitely the case here, in the situation that I'm talking about. So these women repeated the experiments and showed several interesting things. First of all, that if you actually repeat the measurements, it turns out that the measurement error is larger than the average difference between a man's and a woman's brain size, measured with those instruments. Right. So right there, that calls the "fact" itself into question, because it turns out that the measurement error was much larger than people thought.
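The statistical point can be sketched with a small simulation. All numbers below are hypothetical, chosen only to illustrate the logic, not taken from the historical studies: when the measurement error of the instrument is comparable to or larger than the group difference being claimed, apparent gaps of that size arise from noise alone a non-trivial fraction of the time.

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only.
TRUE_MEAN = 1350.0      # cc, same true average capacity for BOTH groups
MEASUREMENT_SD = 60.0   # cc, error of the ball-filling technique
CLAIMED_GAP = 30.0      # cc, the kind of difference being reported
N = 25                  # skulls measured per group

def measure_group(n):
    """One noisy measurement per skull; both groups share the same truth."""
    return [random.gauss(TRUE_MEAN, MEASUREMENT_SD) for _ in range(n)]

gaps = []
for _ in range(1000):
    a = measure_group(N)
    b = measure_group(N)
    gaps.append(sum(a) / N - sum(b) / N)

# Fraction of purely-noise experiments whose apparent gap is at least
# as large as the claimed difference between the sexes:
false_hits = sum(abs(g) >= CLAIMED_GAP for g in gaps) / len(gaps)
print(f"apparent gaps >= {CLAIMED_GAP} cc from noise alone: {false_hits:.0%}")
```

With these made-up figures, a sizable share of experiments show the "expected" gap even though the two simulated groups are identical by construction, which is exactly why redoing the measurements carefully undermined the original claim.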

[00:06:26]

Second, of course, they found a lot of women's skulls that were significantly larger than some of the skulls of the very scientists who did that research, which clearly didn't fit the model. Which, of course, brings up the distinction that it's one thing to show average differences, and another thing to look at the variance of the distribution in the population.

[00:06:49]

On top of that, of course, as you pointed out, other researchers at some point started noticing: yes, but wait a minute, there is a correlation between body size and brain size, for instance, so that if you correct for body size, as it turns out, women's brain size is not that different at all from men's. And then, of course, there's the additional point that, in fact, there's no particular reason to believe that the sheer size of the brain is in any direct relationship with intelligence, whatever that in turn turns out to be.
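The "correct for body size" step is, in modern terms, a covariate adjustment. A minimal sketch (again with made-up numbers: a single true brain-size/body-size relationship shared by both groups) shows how a large raw gap can shrink to roughly nothing once body size is accounted for.

```python
import random

random.seed(1)

# Toy model with invented coefficients: brain size depends on body size
# plus noise; the groups differ in average body size but NOT in the
# brain-size-given-body-size relationship.
def simulate(n, body_mean):
    bodies = [random.gauss(body_mean, 8.0) for _ in range(n)]          # kg
    brains = [900 + 6.5 * b + random.gauss(0, 40.0) for b in bodies]   # cc
    return bodies, brains

men_body, men_brain = simulate(500, body_mean=78.0)
women_body, women_brain = simulate(500, body_mean=64.0)

raw_gap = sum(men_brain) / 500 - sum(women_brain) / 500

# Adjust: fit a pooled least-squares slope of brain on body, then
# compare residuals, i.e. brain size *given* body size.
xs = men_body + women_body
ys = men_brain + women_brain
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

def resid(bodies, brains):
    return [y - slope * x for x, y in zip(bodies, brains)]

adj_gap = sum(resid(men_body, men_brain)) / 500 - \
          sum(resid(women_body, women_brain)) / 500

print(f"raw gap: {raw_gap:.0f} cc, body-size-adjusted gap: {adj_gap:.0f} cc")
```

In this toy setup the raw difference is large, but the adjusted difference hovers near zero, which is the shape of the argument those later researchers were making.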

[00:07:20]

Now, similar cases, also between the end of the 19th century and well into the early part of the 20th century, had to do with ethnic differences. So immediately after Darwin's theory of evolution was accepted, the standard model was a progressive model of evolution where, you know, you get from the amoeba all the way up to man. And, of course, at the top of the pyramid was going to be the white Anglo-Saxon man. And below him, of course, was, you know, what at the time was called the Negro man, and so on and so forth.

[00:07:55]

And of course, that was just taken as a matter of scientific fact.

[00:08:00]

I mean, nobody had any inkling that this was just an issue of ideological bias until much later, when biology started becoming more varied in terms of cultural representation and started going back to revisit those issues. And now we know better. Now, I hear the typical objection to this kind of evidence, which is: well, but that was then, and now is now. Right. But we have to remember that back then,

[00:08:31]

They also thought that they got it absolutely right.

[00:08:33]

And then that was the end of the story. So there is a problem with these kinds of debates, which is that the only kind of evidence one can bring to bear on the discussion, of course, is historical evidence, because controversies that are going on right now are, by definition, not settled. So we don't know; we can't determine how much bias is affecting currently ongoing controversies. In some cases, you can make a reasonable argument.

[00:08:56]

But most of the data actually come from the historical record. And there is a tendency among people who want to defend this idealized objectivity of science to say: well, but that was really bad science, and now we know better. We have to remember that that is a psychological trap in and of itself. It amounts to assuming that the science we have now is really the good stuff, and that everything that came before was bad because it was done in a primitive way.

[00:09:22]

Well, there's no reason to think that.

[00:09:23]

Well, isn't it reasonable to think that we are getting better over time, figuring out what traps we tend to fall into? Correct.

[00:09:29]

So that means, of course, that no serious biologist today would argue that women are less intelligent than men based on putting a bunch of lead balls into craniums.

[00:09:41]

Right.

[00:09:42]

But as we heard recently from one of our guests, Cordelia Fine, they do the same kind of thing, based now on neural scans and more and more sophisticated machinery and particular approaches. The point, of course, is: what if 20 years down the road somebody comes along and does a similar study showing that, you know what, this was actually a result of bias? It turns out that the data were not accurate; it turns out that they were overinterpreted, and so on and so forth.

[00:10:08]

And we can't know that now, although, of course, Cordelia Fine actually argued that there is pretty good evidence even now that those studies are biased.

[00:10:16]

But it's the same idea, right? Absolutely. The science gets better. I mean, there's no argument, certainly not on my part, that the science gets better, but the biases are always there.

[00:10:27]

And if there is a persistent bias, and gender biases and ethnic biases typically do tend to be persistent over long periods of time, unfortunately, then there are always new ways to reintroduce it. So I guess I'm wondering how general this problem of racial and gender biases is.

[00:10:44]

So, I mean, I don't have much trouble believing that a scientist's race or gender biases his or her research when that research directly touches on race or gender issues. But is it going to bias other kinds of research? Like, do we have any reason to suspect that scientists of one race or gender would systematically reach different conclusions about physics or chemistry or cosmology?

[00:11:09]

No. No, I don't think so. Those biases tend to be specific to a particular topic, but that doesn't mean that other choices are value-free. Right. So, for instance, let's take this famous discussion that you may remember going on a number of years ago in the United States about building the Large Hadron Collider... no, the one that never got built, where we actually got to the point of digging this really, really large and very costly hole in the middle of Texas: the superconducting supercollider.

[00:11:42]

That's right, the Large Hadron Collider is the one actually built in Europe; this was the superconducting supercollider. Now, I remember very clearly the hearings in the Senate where Steven Weinberg, the Nobel physicist, was arguing with, I forget who, the senator in charge of the hearings. And it was this really interesting discussion about, essentially, values, because nobody was arguing that the science was going to be bad, or that this was somehow an unsound scientific idea. It was a matter of values, in particular societal values conflicting with scientific values.

[00:12:16]

Right. So the senator's point was: well, why should we spend X billions of dollars to satisfy the curiosity of a small number of physicists about, frankly, pretty arcane matters that are probably not going to matter to the rest of society? Now, the exchange became interesting because Steven Weinberg is a very fine scientist and has a very good sense of humor. So at some point the senator said, you know, Professor Weinberg, my problem is that, the last time I checked, my constituents don't eat quarks.

[00:12:49]

And of course, Weinberg made the remark that arguably, in part, cost him the hearings, but he probably couldn't help himself, and said: well, actually, Senator, by my reckoning you had a few billion quarks for breakfast this morning.

[00:13:02]

Which citizens don't appreciate. Exactly, exactly, which probably was the wrong move to make in that context.

[00:13:11]

But the point is, it is a serious question. Right.

[00:13:14]

So at the time, as a scientist, and I'm not a physicist, but as a scientist, I was clearly baffled by the idea that a senator who probably knows nothing about science was there questioning the choices of a group of people who knew exactly what they were doing and why they were doing it. But in fact, if you zoom out to the level of society at large, there is a good question: well, why should we be spending billions of dollars on that project as opposed to, I don't know, health care or anything else that has a more direct impact on people? I'm not saying that we should do that.

[00:13:50]

I'm saying it's a relevant question. And it isn't the kind of question that scientists can answer by saying: well, the pursuit of knowledge is intrinsically interesting, and it's worth whatever it costs. Well, that's a value.

[00:14:04]

So this is getting us already into our second topic, of the value judgments about what science is worth doing, collectively, and I'm sure we'll go back and forth.

[00:14:12]

You know, I just wanted to say one more thing about individual scientists' biases affecting the practice of science, because we had talked about racial and gender biases, but I wanted, for clarity's sake, to note that there are plenty of other kinds of bias that aren't racial or gender-based.

[00:14:29]

So everyone, to some degree, has experienced some kind of bias in favor of confirming a theory that, for example, they've publicly endorsed and therefore have some sort of personal reputation staked on. Then there's also the bias toward finding whatever the people who are funding you want you to find, which, as I believe we discussed in our live Q&A podcast episode, might go part of the way towards explaining why medical research funded by pharmaceutical companies is more likely to find that the drug being tested is, in fact, effective.

[00:15:01]

And then there are also biases stemming from more intangible, less classifiable worldviews that affect which theories you think are intuitively more plausible or worth investigating. And I found a neat example of this. The astronomer Kepler was convinced that the universe was organized in a harmonious and elegant way, and this was partly a result of his religious belief: he felt that this was the kind of universe that God would create.

[00:15:30]

So he spent years trying to confirm his theory that the orbits of the known planets in the solar system would each fit inside one of the five perfect solids. Those are the three-dimensional figures that have identical faces, like the cube and the tetrahedron and so on.

[00:15:43]

And he tried in vain for years to make his astronomical observations fit with this theory. And the reason he stuck with it so long and so determinedly is just that he had this deep-seated bias in favor of a harmonious, geometrically organized universe. Actually, there is a second story about Kepler that also deals with biases, also of a metaphysical nature.

[00:16:04]

So Kepler is the guy, of course, who figured out what was wrong with the Copernican system. Right. Copernicus made the "mistake," if you want to put it that way, that once he realized, correctly, that it's not the earth but the sun that is, more or less, at the center of the solar system... Yeah. ...he was still stuck with the original idea of circular orbits for the planets.

[00:16:22]

Right. And it took Kepler many, many years to finally figure out that the orbits of the planets were actually elliptical. And the reason for that is that he had a strong metaphysical bias in favor of perfect geometrical figures, as you just pointed out.

[00:16:38]

And of course, circles are perfect geometrical figures, while ellipses, by whatever measure of perfection you want to apply to those kinds of things, are not perfect. And so he just couldn't believe that the planetary orbits could be anything other than circular. It literally took him decades to figure out how that worked, until he actually did have the breakthrough. And that's why we have Kepler's laws, which eventually led the way to the work of Galileo and Newton.

[00:17:06]

So biases are everywhere, and most of them are unconscious. You know, very few sociologists or philosophers of science are actually accusing scientists of doing things on purpose, although, of course, there is the occasional fraud, and that does happen.

[00:17:23]

But that's a different issue, I think, which is much easier to understand, although not for the scientists confronted with it, as in the case, for instance, of the infamous paper allegedly connecting vaccines with autism, which was in fact retracted, finally, years later, because the principal author had committed fraud, and which is causing countless issues as we speak in terms of public health, because so many people are seriously thinking of withholding vaccinations from their children.

[00:18:00]

Now, a lot of scientists I talk to tend to think that fraud is a very rare occurrence in science. But it turns out that there are numerous articles that have come out in recent years in both Nature and Science, which are the leading magazines that not only publish scientific research but also commentary about science itself, alleging that fraud is actually much more widespread, much more of a problem (I don't have the actual figures in front of me at the moment) than most scientists realize.

[00:18:31]

And there are reasons for that, reasons that are very tightly linked to some of the intrinsic values of science. For instance, the value that is placed on competition: getting research grants, getting certain positions, and so on; you know, beating other people at the game, whatever the game is that one is playing. All of those things are powerful incentives for fraud, human beings being what they are, just as in any other activity.

[00:19:01]

We do find a significant number of cases, especially now that science is big business and is done by thousands of laboratories across the world with very different national standards. For those reasons, the incidence of fraud is allegedly going up significantly. So there is a problem there, too.

[00:19:20]

Yeah, I actually know someone who works in a science lab. I'm not even going to specify what kind of science, to further anonymize the case.

[00:19:30]

And she has witnessed, or is pretty much certain of, some amount of fraud going on on the part of one of the researchers. And, I mean, obviously there's the problem that, you know, she personally could be blacklisted as a troublemaker if she brings this to the higher-ups or to public attention.

[00:19:49]

But beyond that, there's this systemic, structural problem that I realized when talking to her about this issue, which comes from the interconnectedness of science: you have so many people working on the same research paper, and you have other research papers that are dependent on results from previous research papers.

[00:20:07]

So if you reveal fraud in one earlier research paper, you're not just taking down the person who committed the fraud. You're taking down the careers of all the other people who worked with him, or whose work was dependent on his work. And so it just complicates the ethical calculus.

[00:20:25]

And the same ethical calculus comes in at a less dramatic but unfortunately much more widespread level. So suppose we're not talking about actual outright fraud, but, let's say, a little bit of massaging of the scientific output. For instance, again, one of the major values in science, of course, is competition for grants and positions, which is in fact largely based on the number of publications you get out. Of course, it's more complicated than just the number.

[00:20:53]

It depends also on where you publish your work, what the perceived impact of your work is, and so on and so forth. But believe me, having been on plenty of hiring committees in several science departments, the sheer number of publications is a very important, not necessarily overwhelming, but very important, criterion. Which means that there is a strong incentive to artificially multiply the number of publications. So I have seen plenty of CVs where most publications represent so-called least publishable units; that is, people break down their work into tiny little chunks and publish as many of those chunks as they can.

[00:21:31]

The result of that is, of course, an inflation of papers that overwhelms the scientific literature. So, for instance, research was done a few years ago on the citations of papers in biology, where it turned out that a full two thirds of papers published in biological journals are never cited even once. Wow.

[00:21:58]

And that's because there's, frankly, an overwhelming amount of garbage which is simply clogging the system. So we have these people who keep publishing more and more and more, because there is this internal value of the publish-or-perish ideal. And that has developed over a very short period of time, because if you actually look at, say, pre-World War II science, typically when you finished your dissertation, you published a book.

[00:22:26]

You were not publishing four, five, six, seven different papers. You were publishing a book. There was one publication in your name. It was, you know, a significant publication, with a lot of detail, a lot of stuff going into that book. And that was your passport to an academic career.

[00:22:43]

But after World War II, once modern-day big science started, which can essentially be traced back to the Manhattan Project, the first big-science project ever, big science, especially in the United States, has been funded to a significant degree by the public purse. Which means that you have a flourishing, which is a positive thing, of universities and university positions and so on and so forth.

[00:23:12]

But you also have these side effects, which threaten to overwhelm the system. The other thing that is threatening to overwhelm the system is the number of students that we turn out. Right. And again, it's the same idea. The value that is embedded in the career of a university faculty member is that the more academic offspring you have, the better, for a variety of reasons. First of all, because you collaborate with your offspring, so that's more papers and more grants.

[00:23:38]

You also have more influence in the community. A lot of scientific communities, even today, are actually still fairly small. I mean, I spent most of my career working in what biologists call gene-environment interactions, nature-and-nurture issues, and I think I pretty much knew almost everybody that was working there, a few hundred people.

[00:23:57]

It's not that big of a field; I certainly knew all the major players. So it's a field that is still highly inbred, which means that the more offspring you produce, the more you are able to dominate the field. Now, what's the result of that? That we produce a lot of "kids" who are not going to find jobs, or who are going to find suboptimal jobs. We now have a situation where a lot of excellent PhDs are working at community colleges, for instance.

[00:24:22]

Right.

[00:24:23]

Or as adjuncts or lecturers, off the tenure track: an increasing number of adjuncts and lecturers and so on, people who are, frankly, essentially exploited. They're paid very little, they usually have no benefits or very little in the way of benefits, and they have no prospect of a career. And yet they have a PhD on their CVs and a fairly high number of publications.

[00:24:45]

So all of these things are value issues that don't enter into the actual doing of science, but that do affect what kind of science we do and how we do it and so on.

[00:24:56]

OK, so let's get back to the question of what scientific research is most worth pursuing. We were talking about the case study of the superconducting supercollider, which was cancelled in '93. I believe the projected cost at that point was running to over 10 billion dollars.

[00:25:12]

Definitely, yeah. And it was going to be several times more powerful than the Large Hadron Collider. So it would have been, I guess, quite significant for physics. Absolutely.

[00:25:24]

But yeah. So I think the question of whether it's worthwhile to do research that doesn't have any immediately obvious practical benefits is a really valid one. I mean, I personally place value on knowledge for knowledge's sake, at least relative to many other things; maybe not relative to things like saving lives, but relative to things like, I don't know, building sports stadiums, or even funding for the arts.

[00:25:49]

But I do recognize that that's a personal value, that I enjoy knowledge for knowledge's sake, and I can't defend it as the objectively right way to spend our money.

[00:25:57]

And I think that's the right way to look at it. That is, I tend to share that value, as you do. I mean, I do think that knowledge for knowledge's sake, not just in science, in fact, but across fields in general, is worthwhile.

[00:26:10]

But first of all, you have to justify it when it comes into conflict with other values. Right. So it's one thing if your research costs rather little money, or if you're not in the sciences but in another discipline where, you know, very little money goes into the research. But if we're talking about hundreds of millions or billions of dollars, that is a significant chunk of community resources, and so you have to be able to make a case.

[00:26:34]

And the case is surprisingly undetailed when scientists are actually forced to make it.

[00:26:41]

I mean, you know, the joke that I mentioned earlier, by Weinberg, is very cute, but it's not an argument, obviously.

[00:26:50]

Right, but you've probably heard the arguments many times that plenty of discoveries that turned out to be really practical, to have really practical benefits, were just done for knowledge's sake, or were done with some other end in mind, and the scientists at the time had no foresight as to what they would end up being useful for. Like the people mucking around with heredity who had no idea that it would lead to medical genetics or genomics.

[00:27:15]

That's correct. But the thing that I always find interesting about that argument is that it's surprisingly anecdotal.

[00:27:23]

I mean, it's true that you can find plenty of anecdotes, but for an argument like that to be made by scientists, who should know better than to rely on anecdotal evidence, is odd. It seems like we should actually be able to do research on this.

[00:27:40]

Right. This is the kind of thing you can study. I mean, you can say: well, let's take a look at, you know, a certain number of projects funded by the National Science Foundation over the last 30 or 40 years, or whatever it is. After all, we have records going back all the way to the 40s. And let's see how many of these pieces of primary research actually led anywhere. I'm sure it's not an easy piece of research to conduct, but the data ought to be there.

[00:28:03]

And since we're talking, again, about a large amount of research, an anecdotal argument, as interesting as it is, and as undeniable as some of the anecdotes actually are, is really not particularly strong.

[00:28:18]

The other thing is, we need a comparison, a control, as it were. That is, the comparison we should be making is: how much money do we give to, say, basic research through the National Science Foundation, versus how much money do we give to, say, the NIH for applied research?

[00:28:41]

And what is the return that society gets in terms of applicable research, if that's what we're talking about, applicable scientific discoveries?

[00:28:50]

I mean, I'm betting, I don't know, I don't have the data, because I don't think anybody has actually looked systematically at these things, but I'm betting that the return you get from NIH grants is much higher in terms of applications than the return you get from NSF grants.

[00:29:03]

Does that mean that we shouldn't fund them? Absolutely not. But it does mean that the anecdotal argument, well, surely this or that discovery is going to lead somewhere, kind of loses a little bit of force.

[00:29:14]

And so perhaps that means that we should be reconsidering how, in fact, we spread out the money. When I was on NSF panels, I noticed that NSF has a tendency to fund larger and larger research projects and to give more and more money to a smaller and smaller number of laboratories.

[00:29:37]

The model is, again, the large research projects that are typical in physics, for instance, and they want to apply that model to everything, including biology. And I always argued with NSF officers that that seems to me like exactly the wrong model for basic research, because basic research is, in fact, a shotgun approach. The idea of basic research is that most of the time you don't know which way you're going to go.

[00:30:05]

We don't know which lab is going to be successful at doing what. And so, given that you have a certain amount of resources, what you want to do is spread them out as much as possible, to give as many players as possible a chance to play. If you concentrate most of the money in a very small number of laboratories, that, it seems to me, first of all biases the kind of research that gets done toward only those things that those laboratories do, and secondly actually reduces

[00:30:32]

The probabilities of success, because this is by definition basic research: it's not directed, so we don't really have a particularly good idea of where we're going.

[00:30:42]

Well, the examples that I'm thinking of are from the social sciences, but I feel like there might be a critical threshold of funding below which your research project just isn't going to be high quality enough to find a useful result.

[00:30:59]

So I'm thinking of a lot of studies that I've read in the social sciences that would find some pattern in data, or would do some relatively basic or barebones lab simulation of some phenomenon they were interested in. And in their conclusion or discussion, they would acknowledge, well, you know, it would really actually be helpful to be able to control for X, Y or Z.

[00:31:22]

But, you know, we didn't have the data on that, or it would be helpful to do a longitudinal study, but we didn't have the funding for that. And so the studies were, in my opinion, more or less worthless. Whereas if we'd taken the funding that funded, say, ten of those relatively worthless studies and funded one serious longitudinal study, one that spent the money to collect all the data needed to really get a handle on the phenomenon, that would be more useful to us than the ten studies.

[00:31:51]

But you still need to make decisions about what is worth funding and for what. Right. Yeah. And again, that comes down to a matter of values. I mean, I've met some of my colleagues who seem to think that whatever it is that they're doing, you know, studying the sexual habits of a rare species of butterfly or whatever, is self-evidently worthwhile. When you ask them, you know, why are you doing that?

[00:32:15]

It simply strikes them as self-evident. It strikes them as a bizarre question: what do you mean, you don't get it? Well, no, I don't get it, actually.

[00:32:25]

You know, why would you be spending, you know, 30 years of someone's career or, you know, thousands of hours of work and a lot of money on that particular issue?

[00:32:35]

And now please don't tell me that it's likely that you're going to find a cure for cancer by studying that particular butterfly.

[00:32:40]

Because the other thing is that a lot of justifications for scientific research, and I don't know if these are things the scientists themselves believe, or just what they say publicly so that they can justify the funding they're getting, but a lot of the justifications seem like rationalizations. Like, they'll cite a particular goal or benefit that the science could lead to, but if that were really your goal, there would probably be much better or more efficient ways to pursue it than through the scientific project that's being defended.

[00:33:04]

So, you know, the genome project, when scientists were testifying to Congress on behalf of the genome project, they cited the fact that, well, we can screen for terrible diseases that lead to short and painful life spans.

[00:33:15]

And so this will, you know, improve the overall welfare of children, because there will be fewer children suffering from these diseases and dying early.

[00:33:25]

But if improving the welfare of children were your goal, there are plenty of cheap and straightforward things that we know with certainty we could do right now to improve the welfare of children, which have to do with social and environmental factors rather than with genomics.

[00:33:40]

Absolutely.

[00:33:41]

So, you know, does that mean that the Human Genome Project wasn't worth it? I don't know. But it is worth asking the question.

[00:33:48]

And I think it is disingenuous sometimes of the scientists involved to make these grandiose statements, or to treat the question as if it were self-evident. And I can tell you from experience that there is one section in every NSF grant that every scientist I know absolutely hates, and that is the social justification section. It's a little segment, a couple of paragraphs at the end of the grant proposal, that you have to fill out, because otherwise the grant is going to be rejected out of hand, you know, ex officio, as they say.

[00:34:23]

And everybody has exactly the same paragraph that we just copy and paste over and over. And it talks about, you know, there is going to be training of undergraduates, of minorities and women, which means it's going to have a human impact, and it's going to improve public understanding of science, and a couple of other things. It's literally completely generic.

[00:34:47]

That's a template. That's right. And nobody, of course, believes anything that is in there, including probably the NSF officers themselves. But, you know, we've got to go through the motions, because after all, NSF is funded by Congress and these people have to justify it. Now, I'm sure there are plenty of other things to talk about that we may or may not get to in this case. But we should also make clear that the issue of values and science also goes the other way around.

[00:35:09]

There are some philosophers who have pointed out that there are also very positive values that come out of science and that affect society: the value of, you know, empirical investigation, the value of rigorous testing of hypotheses, the value of objectivity, as much as it can be achieved by a human enterprise, and the value of, and I'm going to use a dirty word here, communism. Some philosophers have actually described the scientific enterprise as communist in nature, meaning that the results of scientific investigation are supposed to be common knowledge.

[00:35:54]

They're supposed to be accessible by anybody, public use, public knowledge, right?

[00:35:59]

Well, all those are values, again. There is nothing intrinsic to the workings of science that necessarily requires any of those things, and it certainly doesn't require exporting those things to the rest of society. But science in that case is a positive source of values, especially in the society where we live today, where people feel free to deny facts just because they don't fit their ideological agenda.

[00:36:24]

Then it seems like, you know, the issue of values and science is too often brought up, I guess what I'm saying, only in terms of a criticism of science, which is fair enough, but we also need to remember that the osmosis goes the other way around as well.

[00:36:38]

So I just want to make sure we talk, before we wrap up, about the social consequences of some scientific research, which leads, I think, to some of the most difficult decisions about whether to pursue an avenue of research.

[00:36:52]

So, for example, Cordelia Fine, who we were discussing earlier (we had her as a recent guest), was talking about her research on the alleged cognitive differences between men and women. Not only is this a difficult claim to validate, and previous research on it has been questionable, but there are consequences to publicizing these kinds of claims.

[00:37:15]

They're really catchy.

[00:37:16]

The popular media eats them up, and they get repeated, and they become part of the conventional wisdom that, quote, science shows that men are more adept at abstract reasoning, and so on and so forth. And when people believe these claims, they actually behave differently.

[00:37:28]

So women will do worse on math tests if they're told that men are better at math. Right. So there seems to be an argument that if we're going to do research with these kinds of potentially adverse consequences, then maybe there should be a really high standard of evidence before publicizing it. That's right.

[00:37:43]

There are other fields of research that are intrinsically connected to values. So, for instance, the example that comes to mind is conservation biology. The very term conservation biology is value laden, because it means that you're studying things you want to conserve. And so you made a decision, before you even got started, that there are certain things that are valuable enough to be conserved, and that's why you're studying them. Now, that decision can be perfectly reasonable and perfectly defensible on ethical grounds.

[00:38:15]

But it is nonetheless a matter of, again, values, in this case shaping an entire field of research, an entire branch of biology. And you can see the results of it, because conservation biologists are the quintessential example of scientists who are constantly involved with public policy decisions and public policy issues. Right. Just like, of course, in recent years climate scientists have been constantly involved. I mean, all this brouhaha about, oh, should environmental scientists, you know, climate scientists, be involved in advocacy and policy advocacy and so on?

[00:38:54]

Of course they should, because they're the ones that know what's going on there, or at least they have the best idea about what is going on there. And it's really hard to say.

[00:39:03]

Well, now you have to separate your science from your values, from your ethical, you know, underpinnings, as if that were actually possible.

[00:39:12]

Yes. Of course, you have to make a distinction between the facts, so to speak, and what you would like society to do with those facts. But even facts themselves are not value free. One of the basic ideas in early philosophy of science, and this goes back essentially to Popper, one of the first criticisms of Popper's idea of falsification, was that there is no such thing as a fact, as a piece of data, in and of itself.

[00:39:41]

There's an infinite number of facts out there in the universe; you can partition reality in an infinite number of ways. The reason you consider certain things data is because you have something in mind that you want to test.

[00:39:53]

So you're not saying that there's no such thing as a true piece of evidence about the world, just that there's no such thing as an objectively relevant one.

[00:40:01]

Exactly. Exactly. It's the relevance of the data that is always assessed with a theory in mind. Actually, there's a famous quote by Darwin, who was writing to a friend. Darwin, as it turns out, was involved in the big debate about the nature of induction that was going on between John Stuart Mill and Whewell, two of the major philosophers of the nineteenth century. Darwin got involved in this debate, and in frustration he wrote to a friend of his, saying something along these lines.

[00:40:32]

Many people don't seem to realize that there is no such thing as a fact independent of a theory; a fact is always in favor of or against a theory. You pick something from the world and you consider it a fact, a piece of data, because you have an idea in mind. And so this is the idea that it is essentially impossible to extricate facts, as objectively true things, from values, including theories.

[00:40:58]

Theories are values; theories are human constructs, so they are a particular kind of value. That's not to say, as you pointed out, that therefore anything goes and people can pick stuff from the world as they like. No, that's not true, because facts and theories are interconnected in a web of knowledge. But that knowledge is not arbitrary. Nonetheless, it's a web.

[00:41:23]

I wanted to make one other argument.

[00:41:25]

I've been thinking a lot, since reading Cordelia Fine's book and having her on the show, about the consequences of scientific research and whether it's worth pursuing avenues that might have adverse consequences. And the racial differences in innate intelligence, or alleged racial differences, are sort of a notorious and controversial example of this.

[00:41:47]

And setting aside the question of whether you could ever measure innate intelligence in an objective way, let's say that you could. Let's say that somehow you could show beyond a shadow of a doubt that race A had a mean IQ that was three points higher than race B's.

[00:42:04]

Well, my sense is that society doesn't really know how to react to these claims in an epistemically reasonable way.

[00:42:13]

So, you know, there's so much variation in IQ within a given race that, even if the difference in means is three IQ points, for any given pair of people, one of race A and one of race B, there's going to be only a slightly better than even chance that the race A person has the higher IQ than the race B person. It tells you almost nothing about the expected intelligence of a race A person versus a race B person. But in my experience, people tend to overstate the importance of small differences between the means of two populations.

[00:42:41]

So if they hear race A has a higher IQ on average than race B, then from then on they tend to think that most race A people are more intelligent than most race B

[00:42:48]

people, which would obviously have consequences for a society that are not just disastrous but also unfounded and unjustified.
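[Editor's note: the arithmetic behind Julia's point here can be checked with a short calculation. Assuming, hypothetically as in the discussion, that IQ in both groups is normally distributed with the conventional standard deviation of 15, a 3-point difference in group means implies only a slightly better than even chance that a randomly chosen person from the higher-mean group outscores a randomly chosen person from the other group:]

```python
# Sanity check of the "3 IQ points" argument. Assumption (not from the
# episode): IQ in each group is normal with the conventional SD of 15.
import math

def p_first_scores_higher(mean_gap: float, sd: float) -> float:
    """Probability that a random draw from the higher-mean group exceeds
    a random draw from the other group. The difference of two independent
    normals is normal with standard deviation sd * sqrt(2)."""
    z = mean_gap / (sd * math.sqrt(2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A 3-point gap in means gives a ~55.6% chance, barely better than a
# coin flip, that the group A person outscores the group B person.
print(round(p_first_scores_higher(3, 15), 3))  # → 0.556
```

[With a gap of zero the probability is exactly 0.5, so 3 points moves the odds by only about six percentage points, which is Julia's "tells you almost nothing about individuals" point in numbers.]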

[00:42:56]

That is exactly the problem: once you make a scientific claim, even a true scientific claim, about a population, unfortunately, because of the abysmal degree of scientific literacy in society at large, or because of inherent cognitive biases that human beings have no matter what, so there's a variety of reasons for this, but you're absolutely right that people would in fact probably dramatically overestimate the difference when applied to individuals, which of course leads easily to discrimination and all sorts of things.

[00:43:28]

So that's the kind of problem that we really would rather stay away from.

[00:43:32]

So I will wrap up this segment of this episode of the podcast with a quote about scientists washing their hands of the consequences of their research. This is from a song by one of my all-time favorite songwriters, Tom Lehrer. It's a song about Wernher von Braun, who was a rocket scientist for Germany in the 1930s and 40s and developed some of their deadliest combat rockets, and then after the war came to the US to work for NASA and developed an intermediate-range ballistic missile for us.

[00:43:56]

And so the quote, an iconic quote from the song, is: "Once the rockets go up, who cares where they come down? That's not my department, says Wernher von Braun." And with that, I will wrap up this section of the podcast and we'll move on to the rationally speaking picks.

[00:44:25]

Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites or whatever tickles our irrational fancy. Let's start as usual with Julia.

[00:44:34]

Thanks, Massimo. My pick is a classic book called What Is History? by E.H. Carr.

[00:44:40]

And I thought it was particularly appropriate for this episode, because in this episode we were talking about how biases and unconscious value judgments can affect the practice of seemingly objective science, and this book is about how biases and value judgments can affect the practice of seemingly objective history.

[00:44:58]

So there are a lot of ways that this can happen. One of the most basic is simply in deciding what counts as a significant enough fact to be included in a history, because obviously there are far, far too many facts about the past

[00:45:14]

than we would want to include in a sort of coherent narrative of what happened. But there's another fun example of values affecting the explanation of history that I wanted to share with you.

[00:45:26]

So Carr quotes Gibbon, who wrote the famous history of the decline and fall of the Roman Empire.

[00:45:36]

And Gibbon was observing that after the Greek empire had dwindled to a single province, Greek historians started attributing the Roman successes not to merit but to luck, to fortune, and that this is actually a general phenomenon you see across historians of different cultures and different ages.

[00:46:00]

So the way that Carr describes this phenomenon is, quote: "In a group or nation which is riding in the trough, not on the crest, of historical events, theories that stress the role of chance or accident in history will be found to prevail." Not by chance.

[00:46:16]

Well, my pick is an article by Alison Gopnik in Slate magazine; we'll post the link on the podcast website. The article is entitled "What John Tierney Gets Wrong About Women Scientists." Now, John Tierney is a well-known columnist for The New York Times. He writes the TierneyLab column, which is often very interesting, but I have criticized him a couple of times recently. Actually, on the blog I wrote a couple of entries about his treatment of Jonathan Haidt's research on the alleged liberal bias in academia, and why it exists, and so on and so forth.

[00:46:51]

And then immediately after that, Tierney published his own article about research on women scientists, saying that they're actually not discriminated against as much as people might think, and so on and so forth. And that article smelled wrong to me, but I didn't have time to do the actual research to see what Tierney got wrong. Fortunately, Alison Gopnik did, and it turns out that Tierney completely misread the article. The article actually does suggest that there is very strong anti-women bias in science.

[00:47:23]

It's just more complicated, more subtle, and more difficult to figure out the causality of, and that's what the study was actually getting at. But the bias itself is absolutely unquestionable. The most obvious evidence is these controlled experiments where social scientists send the same exact paper to reviewers, in one case with a male name as the author, and in the other case a female name. Otherwise the article is absolutely identical, and the chances of the paper being accepted are much, much higher if there is a male author than if there is a female author, which right there, you would think, should settle the question.

[00:48:09]

But apparently it didn't, for people who don't look into that sort of thing. And then some of our readers actually pointed out to me that John Tierney has done this on a number of occasions.

[00:48:17]

So he seems to have a fetish for debunking alleged biases, or for bringing up biases that are apparently not there, depending on what the topic is, be it women in science or liberals in academia.

[00:48:33]

So that's an amusingly weird and specific fetish, but I'm sure there's an Internet group for it somewhere. And if not, John Tierney can start a Facebook page on it.

[00:48:44]

Okay, on that note, this wraps up another episode of the rationally speaking podcast.

[00:48:50]

Join us next time for more explorations on the borderlands between reason and nonsense.

[00:49:02]

The rationally speaking podcast is presented by New York City skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.