A physicist, a philosopher and a psychologist walk into a classroom.
Although it sounds like a premise for a joke, this was actually the origin of a unique collaboration between Nobel Prize-winning physicist Saul Perlmutter, philosopher John Campbell and psychologist Rob MacCoun. Spurred by what they saw as a perilously rising tide of irrationality, misinformation and sociopolitical polarization, they teamed up in 2011 to create a multidisciplinary course at the University of California, Berkeley, with the modest goal of teaching undergraduate students how to think: more specifically, how to think like a scientist. That is, they wished to show students how to use scientific tools and techniques for solving problems, making decisions and distinguishing reality from fantasy. The course proved popular, drawing enough interest to run for more than a decade (and counting) while sparking multiple spin-offs at other universities and institutions.
Now the three researchers are bringing their message to the masses with a new book, Third Millennium Thinking: Creating Sense in a World of Nonsense. And their timing is impeccable: our world seems to have only become more uncertain and complex since their course began, with cognitive biases and information overload all too easily clouding debates over high-stakes issues such as climate change, global pandemics, and the development and regulation of artificial intelligence. But one need not be an academic expert or policymaker to find value in this book's pages. From parsing the daily news to treating a medical condition, talking with opposite-minded relatives at Thanksgiving or even choosing how to vote in an election, Third Millennium Thinking offers lessons that anyone can use, individually and collectively, to make smarter, better decisions in everyday life.
Scientific American spoke with Perlmutter, Campbell and MacCoun about their work, and about whether it's wishful thinking to believe logic and evidence can save the world.
[An edited transcript of the interview follows.]
How did all of this begin, and what motivated each of you to take on such an ambitious project?
PERLMUTTER: In 2011 I was looking at our society making big decisions: "Should we raise the debt ceiling?", things like that. And surprisingly enough, we were not doing it in a very sensible way. The conversations I was hearing about these political decisions weren't like those I'd have over lunch with a bunch of scientists at the lab, not because of politics, but rather because of the style of how scientists tend to think about solving problems. And I thought, "Well, where did scientists learn this stuff? And is it possible for us to articulate what these concepts are and teach them in a way that people would apply them in their whole lives, not just in a lab? And can we empower them to think for themselves using the best available cognitive tools rather than teaching them to 'just trust scientists'?"
So that was the starting point of it. But that's not the whole story. If you put a bunch of physicists together in a faculty meeting, they don't necessarily act much more rationally than any other faculty members, right? So it was clear we really needed expertise from other fields, too, such as John's expertise in philosophy and Rob's expertise in social psychology. We actually put a little sign up looking for people who'd want to help develop the course. It said something like, "Are you embarrassed watching our society make decisions? Come help invent our course; come help save the world."
MacCOUN: When Saul approached me about the course, I was delighted to work with him. Even back in 2011 I was filled with angst about the inefficacy of policy debates; I had spent years working on two big hot-button issues: drug legalization and open military service for gay and lesbian individuals. I worked with policymakers and advocates on both sides, just trying to be an honest broker in these debates to help clarify the truth. You know, "What do we actually know, and what don't we know?" And the quality of debate for both of those issues was so bad, with so much distortion of research findings. So when Saul mentioned the course to me, I just jumped at the chance to work on this.
CAMPBELL: It was obvious to me that this was philosophically very interesting. I mean, we're talking about how science inputs into decision-making. And in decision-making, there are always questions of value, as well as questions of fact; questions about where you want to go, as well as questions about how we get there; and questions about what "the science" can answer. And it's very interesting to ask, "Can we tease apart facts and values in decision-making? Does the science have anything to tell us about values?" Well, likely not. Scientists always shy away from telling us about values. So we need to know something about how these broader concerns can be woven in with scientific results in decision-making.
Some of this is about how science is embedded in the life of a community. You take a village: you have the pub, you have the church; you know clearly what they are for and how they function in the whole community. But then the science, what is that? Is it just this kind of shimmering thing that produces telephones, TVs and stuff? How does it fit into the life of the community? How does it embed in our civilization? Classically, it's been regarded as a "high church" kind of thing. The scientists are literally in an ivory tower and do as they please. And then occasionally, they produce these gadgets, and we're not sure if we should like them or not. But we really need a more healthy, grounded conception of how science plays into our broader society.
I'm glad you brought up the distinction between facts and values. To me, that overlaps with the distinction between groups and individuals: "values" feel more personal and subjective and thus more directly applicable to a reader, in a way. And the book is ultimately about how individuals can empower themselves with so-called scientific thinking, presumably to live their best lives based on their personal values. But how does that accord with this other assertion you've just made, saying science likely doesn't have anything to tell us about values in the first place?
PERLMUTTER: Well, I think what John was getting at is this: even once we develop all these ways to think through facts, we don't want to stop thinking through values, right? One point here is that we've actually made progress together thinking about values over centuries. And we have to keep talking to each other. But it's still very helpful to separate the values and the facts because each requires a slightly different style of thinking, and you want people to be able to do both.
MacCOUN: That's right. Scientists can't tell us, and in fact shouldn't tell us, what values to hold. Scientists get in trouble when they try that. We talk in the book about "pathologies" of science that sometimes happen and how those can be driven by values-based thinking. Where science excels regarding values is in clarifying where and how they conflict, so that in public policy analysis you can inform the trade-offs and make sure the stakeholders in a debate empirically understand how its various outcomes advance certain values while impeding others. Usually what happens next is finding solutions that minimize those trade-offs and reduce the friction between conflicting values.
And let's be clear: when we talk about values, we sometimes talk as if people are either one thing or another. You know, someone may ask, "Are you for or against 'freedom'?" But in reality, everyone values freedom. It's just a question of how much, of how we differ in our rankings of such things. And we're all looking for some way to pursue more than one value at a time, and we need other people to help us get there.
PERLMUTTER: And let's remember that we're not even consistent within ourselves about our individual rankings of values, which tend to fluctuate a lot based on the situation.
I love how our discussion is now reflecting the style of the book: breezy and approachable but also unflinching in talking about complexity and uncertainty. And in it, you're trying to give readers a "tool kit" for navigating such things. That's great, yet it can be challenging for readers who might assume it's, say, a science-infused self-help book offering them a few simple rules about how to improve their rational thinking. This makes me wonder: If you did have to somehow reduce the book's message to something like a series of bullet points on a note card, what would that be? What are the most essential tools in the kit?
CAMPBELL: This may be a bit ironic, but I was reading somewhere recently that where AI programs such as ChatGPT really go wrong is in not giving sources. Most of these tools don't tell you what evidence they're using for their outputs. And you'd think, of course, we should always show what evidence we have for anything we're going to say. But really, we can't do that. Most of us can't remember the evidence for half of what we know. What we can usually recall is how likely we thought some assertion was to be true, how probable we thought it was. And keeping track of this is a worthwhile habit of mind: if you're going to act on any belief you might have, you need to know the strength with which you can hold that belief.
PERLMUTTER: We spend a fair amount of time on this in the book because it allows you to see that the world doesn't come to us with certainty in almost anything. Even when we're pretty sure of something, we're only pretty sure, and there's real utility in having a sense of the possibility for something contradicting what we think or expect. Many people do this naturally all the time, thinking about the odds for placing a bet on their favorite sports team or about the chance of a rain shower spoiling a picnic. Acknowledging uncertainty puts your ego in the right place. Your ego should, in the end, be attached to being pretty good at knowing how strong or weak your trust is in some fact rather than in being always right. Needing to always be right is a very problematic way to approach the world. In the book, we compare it to skiing down a mountain with all your weight rigid on both legs; if you don't ever shift your stance to turn and slow down, you might go very fast, but you usually don't get very far before toppling over! So instead you need to be able to maneuver and adjust to keep track of what it is that you really do know versus what you don't. That's how to actually get wherever you're trying to go, and it's also how to have useful conversations with other people who may not agree with you.
MacCOUN: And that sense of working together is important because these habits of mind we're discussing aren't just about your personal decision-making; they're also about how science works in a democracy. You know, scientists end up having to work with people they disagree with all the time. And they cultivate certain communal ways of doing that, because it's not enough to just be a "better" thinker; even people well trained in these methods make mistakes. So you also need these habits at a communal level for other people to keep you honest. That means it's okay, and necessary even, to interact with people who disagree with you, because that's how you find out when you're making mistakes. And it doesn't necessarily mean you'll change your mind. But it'll improve your thinking about your own views.
So in summary:
- Try to rank your confidence in your beliefs.
- Try to update your beliefs based on new evidence, and don't fear being (temporarily) wrong.
- Try to productively engage with others who have different beliefs than you.
That's a pretty good "top three" list, I think! But, pardon my cynicism, do you worry that some of this might come off as rather quaint? We mentioned at the outset how this project really began in 2011, not much more than a decade ago. Yet some would probably argue that social and technological changes across that time have now effectively placed us in a different situation, a different world. It seems, to me at least, on average much harder now than it was 10 years ago for people with divergent beliefs and values to have a pleasant, productive conversation. Are the challenges we face today really things that can be solved by everyone just getting together and talking?
CAMPBELL: I agree with you that this sort of cynicism is now widespread. Across the past few decades we seem to have forgotten how to have a conversation across a fundamental divide, so now we take for granted that it's pointless to try to convert those holding different views. But the alternative is to run society by coercion. And just beating people down with violent subjugation is not a tenable long-term solution. If you're going to coerce, you have to at least show your work. You have to engage with other people and explain why you think your policies are good.
MacCOUN: You can think of cynicism as this god-awful corrosive mix of skepticism and pessimism. At the other extreme, you have gullibility, which, combined with optimism, leads to wishful thinking. And that's really not helpful either. In the book we talk about an insight Saul had, which is that scientists tend to combine skepticism with optimism, a combo I'd say is not generally cultivated in our society. Scientists are skeptical, not gullible, but they're optimistic, not pessimistic: they tend to assume that problems have a solution. So scientists sitting around the table are more likely to be trying to figure out fixes for a problem rather than bemoaning how terrible it is.
PERLMUTTER: This is something we've grappled with, and there are a couple of elements, I think, that are important to transmit about it. One is that there are good reasons to be disappointed when you look at the leaders of our society. They've structurally now gotten themselves into a fix, where they seem unable to even publicly say what they believe, let alone find real compromises on divisive issues. Meanwhile you can find lots of examples of "citizen assembly" events where a random selection of average people who completely disagree and support the opposite sides of the political spectrum sit down together and are much more able to have a civil, thoughtful conversation than their sociopolitical leaders can. That makes me think most of the [people in the] country (but not all!) could have a very reasonable conversation with each other. So clearly there's an opportunity that we haven't taken advantage of to structurally find ways to empower those conversations, not just the leaders trying to act for us. That's something to be optimistic about. Another is that the daily news portrays the world as a very scary and negative place, but we know the daily news is not offering a very good representative take on the true state of the world, especially regarding the huge improvements in human well-being that have occurred over the past few decades.
So it feels to me that many people are living in "crisis" mode because they're always consuming news that's presenting us crises every moment and driving us apart with wedge issues. And I think there's optimism to be found in looking for ways to talk together again. As John says, that's the only game in town: to try to work with people until you learn something together, as opposed to just trying to win and then having half your population being unhappy.
CAMPBELL: We are maybe the most tribal species on the planet, but we are also perhaps the most amazingly flexible and cooperative species on the planet. And as Saul said, in these almost town-hall-style deliberative citizen assemblies you see this capacity for cooperation coming out, even among people who'd otherwise be bitterly divided and [belong to] opposite tribes, so there must be ways to amplify that and to escape being locked into these tribal schisms.
MacCOUN: And it's important to remember that research on cooperation suggests you don't need to have everybody cooperating to get the benefits. You do need a critical mass, but you're never going to get everyone, so you shouldn't waste your time trying to reach 100 percent. [Political scientist] Robert Axelrod and others studying the evolution of cooperation have shown that if cooperators can find each other, they can start to thrive and begin attracting other cooperators, and they can become more robust in the face of those who are uncooperative or trying to undermine cooperation. So somehow getting that critical mass is probably the best you can hope for.
I'm sure it hasn't escaped anyone's notice that as we discuss large-scale social cooperation, we're also in an election year in the U.S., ostensibly the world's most powerful democracy. And sure, part of the equation here is breaking down walls with basic acts of kindness and humility: love thy neighbor, find common ground, and so on. But what about voting? Does scientific decision-making give us some guidance on "best practices" there?
PERLMUTTER: Well, clearly we want this to be something that transcends election years. But in general, you should avoid making decisions, voting included, purely based on fear. This is not a time in the world where fear should be the dominant thing driving our individual or collective actions. Most of our fears divide us, yet most of our strength is found in working together to solve problems. So one basic thing is not to let yourself be flustered into voting for anyone or anything out of fear. But another is to look for leaders who use and reflect the scientific style of thinking, in which you're open to being wrong, you're bound by evidence, and you're able to change your mind if it turns out that you were pursuing a bad plan. And that's something that unfortunately we very rarely see.
CAMPBELL: At the moment we have an abundance of free speech: everyone can get on to some kind of social media and explain their views to the entire country. But we seem to have forgotten that the whole point of free speech was the testing of ideas. That was why it seemed like such a good thing: through free speech, new ideas can be generated and discussed and tested. But that idea of testing the ideas you freely express has just dropped out of the culture. We really need to tune back in to that in how we teach and talk about free speech and its value. It's not just an end in itself, you know?
MacCOUN: And let's be mindful of some lessons from history, too. For a lot of these issues that are so polarizing and divisive, it's probably going to turn out that neither side was completely right, and there was some third possibility that didn't occur to most, if any, of us. This happens in science all the time, with each victorious insight usually being provisional until the next, better theory or piece of evidence comes along. And in the same way, if we can't move past arguing about our current conception of these problems, we're trapping ourselves in this one little region of conceptual space when the solution might lie somewhere outside. This is one of very many cognitive traps we talk about in the book. Rather than staking out our hill to die on, we should be more open to uncertainty and experimentation: we test some policy solution to a problem, and if it doesn't work, we're ready to rapidly make adjustments and try something else.
Maybe we can practice what we preach here, this idea of performing evidence-based testing and course correction and escaping various sorts of cognitive traps. While you were working on this book, did you find and reflect on any irrational habits of mind you might have? And was there a case where you chose a hill to die on, and you were wrong, and you begrudgingly adjusted?
MacCOUN: Yeah, in the book we give examples of our own personal mistakes. One from my own research involves the replicability crisis and people engaging in confirmation bias. I had written a review paper summarizing evidence that seemed to show that decriminalizing drugs, that is, removing criminal penalties for them, did not lead to higher levels of use. After writing it, I had a new opportunity to test that hypothesis, looking at data from Italy, where in the 1970s they'd basically decriminalized personal possession of small quantities of all drugs. And then they recriminalized them in 1990. And then they redecriminalized in 1993. So it was like a perfect opportunity. And the data showed drug-related deaths actually went down when they reinstituted penalties and went back up again when the penalties were removed. And this was completely opposite of what I had already staked my reputation on! And so, well, I had a personal bias, right? And that's really the only reason I went and did more research, digging deeper on this Italian thing: because I didn't like the findings. So across the same span of time I looked at Spain (a country that had decriminalized without recriminalizing) and at Germany (a country that never decriminalized during that time), and all three showed the same death pattern. This suggests that the suspicious pattern of deaths in fact had nothing to do with penalties. Now, I think that leads to the correct conclusion, my original conclusion, of course! But the point is: I'm embarrassed to admit I had fallen into the trap of confirmation bias, or, really, of its close cousin called disconfirmation bias, where you're much tougher on evidence that seems to run counter to your beliefs. It's a teachable moment, for sure.
CAMPBELL: It takes a lot of courage to admit these sorts of things and make the necessary transitions. One cognitive trap that affects many of us is what's called the bias blind spot, where you can be really subtle and perceptive in spotting other people's biases but not your own. You can often see a bias of some sort in an instant in other people. But what happens when you look at yourself? The reaction is usually, "Nah, I don't do that stuff!" You know, I must have been through hundreds and hundreds of student applications for admission or searches for faculty members, and I never spotted myself being biased at all, not once. "I just look at the applications straight," right? But that can't always be true, because the person easiest to fool is yourself! Realizing that can be such a revelation.
PERLMUTTER: And this really informs one of the book's key points: that we need to find better ways to work with people with whom we disagree, because one of the very best ways to get at your own biases is to find somebody who disagrees with you and is strongly motivated to prove you wrong. It's hard, but you really do need the loyal opposition. Thinking back, for instance, to the big race for measuring the cosmological expansion of the universe that led to the discovery of dark energy, it was between my team and another team. Sometimes my colleagues and I would see members of the other team showing up to do their observations at the telescopes just as we were leaving from doing ours, and it was uncomfortable knowing both teams were chasing the same thing. On the other hand, that competition ensured we'd each try to figure out if the other team was making mistakes, and it greatly improved the confidence we collectively had in our results. But it's not good enough just to have two opposing sides; you also need ways for them to engage with each other.
I realize I've inadvertently left probably the most basic question for last. What exactly is "third millennium thinking"?
PERLMUTTER: That's okay; we actually leave explaining this to the book's last chapter, too!
MacCOUN: Third millennium thinking is about recognizing a big shift that's underway. We all have a sense of what the long millennia predating science must have been like, and we all know the tremendous advances that gradually came about as the modern scientific era emerged, from the practices of various ancient civilizations to the Renaissance and the Enlightenment, all those shifts in thinking that led to the amazing scientific revolution that has so profoundly changed our world here in what, until the end of the 20th century, was the second millennium. But there's also been disenchantment with science, especially recently. And there's validity to concerns that science was sometimes just a handmaiden of the powerful and that scientists sometimes wield more authority than they deserve to advance their own personal projects and politics. And sometimes science can become pathological; sometimes it can fail.
A big part of third millennium thinking is acknowledging science's historic faults but also its capacity for self-correction, some of which we're seeing today. We think this is leading us into a new era in which science is becoming less hierarchical. It's becoming more interdisciplinary and team-based and, in some cases, more approachable for everyday people to be meaningfully involved; think of so-called citizen science projects. Science is also becoming more open, with researchers expected to show their work by making their data and methods readily available so that others can independently check them. And we hope these sorts of changes are making scientists more humble: this attitude of "yeah, I've got the Ph.D., so you listen to me" doesn't necessarily work anymore for big, divisive policy issues. You need a more deliberative consultation in which everyday people can be involved. Scientists do need to stay in their lane to some extent and not claim authority just based on their pedigree; the authority comes from the method used, not from the pedigree.
We see these all connected in their potential to advance a new way of doing science and of being scientists, and thatâs what third millennium thinking is about.
CAMPBELL: With the COVID pandemic, I think we've all sadly become very familiar with the idea that the freedom of the individual citizen is somehow opposed to the authority of the scientist. You know, "the scientist is a person who will boss you around, diminish your freedom and inject you with vaccines laced with mind-controlling nanobots" or whatever. And it's such a shame. It's so debilitating when people use or see science like that. Or alternatively, you might say, "Well, I'm no scientist, and I can't do the math, so I'll just believe and do whatever they tell me." And that really is relinquishing your freedom. Science should be an enabler of individual power, not a threat to your freedom. Third millennium thinking is about achieving that, allowing as many people as possible to be empowered, to empower themselves, by using scientific thinking.
PERLMUTTER: Exactly. We're trying to help people see that this combination of trends we're now seeing around the world is actually a very fertile opportunity for big, meaningful, positive change. And if we lean into this, it could set us in a very good position on the long-term path to a really great millennium. Even though there are all these other forces to worry about at the moment, by applying the tools, ideas and processes from the culture of science to other parts of our lives, we can have the wind at our back as we move toward a brighter, better future.