The Experimental Philosophy Blog

Philosophy Meets Empirical Research


Author: Joshua Knobe

Priming Effects Are Fake, but Framing Effects Are Real

Posted on May 23, 2025 (updated June 6, 2025) by Joshua Knobe

A few decades ago, it was pretty common to mush together priming effects and framing effects and see them as two closely connected parts of a single Bigger Truth about the human mind. Of course, everyone understood that the effects themselves were a bit different, but one common view was that they were providing evidence for the same larger picture. That larger picture said: People’s judgments are radically unstable, easily pushed around by subtle and almost unnoticeable factors.

Things have changed so much since then. Priming research in social psychology has experienced a series of truly spectacular replication failures, while research on framing effects continues to look very solid. In light of this change, we should rethink our understanding of what framing effects show about human cognition. We shouldn’t see them as part of a larger picture that also includes priming. We need an understanding of framing that allows us to situate it within a larger picture, according to which priming effects are not real.

The priming literature seemed to be showing that people’s judgment and decision-making are highly unstable and can be easily shifted around by small manipulations of the external situation. The thought was that if you just happen to be holding a hot coffee, or sitting at a dirty desk, or in a room that includes a picture of dollar bills, your whole way of thinking about things will be shifted in some fundamental respect. For example, you will end up making deeply different moral judgments.

The key lesson of more recent research is simple: these priming effects do not occur. More generally, we cannot shift people’s moral judgments around in some radical way just by making subtle changes in their situation. Your moral judgments will not shift around completely if you are seated at a dirty desk. That is not how the human mind works.

Okay, with all of that in mind, let’s rethink framing effects. For concreteness, we can focus on a famous study from Tversky and Kahneman (1981). In this study, participants were randomly assigned to one of two conditions. Participants in the gain framing condition read the following case:

A disease is expected to kill 600 people. You can choose between two options:

  • If you choose the first option, 200 people will be saved.
  • If you choose the second option, there is one-third probability that 600 people will be saved and a two-thirds probability that 0 people will be saved.

Meanwhile, participants in the loss framing condition read:

A disease is expected to kill 600 people. You can choose between two options:

  • If you choose the first option, 400 people will die.
  • If you choose the second option, there is one-third probability that 0 people will die and a two-thirds probability that 600 people will die.

Clearly, the two descriptions are logically equivalent, but they tend to yield very different responses. Participants tend to be risk-averse in the first case, risk-seeking in the second.
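The logical equivalence can be checked with a little arithmetic. Here is a minimal sketch, using only the numbers from the vignette, confirming that the two framings describe identical outcomes (exact rational arithmetic avoids any floating-point noise):

```python
from fractions import Fraction

# 600 people are at risk; the two framings redescribe the same options.
TOTAL = 600
p = Fraction(1, 3)  # probability that the risky option saves everyone

# Gain framing: outcomes described as the number of people saved.
gain_certain = 200
gain_risky = p * 600 + (1 - p) * 0

# Loss framing: outcomes described as the number who die; convert to survivors.
loss_certain = TOTAL - 400
loss_risky = TOTAL - (p * 0 + (1 - p) * 600)

# Option by option, the two framings are equivalent in expectation.
assert gain_certain == loss_certain == 200
assert gain_risky == loss_risky == 200
```

The point is just that nothing about the options themselves differs between conditions; only the description changes, yet responses shift.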

During the heyday of priming research, many of us thought that this sort of effect should be understood within a larger picture of the mind that also included priming. Basically, the idea was something like this: “People’s judgments about a case can be shifted around by all sorts of little things, including everything from the decor in the room to the precise words used to describe the case.” But in light of everything we know now, we need to revisit this view. Framing effects are very real, but that larger picture seems to be mistaken. We need to understand framing effects within a larger picture of the mind, according to which people’s judgments don’t just shift around randomly as a result of all sorts of little factors.

I’d be very open to different views about what the right picture is, but just as a first step in this direction, let’s consider a picture that emerges not from social psychology but rather from very traditional work in philosophy. This picture says that people often have a collection of different intuitions that are mutually inconsistent. These intuitions need not be unstable in any way. It might be that each individual intuition is completely stable; it’s just that the different intuitions contradict each other.

To illustrate, consider intuitions about free will. I might find myself having the following three intuitions: (a) All human behavior is completely explained by genes and environment, (b) If a person’s behavior is completely explained by genes and environment, that person’s behavior is not performed with free will, (c) Some human behaviors are performed with free will. These three intuitions are mutually inconsistent, so they cannot all be right. However, this does not mean that people’s free will intuitions have to be unstable in any way.
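The structure of that triad can be checked mechanically. Here is a toy sketch (my own crude propositional encoding, not anything from the literature) that compresses the three intuitions into constraints over two atoms — D for “all behavior is explained by genes and environment” and F for “some behavior is free” — and brute-forces every truth assignment:

```python
from itertools import product

def consistent(intuitions):
    """True if some truth assignment satisfies every intuition at once."""
    return any(all(i(D, F) for i in intuitions)
               for D, F in product([True, False], repeat=2))

triad = [
    lambda D, F: D,               # (a) all behavior explained by genes/environment
    lambda D, F: not D or not F,  # (b) if so explained, it is not free
    lambda D, F: F,               # (c) some behavior is free
]

assert not consistent(triad)             # no way to hold all three
assert consistent(triad[:2])             # but any two are jointly satisfiable
assert consistent([triad[0], triad[2]])
assert consistent(triad[1:])
```

The pattern the sketch exhibits is exactly the one at issue: each intuition is fine on its own, every pair is jointly satisfiable, and only the triad as a whole is unsatisfiable.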

On the contrary, a single individual could easily have all three intuitions at the same time. For example, as a philosopher, I might start out a paper by explaining that each of these three claims seems intuitively to be true, that they are mutually inconsistent and hence cannot all be right, and that we therefore face an interesting philosophical problem. Alternatively, someone might simply have each of these three intuitions, but without noticing that they contradict each other. In such a case, the person would be failing to notice something important, but that would not mean that the person’s intuitions were unstable. Each of the three intuitions might be perfectly stable; it’s just that the three intuitions are not consistent.

Some philosophical problems seem to have very much the structure we see in framing effects. Consider the philosophical problem of moral luck. The problem starts with three intuitions: (a) An agent who doesn’t bring about any bad outcomes deserves relatively little blame, (b) An agent who performs the exact same behavior but who ends up bringing about a bad outcome deserves a lot of blame, (c) If the agent performs the exact same behavior in two cases and the only difference is in the outcome that ends up occurring, that difference by itself cannot be relevant to how much blame the agent deserves. I myself have all three of these intuitions. Since the intuitions are mutually inconsistent, they cannot all be right, but that does not mean that my intuitions are unstable. Each of the three intuitions is completely stable and emerges in all situations; it’s just that the three intuitions are in tension with each other.

Let’s now return to framing effects. Back when it seemed like priming was real, I can totally see why researchers thought that framing was a lot like priming. But in light of subsequent studies, maybe we should see it in a completely different way. Framing does not involve people’s judgments being unstable; it instead involves people having different intuitions that are mutually inconsistent.

Take the example described above. Looking at that example, I have the following three intuitions: (a) The correct answer in the first case is to take the non-risky option, (b) The correct answer in the second case is to take the risky option, and (c) It cannot possibly be the case that the correct answer in the first case is different from the correct answer in the second case. These three intuitions are mutually inconsistent, so they cannot all be right. However, each individual intuition can be perfectly stable. In fact, thinking about the problem right now, I find myself having all three intuitions at the same time.

Turning the traditional view about framing effects upside down, one might even see framing effects as an extreme case of stability. Just as we continue to experience a visual illusion even when we know that it is illusory, we continue to have the inconsistent intuitions that together constitute a framing effect even when we know that they cannot all be right.

[I discuss this issue in this paper, but please feel free to respond to this blog post even if you haven’t looked at the full paper.]

Do People Think That Free Will Is Incompatible with Determinism?

Posted on May 2, 2025 (updated June 6, 2025) by Joshua Knobe

Imagine a universe in which everything that happens is completely caused by the things that happened before. Suppose, for example, that Mia has a bagel for breakfast. Her act of having a bagel for breakfast would be caused by the way things were right before that, which would be caused by the way things were right before that… all the way back to the very beginning of the universe. In this universe, can anyone ever be morally responsible for anything they do?

If you just ask people this question, the overwhelming majority say “No.” This answer seems to align with the philosophical view called incompatibilism – the view that no one can ever be morally responsible for anything they do in a deterministic universe. So the most straightforward way of understanding this result is that people have an incompatibilist intuition.

But some of my fellow experimental philosophers reject this straightforward interpretation. They say that what’s really going on in this case is that people are misunderstanding the question. On this view, when people get a little story about a universe in which everything that happens is completely caused by what happened before, they don’t correctly understand what is going on in the story. So the take-home message is not that people have incompatibilist intuitions; it is that we need to change our experimental materials so that people understand them better.

The experimental philosophers who argue for this claim have conducted an impressive program of research. Basically, the key findings come from studies in which researchers present participants with a story about a deterministic universe and then ask questions about what life would be like in the universe. If you do this, you find that people give very extreme answers. People say that life in a deterministic universe would be radically different in all sorts of ways. Most philosophers think that these extreme answers are not true, meaning that people are going wrong in some important respect here.

Okay, so far, so good. If you give people a story about a deterministic universe and ask them what life would be like in that universe, they say some very extreme things that we have good reason to regard as false. But what does that show when it comes to the question about what people really think?

In my opinion, it does not show that we should switch over to different experimental materials. Instead, it suggests that people genuinely do have very extreme views about determinism. If we found a way to switch over to different materials that did not yield these extreme views, we would be switching over to materials that were less accurate in giving us an understanding of what people really think.

Let’s consider an analogy. Suppose we are running studies to understand people’s attitudes about abortion. Now suppose some of our participants say that abortion results in the fetus’s soul going straight to hell, to be tortured for all eternity. We might think that this is a catastrophically false understanding of what abortion is like, but we should not change our study materials to make people stop giving this response. This response is accurately revealing what some people believe about abortion. My point is that the results we get in studies about free will and determinism should be understood in much the same way.

Looking at the actual experimental results, what one sees is that when people are given a story about a deterministic universe, they think that nothing even approaching normal human agency would be possible in this universe. Most strikingly, if you ask them whether the actions of people in this universe depend on their beliefs and values, they explicitly say “No.” In other words, they seem to have a sense that a person living in a deterministic universe would do exactly the same thing even if she had different beliefs and values. (This is such an interesting result! It was first uncovered in a classic paper by Murray and Nahmias, and it has subsequently been replicated in tons of further research.)

Importantly, people only apply this intuition to human actions and not to other types of objects. For example, suppose you instead tell them about a computer and ask whether the computer’s output depends on its data and code. You then get the opposite response. Although people say that a human being’s actions would not depend on her beliefs and values, they say that a computer’s output would depend on its data and code.

The most natural way to interpret this result is that people think that the processes underlying human action are radically different from the processes underlying a computer’s output. If everything were determined, the computer could still work fine, but human action would be fundamentally disrupted.

Further studies suggest that people think certain kinds of actions would be possible in a deterministic universe while others would not. For example, people think it would be possible in a deterministic universe for someone to have a craving for ice cream and then give into it and buy some ice cream, but people think it wouldn’t be possible for someone to have a craving but then resist it and not buy the ice cream.

The most natural way to understand this pattern of judgments is that people have a very extreme incompatibilist view. Not only do they think that determinism is incompatible with moral responsibility, they think that determinism is incompatible with the ordinary sort of human agency you might show in resisting a craving for ice cream. To really get to the bottom of this, we should be running further studies that help us understand why people see human agency in this way.

In saying this, I am departing from the usual view within my field. That usual view is that if we find people saying stuff like this, we must be making some kind of error in the way we are designing our studies. So the thought is that we should keep adjusting our experimental materials until we can get people to espouse a view about them that seems more philosophically kosher.

This reaction seems so mistaken to me! We are finding something super interesting here. It might not be what we expected to find when we first started working on these issues, but that just makes it all the more intriguing.

Call: “Oxford Studies in Experimental Philosophy”

Posted on February 23, 2025 (updated March 24, 2025) by Joshua Knobe

The “Oxford Studies in Experimental Philosophy” series, published by Oxford University Press and edited by Ángel Pinillos, Joshua Knobe, and Shaun Nichols, is now calling for papers for its sixth volume.

The series joins other successful series in the “Oxford Studies in…” collection, which bring together original articles on all aspects of their respective topics. “Oxford Studies in Experimental Philosophy” features outstanding papers at the cutting edge of experimental philosophy as well as papers that engage in critical discussion of the field. Philosophers and scientists alike are invited to contribute.

To submit, please send a completed paper to oxford.xphi@gmail.com by August 1, 2025. All submissions should be formatted for anonymous review and include a list of four suggested reviewers. In addition to research articles under 10,000 words, which can be theoretical or empirical, “Oxford Studies in Experimental Philosophy” accepts brief reports. A brief report must report new experimental findings and be no longer than 4,000 words.

How People Cite Old Papers in Philosophy vs. Psychology

Posted on January 3, 2025 (updated January 5, 2025) by Joshua Knobe

Philosophers and psychologists have very different practices when it comes to citing papers that were written decades ago. In philosophy, the norm is that you are supposed to carefully read those papers and accurately explain what they say. By contrast, in psychology, people typically make less of an effort to accurately summarize the ideas in decades-old papers, and it’s pretty common to cite something without ever having read it. (Of all the many psychologists who have cited Gordon Allport’s 1954 book The Nature of Prejudice, how many have read even a single page?)

Looking at this difference in practices, the obvious first thought would be that what the philosophers are doing is clearly better. After all, the philosophers are the ones accurately describing the papers they cite! What could be more obvious than the claim that it’s better to be accurate than inaccurate? I certainly see the force of this point, but in my view, the situation is more complex. There is at least something to be said on the other side.

As a first step into this question, consider cases where people go much farther in the direction of what psychology does right now. In talking about math, we might speak of a “Riemann integral,” a “Galois group,” the “Peano axioms,” but no one would think that this kind of talk needs to accurately capture what these mathematicians said in their original papers. It would be seen as absurd if someone tried to object to the content of an ordinary calculus lecture by quoting from one of Riemann’s original texts and arguing that the lecture wasn’t faithful to it.

It’s easy to see what is so important about this aspect of our practices. In many cases, the great insights of centuries past were giving us a glimpse of something that was only fully appreciated later. So it’s deeply important that we allow these things to be refined over the decades rather than forcing students to learn them in the form in which they happened to appear when first introduced.

The key point now is that something similar might also be said about the sort of ordinary workaday research that many of us do all the time. As an illustration: it sometimes happens that I wrote a paper on some topic twenty years ago, but then subsequent work by other researchers showed that I didn’t get things quite right. In such cases, my papers tend to be cited in very different ways in philosophy vs. psychology. Philosophers read my papers and accurately explain the view I originally defended. By contrast, psychologists do something else, which sometimes involves citing my old papers without reading them. This practice might appear to be obviously lazy or sloppy when described in that way, but I do think there is something about it that is worth considering.

Caricaturing just a bit, the approach works like this: Psychologists first try to figure out what is actually true; then they write a sentence that they think captures the truth; then, after that sentence, they cite various previous papers. Some of those papers literally defend the view stated in the sentence itself, but others are cited just because the authors want to give credit to previous work that they see as a helpful stepping stone that led up to finding the truth. So, when it comes to my old papers, they might think that something I wrote decades ago was one of those stepping stones, but given all the research that’s been done subsequently, they might think it’s not worth it to go back and read that old paper now. So they might cite the paper without knowing precisely what it actually says.

I totally get why people would think there is something weird or fishy about this. Strictly speaking, there’s a sense in which the citation itself is inaccurate. (It seems to be saying that an old paper defended a particular view, when in reality that precise view was only articulated many years later.) But it also feels like the drive to be accurate about this stuff is moving us away from what we really should be caring about. Perhaps what we see in the seemingly slapdash way that psychologists cite old papers is a kind of half-formed and not fully acknowledged version of the practice that we see so clearly and explicitly in the non-scholarly use of people’s names for mathematical ideas.

Imagine an ethos in which people had very different expectations. When you publish a paper proposing a new theory, you hope that other researchers will improve on your theory. Indeed, you hope that these improvements will be so substantive that after a number of years there will be no need to read your original paper anymore. So the outcome you hope for is one in which people keep using your theory (and maybe citing your paper) but in which almost no one has an accurate understanding of what your original paper actually said.

Now that we’ve talked a little bit about these competing considerations, let’s return to our original question. What is truly the best approach to citing old papers? Something more like the scholarly approach favored in philosophy? Or something more like the non-scholarly approach favored in psychology? I honestly don’t know. My main goal has just been to argue that it’s a difficult question. It is a mistake to think: “Obviously, the best approach is to focus on carefully and accurately describing what those papers actually say.” The correct view is that it is not obvious what we should be doing. It’s very much worth thinking more about the different possible options, and I’d love to hear any further thoughts people might have. 

In What Sense are Generics Normative?

Posted on December 23, 2024 (updated December 28, 2024) by Joshua Knobe

Suppose you see a teacher speaking to a student in an insulting or degrading way. You might go up to the teacher and say: “What are you doing? That’s not what a teacher does when students are having trouble.” And then you might say:

  • A teacher tries to help her students.

Here you are using a special type of sentence called a generic. Moreover, you are using this sentence in a way that is normative. That is, you aren’t just saying that teachers generally tend to help their students; you seem to be saying that helping one student is a way of fulfilling some kind of ideal.

The specific sort of normative claim you are making here is a puzzling one, and I don’t feel like I completely understand it. To begin with, it’s clearly not just a claim about what someone should do. For example, it’s not just the claim: teachers should help their students. Instead, it seems to mean something more like: helping one’s students is what follows from the characteristic ideals of being a teacher.

To see this, imagine that you see a teacher listening to Coldplay. You are outraged because you believe that teachers should have better taste in music. In such a case, you could not express the thought you are thinking by saying: “A teacher has good taste in music.” The reason is that even if you think that teachers should have good taste in music, you presumably do not think that this is something that follows from the characteristic ideals of being a teacher.

Okay then, what do we even mean when we speak of the “characteristic ideals” of a particular kind of thing? Unfortunately, I don’t know. I wish I could say something more helpful about this, but I don’t feel like I have a good handle on it yet.

Instead, I just want to suggest that this somewhat mysterious kind of normativity is a really big deal: all sorts of questions we face in understanding people’s ordinary cognition boil down to understanding it, so if we could understand this kind of normativity, we would be able to understand many different aspects of the way people think.

In people’s ordinary way of thinking about things, people don’t seem to be concerned only about what you should do. They also seem to be very concerned about what follows from certain sorts of characteristic ideals. People have a notion of the characteristic ideals of being a teacher, the characteristic ideals of being a scientist, the characteristic ideals of being a Christian. Then they also have a way of thinking about the characteristic ideals of certain sorts of situations and certain sorts of objects. These notions seem to be right at the heart of people’s ordinary way of making sense of the world.

Just as a first step down this road, consider sentences like:

  • That’s not how one behaves at a Jewish wedding.

Or, more colloquially:

  • That’s not how you behave at a Jewish wedding.

Sentences like these seem to express something pretty fundamental about how people ordinarily understand the behavior that is called for in certain situations. We have a sense that it is sometimes possible to identify a certain behavior that is just “what one does” in a particular type of situation. This notion seems to be normative in some important sense, but how should that normativity be understood?

James Kirkpatrick and I have argued that sentences like these are normative in the same hard-to-capture sense that generics are normative. What do we mean when we say that something is “what you do at a Jewish wedding”? We don’t just mean something like: when someone is at a Jewish wedding, she should do this thing. Rather, we are saying something more like: doing this is a way of conforming to the characteristic ideals that follow from being at a Jewish wedding. (For example, you might think that the best thing to do if you are at a Jewish wedding is to ignore all the proceedings and start thinking instead about some profound philosophical question – but this has nothing to do with the characteristic ideal of Jewish weddings per se, and you could not speak about it using this specific type of sentence.)

Now consider the traditional philosophical question regarding knowledge attributions like:

  • Rachel knows how to behave at a Jewish wedding.

This sentence also seems to be saying something normative. It isn’t just saying that Rachel knows something that would be a way of behaving at a Jewish wedding; it seems to be saying that Rachel knows the “right” way of behaving, or the way of behaving that conforms to certain ideals. But which ideals? An obvious hypothesis would be: the exact same ideals we discussed in the previous paragraph. That is, the sentence means something at least broadly like: Rachel knows a way of behaving that conforms to the characteristic ideals that follow from being an action performed at a Jewish wedding.

Finally, consider judgments about persistence over time. Suppose that today we form a club for discussing recent experiments and call it the “Experiment Discussion Club.” Over the course of many years, certain features of the original club are lost but others are retained. Now suppose someone looks at the thing that exists ten years from now and says:

  • Ultimately, this isn’t even the Experiment Discussion Club anymore.

How do people decide whether this sentence is true or false?

In a series of amazing papers, Kevin Tobia finds experimental evidence that intuitions about persistence over time in cases like these depend on something normative. Basically, people’s intuitions depend on whether the changes involve the object getting better vs. worse. People will be especially inclined to say that the club isn’t even the Experiment Discussion Club anymore if it gets a lot worse, whereas if the club changes by getting a whole lot better, people will say that it is still the Experiment Discussion Club – just a more awesome version of that club.

But better in which specific sense? It certainly doesn’t seem that it is just a matter of getting better in any old way. For example, suppose people in the club stopped doing experiments entirely and instead focused on fighting for human rights. You might think that this would make the club better, but it would not make the club better at being the Experiment Discussion Club. It seems that it is not just a matter of being better but rather a matter of being better at embodying the specific ideals that are characteristic of the object itself.

Second-Order Desires Are Not What Matters

Posted on December 19, 2024 (updated December 28, 2024) by Joshua Knobe

Here’s a classic philosophical thought experiment: Sandra is struggling with an addiction to heroin. She desperately wants another hit, but she wishes she didn’t. She wishes that she could stop craving heroin and that she could start living a very different life. Faced with this thought experiment, many people have the intuition that Sandra’s desire to do heroin is not part of her true self – that Sandra’s true self is entirely on the other side of this inner conflict.

Now consider a reversed version of the classic thought experiment: Sandra has a visceral aversion to using heroin, but she wishes that she didn’t feel that way. Many of her friends are using heroin, and it’s clearly the easiest way to fit in with the people in her social group, so she wishes that she could stop feeling this aversion and just start using heroin like all her friends are. In this reversed case, do you have the same intuition? Does it seem like Sandra’s aversion to doing heroin is not part of her true self – that her true self is entirely on the other side of this inner conflict?

Within the philosophical literature, the usual view about the original version of this thought experiment is that the agent’s desire does not count as a part of her true self because she completely rejects this desire. Then a lot of the literature is about precisely how to cash out the broad idea that she is somehow rejecting a part of her own self (in terms of second-order desires, or in terms of identification, or in terms of her values, and so forth).

But none of this stuff has anything to do with the actual reason why we have this intuition! The reason we have the intuition that her desire isn’t part of her true self has nothing to do with the fact that she herself rejects this desire. Instead, it has everything to do with the fact that the desire in question is a desire to do heroin. There’s something about this specific desire that makes people think it is not part of the agent’s true self, and if we want to understand the way people ordinarily understand the true self, we need some way of making sense of this.

Within the literature in experimental philosophy and psychology, the usual view is that people think an agent’s true self is drawing that agent toward things that truly are good. Thus, if one part of the agent’s self is drawing the agent to use heroin and another part of the agent’s self is drawing the agent to refrain, people will have a general tendency to think that the part of the agent that is drawing her to use heroin is not her true self. This tendency doesn’t have anything to do with which part of the agent is the part that the agent herself rejects. Independent of anything like that, it is just a very fundamental tendency to think that the deeper essence of the agent is the part of her that is drawing her to the good.

As a result, experimental philosophy research finds that people show a general tendency to think that bad desires are less fully part of the agent’s true self. In cases like the classic philosophical thought experiment, where the desire that the agent rejects is a desire to do something bad, people think that the desire that the agent rejects is not part of her true self. But in cases like the reversed version, where the desire that the agent rejects is a desire to do something good, people tend to think that this desire is a part of her true self.

This effect seems to connect with some much deeper philosophical issues that have nothing to do with second-order desires or anything like that. Basically, it seems like when people are thinking about what is most essential about an object, they tend to pick out what is good about that object. This isn’t just something about how they think about agents; it arises much more generally. For example, if you are reading an academic paper and you think that there is a lot of pointless stuff in it but that there is also an idea of genuine value, you will tend to think that the real essence of the paper is the valuable idea. And when people are thinking about what is most essential about the United States – what the United States is “really all about” – they tend to think about the good things about the United States. This is an important but mysterious phenomenon, and I don’t think we have a good understanding of it quite yet. It seems to involve some important connection in the ways people ordinarily think about essence, teleology and value.

But if we want to understand the role of things like reflective endorsement and second-order desires, then clearly, we need to be wary of looking at cases in which people's intuitions are determined by this other factor. Surely, it is cheating to look at cases in which the agent has a second-order desire not to do something that we ourselves regard as bad. If the action in question is something like doing heroin, then there's an unrelated psychological process that will lead us to see the desire as not being part of the true self. If we want to understand the role of second-order desires per se, we should look at cases in which the desire itself is not something that we would independently see as particularly bad or good.

So let’s introduce a third case in which you have no independent ideas about whether the desire is good or bad: Sandra is an undergraduate student who is caught between two different majors, A and B. She has a strong desire to focus on major A, but when she reflects on what she is doing, she thinks that she should focus entirely on major B. Sometimes she finds herself staying up at night reading books related to A or writing in her journal about questions related to A, but when she thinks about it, she always concludes that this is a big mistake. She wants to stop wanting to study A so that she can focus on what she thinks she really ought to do, which is B. In this case, which of the two desires would you see as coming from Sandra’s true self?

If you are like most people, then when faced with cases of this type, you have precisely the opposite of the intuition predicted by the traditional view. That is, when there are two desires such that one aligns with the agent’s unreflective urges and the other with the agent’s reflective endorsement, the desire associated with reflective endorsement is seen as less a part of her true self.

Given all this, why might people have thought that there was some special connection between reflective endorsement and the true self? I don’t know the answer, but in closing, I want to briefly mention one speculative hypothesis. Perhaps the issue is that it just generally happens in life that we more often encounter cases like the classic philosophical experiment in the first paragraph of this post than cases like the reversed version in the second paragraph. That is, when we see an agent who has an unreflective urge toward a behavior but who completely rejects that behavior at a reflective level, we very frequently think that the behavior is something bad. As a result, we normally think that the desires that the agent rejects on reflection are not part of her true self.

But this is just a statistical correlation. Ultimately, second-order desires are not what matters. It’s not as though we have the intuition that these desires are not part of the agent’s true self because the agent wishes she didn’t have them. Rather, we have that intuition because the desires have a certain other quality, and that other quality happens to frequently arise in cases where people reject their own desires.

The Power of Norms

Posted on December 12, 2024January 1, 2025 by Joshua Knobe

In many communities, there is a shared sense that if someone disses you, it is pretty normal to react by punching them. But academia is not like that. In academia, if someone disses your research, it would be considered wildly abnormal to react by punching them. This shared understanding then has a very large impact on behavior. If you understand how academia works, you almost certainly will not react to someone who disses your research by punching them. This is an example of the power of norms.

One common view about the power of norms is that they operate by having an impact on people’s beliefs. For example, one might think that people observe that academics never punch each other and therefore conclude that punching people is bad (or that punching people would lead to negative social consequences, or some other belief of this sort). I don’t think that this is the right way to understand the power of norms, and I want to sketch a very different approach.

To begin with, let’s note an obvious but deeply important fact about how people make decisions. Typically, when we face a choice, there are an enormous number of possible options, but we only consider a small subset of these options. For example, suppose someone points out a problem in my research, and I am trying to figure out how to respond. Perhaps I would consider three possible options: address the issue by doing further empirical work, by doing further computational work, or by doing nothing at all. As for all other possible options, I simply would not think about them at all. Take the possibility of trying to learn some organic chemistry in the hopes that this will give me a valuable insight into the problem. Most likely, this option just would not occur to me.

Now let’s note a second key fact. When it comes to the options that people don’t consider, people might not form any belief about whether those options are good or bad. Thus, suppose someone says: “I notice that he did not respond to this problem by learning organic chemistry. Is that because he believes that learning organic chemistry wouldn’t be a good way to address it?” The correct answer would be: “No! He hasn’t formed any beliefs at all about whether learning organic chemistry would be a good way to address this problem. The whole possibility has not occurred to him.”

This is where we see the power of norms. When an option violates a norm, people tend not to think about it at all. (For experimental evidence, see this paper.) So if there is a norm in academia that you can’t respond to disses by punching people, the usual upshot would be that people who are dissed just don’t even consider the possibility of responding to disses with punches. The whole idea just never occurs to them. My point is that this is the power of norms: they completely transform our lives by having an impact on which possibilities occur to us and which do not.

This phenomenon is not a matter of existing norms leading people to conclude that certain options are bad. It is something much more fundamental. Indeed, if someone does form the belief that a particular option is bad, this would show that the norm was not exerting the sort of power one might have expected it to have. Consider an academic who thinks: “Well, there are clear disadvantages to punching this person.” The very fact that an academic is thinking this at all should make us think that the norm does not have the kind of grip on them we would expect it to have.

So let’s distinguish the ways that norms can impact beliefs vs. the ways that norms can truly have a power over you and transform your whole way of thinking about life. To begin with, it’s clear that norms can indeed change your beliefs. If I ask you what you think about responding to a particular academic criticism by starting a fistfight, you might think about that option and go through a process in which you infer something from the fact that you never observe anyone performing this behavior. But this is not an example of the power of norms! On the contrary, it is an example of a case in which norms are not able to exert their full power. When we are truly in the grip of a norm, it’s not just that the norm impacts what we think of an option – it’s that it impacts which options we even think of at all.

Changing Explanatory Theories vs. Changing Norms

Posted on December 8, 2024December 28, 2024 by Joshua Knobe

Suppose you want to do something to decrease the amount of sexist behavior in the world. One thing you might do is try to change people’s explanatory theories. Perhaps you think that sexism is caused in part by people seeing certain outcomes as the result of a biological essence. You might then try an intervention in which you change people’s beliefs about gender and biology. A very different strategy would be to try to change prevailing norms. Some overtly sexist things were considered normal in the America of fifty years ago but are considered highly abnormal in America right now. So in a culture like today’s America, there might be certain sexist behaviors that almost never even come to mind as possible options.

The difference between these two approaches (theories vs. norms) is a very fundamental one. In this quick post, I want to focus on bringing out just one of the key differences. Changing people’s theories is the kind of thing one might be able to do in, say, 10 minutes. But changing norms is not like that. If you want to change the norms in a community, you can’t do it in 10 minutes. It’s the sort of thing you would hope to accomplish over the course of 10 years.

First, consider the point about theories. We are all familiar with times when we are wondering why something is happening, we read something that tells us the answer, and then we immediately adjust our explanatory theory. That’s just how theories work. The same point then arises for theories about social issues. At the moment, I have no idea why it is that such a high percentage of chess grandmasters are male. So if you presented me with a magazine article that provided strong evidence for a particular explanation, there’s a very good chance you could convince me. Over the course of 10 minutes or so, I might go from a state of having no idea why this happens to a state of being convinced by your explanatory theory. One might wonder whether this intervention would have any deep effect on my behavior, but at a minimum, it would successfully change my beliefs.

Changing norms is a fundamentally different type of process. If a given community has a norm of telling lots of sexist jokes, there’s no way you could possibly change that norm through a 10-minute intervention. That’s just not the way norms work. The process of changing a norm requires much more time and effort. As a simple illustration, there has recently been a change of norms that led to the use of preregistration, open data and open code, but that change took around a decade or so.

Of course, one might think it could be possible to have a quick intervention that led to a big change in people’s perceptions of the norms in their community, but studies indicate that this hope is also not warranted. There has been a lot of research about interventions that briefly tell people about the percentage of folks in their community who perform a particular behavior, but research finds that this sort of quick intervention rarely works. Presumably, the reason is that quickly telling people about certain percentages is not something that can change their representation of the community norm in the relevant sense.

With all this in the background, let’s now consider a very general hypothesis. I’m not sure whether the hypothesis is true, but I do think it is very much worth considering.

The hypothesis is that quick interventions like changing people’s explanatory theories just fundamentally do not work. If you want to do something that changes someone’s psychological states in a way that would lead that person to engage in less sexist behavior, there is no way you can do that through an intervention that lasts 10 minutes. The only things that work are large interventions like changing the norms within a community, which typically take years to complete.

Before the replication crisis, it certainly seemed as though we had lots of evidence that quick interventions on explanatory theories could yield large effects on behavior – but most of that evidence seems to be evaporating. Growth mindset interventions designed to change people’s explanatory theories about achievement don’t seem to lead to higher achievement. Interventions designed to change beliefs about free will don’t seem to impact cheating behavior. Interventions designed to change beliefs about genetics don’t seem to have much impact on judgments about punishment. Some recent studies indicate that interventions designed to reduce genetic essentialism don’t have any impact on prejudice.

One possible reaction to all of this would be that we haven’t yet found the exact right interventions on explanatory theories or the exact right downstream behaviors to measure… but another possible reaction would be that we are just fundamentally not looking in the right place.

Philosophy of Mind is Very Different Now

Posted on December 2, 2024January 1, 2025 by Joshua Knobe

A few decades ago, it felt like almost the entire field of philosophy of mind was focused on a pretty narrow range of questions (the mind-body problem, consciousness, the nature of intentionality, etc.). Insofar as anyone wanted to work on anything else, they often justified those interests by trying to explain how what they are doing could be connected back to this “core” of the field.

Clearly, things have changed a lot. These days, people are working on all sorts of different things that don’t connect back in any obvious way to the short list of topics that so dominated the field a few decades ago.

But if you look at various institutions that govern the field, it seems that there is a lag. Many of the norms and institutions we have in place don’t really make sense given the way the field is right now. They are just holdovers from the way the field used to be.

I bet that many readers will agree with the very general point I’ve been making thus far, but there’s room for lots of reasonable disagreement about exactly where our norms are showing a lag and where things need changing. I thought it might be helpful to write this post just to start that conversation. I’m going to suggest a few specific things, but I’d be very open to alternative views.

1. These days, many people in philosophy of mind are engaged in a broadly empirical inquiry into questions about how some specific aspect of the mind actually works: how visual perception works, how racism works, how memory works, how emotions work, and so forth.

When these people apply for jobs in philosophy of mind, it feels like there’s often a vague feeling that what they are doing is somehow “marginal” or “peripheral,” that it doesn’t really fall in the core of the field. But this no longer makes any sense! Contrast a person who is an expert on all the latest experimental studies about implicit bias with a person who is doing purely a priori work in the metaphysics of mind. Given the way the field works right now, there is no sense in which the former is less at the core of things than the latter. To the extent that the latter is seen as having a special status, this is just a residue from the way things were decades ago.

2. People working in philosophy of mind often want to learn about the history of the philosophy of mind. But what exactly is this history? For example, of all the things that Spinoza wrote, what should we call “Spinoza’s philosophy of mind”?

The traditional answer was basically: Of all the things that people in the history of philosophy wrote about the mind, the only ones that count as “history of philosophy of mind” are the ones that relate to the narrow list of questions discussed in late 20th century analytic philosophy. This involved excluding almost everything that figures in the history of philosophy said about the mind.

But again, this doesn’t make sense anymore. If people want to look at Spinoza’s philosophy of mind, I fear they would tend to look only at the discussion of the mind-body problem in Ethics, Book 2, i.e., the part that connects to this stuff discussed in 20th century philosophy of mind. But this is such a narrow way of thinking about discussions of the mind in the history of philosophy. Surely, Spinoza’s contributions to philosophy of mind go way beyond that; it’s just that most of his contributions are about how various specific things in the mind work. So these contributions might not be very closely related to things that philosophers of mind were working on in 1994, but they are extremely closely related to various things that philosophers of mind are working on in 2024.

3. Knowledge of mathematical or formal work is often helpful in philosophy, but we recognize that philosophers cannot possibly master all of the different formal methods that might be relevant to them in their work. So we always face questions of the form: Given that philosophers can’t know everything that would possibly be relevant, which methods do they absolutely need to know?

Now consider a graduate student working in philosophy of mind, and suppose that this student could either (a) take a course in logic but never take any courses in statistics or (b) take a course in statistics but never take any courses in logic.

It feels like there’s a norm in the field that (a) is more acceptable than (b). But does that really make sense anymore? I certainly agree that this is the background that would have been more essential a few decades ago, but if you look at what philosophers of mind are doing right now, it seems that statistics is used much more often than logic.

4. We have certain norms about which things philosophers are allowed to remain ignorant about and which they absolutely have to know. For example, a moral philosopher might say: “I am a consequentialist, and I think that non-consequentialist theories are mistaken.” But we would find it completely unacceptable for a moral philosopher to say: “I am a consequentialist, so I don’t know anything about recent work in non-consequentialist theories. I couldn’t even teach those theories at an undergraduate level.”

A question now arises about which norms would make sense in contemporary philosophy of mind. In many parts of philosophy of mind, the majority of people are using some kind of empirical approach, while a minority are using purely a priori approaches. We can imagine a person saying: “I am pursuing these questions using purely a priori methods, and I think it is a mistake to use empirical methods to address them.” But suppose someone said: “I don’t know anything about recent empirical studies on these questions. In fact, I couldn’t even teach a class about these studies at an undergraduate level.” Should we regard this sort of ignorance as acceptable? And if we do regard it as acceptable right now, might that just be a holdover from norms that really did make sense thirty years ago?

Again, I certainly don’t mean to be dogmatic about any of these four points, and I also don’t mean to suggest that these are the four most important areas in which we are facing a lag. Regardless of whether you agree or disagree about these four specific things, it does seem that the field has changed considerably, and I would love to hear your thoughts about how our norms should be evolving in light of that.

Brief Changes to the Situation Don’t Have Much Impact on Judgments

Posted on November 24, 2024December 30, 2024 by Joshua Knobe

There’s a certain kind of study we used to see all the time. The researchers ask all participants to make a judgment regarding the exact same question, but then they vary something in the external situation. They change the temperature in the room. Or the song that is playing in the background. Or they do something that’s supposed to make people have a particular emotion, or engage in more reasoning, or show more or less of some other psychological process.

A key lesson of post-replication crisis psychology is that these sorts of manipulations don’t usually do much. For example, if you try to change the situation so that people feel more of certain emotions, their philosophical judgments remain pretty much unchanged, and if you try to change the situation so that people engage in more reasoning, their philosophical judgments also remain pretty much unchanged.

Within existing research, one sees a lamentable tendency to think about each of these results separately and give a completely separate explanation of each. Proceeding in this way, one might say that the former result indicates that emotions don’t impact people’s philosophical judgments… and then separately, one might say that the latter result indicates that reasoning doesn’t impact people’s philosophical judgments.

But this misses the larger picture. It sure looks like the reason why we don’t get a big effect when we try to manipulate people’s emotions isn’t due to something super specific about emotion in particular. Instead, we are getting growing evidence that this type of experimental manipulation just generally doesn’t do much.

Suppose you are thinking about what makes certain people more conservative, and you want to know whether it is a matter of some psychological state X (which might be a certain emotion, or a way of reasoning, or anything else). How do you test this hypothesis? The traditional idea was that you would run a study that lasted, say, five minutes in total, in which you temporarily increase the amount of state X and then show that this manipulation leads to a temporary increase in conservatism.

But it now seems like this whole approach just fundamentally does not work. The problem is not that we have the wrong X, or that we aren’t doing exactly the right thing to manipulate it, or anything like that. The problem seems to be that the human mind works in such a way that people’s judgments are stable across these sorts of temporary changes.

A few years ago, I wrote a paper about this topic, but that paper was mostly just about all the little details of the empirical data. I’m thinking that it might be helpful to zoom out a bit and think in a larger way about what we are learning from all of these studies. It seems like we face two different questions: one substantive, one methodological.

The substantive question is: What are we learning about the human mind from the fact that people’s judgments cannot be pushed around by these brief manipulations? I don’t know the answer to this question, but just to bring the key issue out a little more clearly, it might be helpful to consider a simple example.

It seems plausible that my dispositions to have certain emotions led to my interest in philosophy. But suppose we took a random person and, just for a single day, gave that person all the emotions that I typically have. Presumably, having these emotions for a single day would not lead the person to start philosophizing on that day (nor is it the case that if I stopped having these emotions for a single day, I would stop philosophizing for that day). If the emotions have any effect, it has to be a much more long-term one, with the philosophy I do today being shaped by the emotions I’ve had over the past twenty years.

How exactly is this to be understood? It does seem like we’re getting growing evidence that this happens, but I wouldn’t say that we already have a good understanding of how or why it happens.

The methodological question is: If this specific method does not work, how can we test claims about the causal impacts of psychological processes on judgments? Suppose we are wondering whether factor X has a causal impact on people’s judgments. One thing we can do is to check whether there is a correlation such that people who are dispositionally higher in factor X are more likely to make certain judgments. There are already lots of great studies of that form, and they have taught us a lot about the relevant correlations. But one might legitimately wonder whether this approach provides a real test of the relevant causal claims.

The traditional solution was to try to temporarily manipulate factor X and check for a temporary effect on judgments. But if that doesn’t work, what should we be doing instead?
