Q & A on the Philosophical Foundations of Psychology: Session 4

Dr. Sven van de Wetering is an Associate Professor of Psychology at the University of the Fraser Valley, Canada. His research interests are in “Conservation Psychology, lay conceptions of evil, relationships between personality variables and political attitudes.” In a 4-part interview series, we explore the philosophical foundations of psychology.

Scott Douglas Jacobsen: You have an interest in ecological validity and critical thinking from a psychological perspective. Psychology requires a Swiss army knife approach to problem-solving, as you have noted in other conversations with me, which is exemplified in the number of disciplines and sub-disciplines within the field. External validity amounts to the extent to which one can extrapolate and generalise the findings of psychology. Ecological validity is one aspect of that extrapolation and generalisation. It looks at the extensions into the real world. From a psychological perspective, how can the apparent simplicity of a research finding become troublesome when taken into the real world?

Dr Sven van de Wetering: I think your phrasing captures the problem: “simplicity of a good solid psychological research finding” is a delightful phrase because it captures so succinctly what is wrong with the way many research psychologists (including me in my less reflective moments) think of their research findings. Findings in physics are often satisfyingly simple and reliable. Think of Newton shining light through a prism, Galileo dropping stuff off of towers, or Robert Boyle goofing around with a vacuum pump. In this model of science, once you find a result, you assume that the physical reality underlying the result is fairly simple. Furthermore, you assume that that physical reality will not change over time, and you feel free to draw sweeping generalisations based on the simple experiment (though it turns out Boyle was pretty cautious about doing that, an example we could probably learn from). That approach has gotten us far in physics, presumably because the assumptions of simplicity and changelessness correspond fairly well to the physical reality. A similar approach seems to be less useful in psychology, and I would argue that that is because the subject matter of psychology, human behaviour, is neither changeless nor straightforward.
To take a straightforward example, any good social psychology textbook, and most bad ones as well, will talk about the Fundamental Attribution Error (FAE), which is also called correspondence bias, a term which I much prefer. In its simplest form, FAE (correspondence bias) is the tendency for people to assume that other people’s actions tell us a lot about their inner traits, beliefs, and values while ignoring the fact that many of the influences on people’s actions are situational in nature. The thing that irritates me about the name “Fundamental Attribution Error” is the word “fundamental”, which seems to imply that the error is anchored in a core aspect of human psychological functioning, one that is universal across individuals, cultures, and situations. When this assumption is examined, it is found that the tendency fails to occur in some situations, that there are individual differences in the degree to which people fall prey to this bias, and that members of individualist cultures are much more susceptible to the bias than members of collectivist cultures. In short, many investigators of the FAE (correspondence bias) seem to assume that people’s behaviour in a small number of fairly contrived situations tells us something important about the way they behave all the time. To highlight the illogic of this, it almost looks like many of these investigators engaged in more egregious examples of the FAE than the people in their experiments. If I were more psycho-dynamically inclined, I might even accuse these researchers of projection.
As I said above, I am probably as vulnerable to this tendency as anyone else. I wonder if part of the problem is linguistic. Research psychologists often formulate their hypotheses as universal generalisations, something like “People do X.” It is certainly true that some people, some of the time, under some circumstances, do X; if they didn’t, the results of the experiment wouldn’t have come out the way they did. Researchers are aware that universalism is an assumption, but it’s not problematized as much as it probably should be. Usually, if the phenomenon is replicated with a few slight procedural variations and a couple of different populations, the assumption of universality is considered provisionally acceptable. I don’t really want to be too critical of this; the time, energy, and money necessary to really thoroughly explore the limits of the phenomena studied by psychologists are often not available. Psychologists do what they can, and perhaps are too busy and harried to really take a long, hard look at the intellectual baggage that psychology has picked up that leads to those assumptions of universality.

SJ: What research findings seem to show robust findings – highly reliable and valid – in the ‘laboratory’ but fail to produce real-world results? Those bigger research findings one may find in an introductory psychology textbook.

Dr Sven van de Wetering: I’m certainly not in a position to give a comprehensive list, but here’s one I find a little ironic. One of the cornerstones of the critical thinking course you cited above was confirmation bias, which is a cluster of biases centred around the tendency to selectively test one’s hypotheses in a way that makes it relatively easy to confirm the hypothesis one already has in mind but difficult to disconfirm that same hypothesis. Some of my best students started to look into the literature and found that the whole intellectual edifice of confirmation bias was based on only a small number of experimental paradigms. Snyder and Swann developed one of the research paradigms in question in 1976. They asked people to prepare to interview another person. Their job in that interview was to find out whether the person in question was an introvert or an extravert. They found that people often used what is called a positive test strategy; that is, if the interviewer was trying to find out if the person was an extravert, they chose a lot of questions that an extravert would tend to answer “yes” to. This has been taken to indicate confirmation bias on the part of the research participants.
What doesn’t get emphasised when most textbooks cite the above study is that the research participants did not create their interview questions from scratch. Instead, they were asked to choose some from a list. My students wondered if research participants would do the same thing if they could make up their own questions. We ran a small study on this question, and we did weakly replicate the original study; that is, people asked to find out if someone was an introvert were slightly more likely to ask questions that an introvert would say “yes” to, and people asked to find out if someone was an extravert had a non-significant tendency to ask more questions that an extravert would answer “yes” to. What we found striking, though, was that a substantial majority of the questions our participants came up with were not yes-no questions at all, but rather open-ended ones that at least had the potential to be informative regardless of whether the hypothesis was true or false. Thus, confirmation bias was, at best, a minor undercurrent in the test strategies used by most of our participants.

Jacobsen: How can those former examples become the basis for critical thinking and a better comprehension of ecological validity?

Dr Sven van de Wetering: One thing I take from these examples is that human behaviour is highly context-dependent. The issue in these examples is not that people have made a false universal generalization about human behaviour that needs to be replaced with a true universal generalization. The issue is that universal generalizations may not be the way to go in order to explain most facets of human psychological functioning. Nor do I think that we can see people as passive recipients of cultural influences or some other form of learning. Any given person does have neural hardware, an evolutionary history, a history of learning experiences, a social milieu, a set of goals, of likes, of dislikes, of behavioural predispositions, and so on. Most psychologists recognize that this is so, but their hypothesis-testing methods tend to be designed with the assumption that all these different factors operate independently of each other, without interacting. This is probably not a useful assumption to make. I also don’t know what to replace it with, because I’m not mathematician enough to know how to cope with the sort of complexity one gets if every factor interacts with every other factor. I know that some people advocate for a turn from a hypothetico-deductive psychology toward a more interpretive one, but no one has yet shown me a version of this that is disciplined enough to give investigators a fighting chance of overcoming their own biases. So I’m kind of stuck in a methodological cul-de-sac. My own tendency is to more or less stick with existing methodological precepts, but to try to be a little bit skeptical and aware that things may go badly awry. Situations matter, and they should be in the forefront of the investigator’s mind even when there is no way of actually accounting for their influence.

Jacobsen: Let us take a controversial example with the pendulum swings within educational philosophies. Some are fads, while others are substantiated. In either case, the attempt is to take findings from a relatively controlled setting, e.g. a single school’s educational environment in one community or standardized tests, and extrapolate them into improved school performance on some identifiable markers such as those found on the PISA tests, university English preparedness or – ahem – university preparedness, or even training for citizenship in one of the more amorphous claims, and so on. What educational paradigms, within this temporal and cultural quicksand, stand the test of time for general predictive success on a variety of metrics, i.e. have high general ecological validity for education and even life success?

Dr Sven van de Wetering: I confess I find this a thorny issue. Once again, culture matters. In the US, asking children to work on problems they have chosen themselves is much more motivating than asking them to work on problems chosen by their mothers. In some collectivist cultures (maybe most or even all; this hasn’t been tested a lot) the reverse is the case. This sort of thing makes me wonder how important something like child-centred education is.
One fad we probably shouldn’t get too excited about is the idea that all important learning is procedural, and that it is, therefore, unimportant to learn about content. In the area of critical thinking, it turns out that the most important single tool (if you can call it that) is lots and lots of domain-specific knowledge. Once a person has that, procedures may increase that person’s ability to use that knowledge effectively, but without the knowledge, all the procedures in the world don’t seem to do any good. Reading an article from Wikipedia doesn’t cut it; those bullshit detectors that are so important to critical thinking only develop as a result of fairly deep engagement with a body of material. That said, procedural knowledge is tremendously important; my issue is with the assumption that because knowing how is important, knowing what is unimportant.
Probably the single most important factor in education is an attitudinal one. If we think of educating our children and young adults as a sacred mission, we have a reasonable chance of success. This goes along with reasonably high social status for educators, though not necessarily money. If we think of education as something we do because it keeps kids off of the streets until they are 18, or because it enhances people’s “human capital” for the sake of the job market, then we may be in trouble. Then you risk having educators going through the motions; if your educators are not passionate about what they are doing, it is pretty much guaranteed that your students won’t be, either, and then you’ve got a real problem.

SJ: Thank you for the opportunity and your time, Sven.

Dr Sven van de Wetering: Thank you, Scott. As always, a thought-provoking exercise.

Read Q & A of Session 1 with Dr Sven van de Wetering here 
Read Q & A of Session 2 with Dr Sven van de Wetering here 
Read Q & A of Session 3 with Dr Sven van de Wetering here
