It has become increasingly popular for authors to blur the line between what the research shows definitively and what the author claims it shows. Nowhere is this more apparent than in one of the Free Press’s most controversial books, The Bell Curve, in which Charles Murray and the late Richard J. Herrnstein (1994) purport that the research supports a position highly controversial within the scientific community. This position, which uses research in intelligence testing to support a social and political agenda, is old news. The techniques used to validate it, however, have become more subtle and powerful, since the sacred term “research” is brought into the mix as somehow irrefutable proof. One might call this “arguing from the research.” Except it goes beyond that: it is not just any research the authors use to support their position, but carefully selected studies from the field. Not only do they tend throughout their book to dismiss, ignore, or gloss over the results of contradictory studies, but they also sometimes draw conclusions from the supposedly “supportive” studies that the original researchers themselves never drew. This type of argument might be called “arguing from carefully selected research.”
It should then come as no great surprise that the Free Press published another controversial book before The Bell Curve that uses many of the same techniques. Robyn M. Dawes’s (1994) House of Cards: Psychology and Psychotherapy Built on Myth carefully selects its research to “prove” its main contentions while ignoring research that disputes them. Dawes’s conclusions are that the field of professional psychology offers little or no additional benefit to individuals in need of help with emotional difficulties, that it ignores its own research, and that its professional organization supports its closed system of psychotherapy. He argues in the end that the best therapists are those in the field who have the same amount and type of empathy as their clients, and those who practice according to what the research dictates is “effective treatment.” But Dawes admits that nobody yet knows how to efficiently gauge the type and amount of “empathy” beforehand, so this suggestion is virtually useless. And as to “effective treatment,” Dawes ignores the many problems from which most outcome studies in psychology suffer, some of which will be reviewed below. His crowning conclusions, which address our society’s obsession with self-esteem, the profession’s paternalism, and where to go from here, are simply his opinions couched in typical terms, with a study thrown in here and there to add an aura of validity.
Dawes unfortunately plays fast and loose with many of the facts throughout his book, sometimes gleefully contradicting himself. For instance, he states on page 26 that “The general wisdom based on actual scientific studies is that mild or moderate depression is best treated by a combination of behavioral, cognitive, and drug approaches…[emphasis in the original],” a claim cited from a newspaper article, no less, as well as a magazine article. It seems disingenuous for Dawes to claim that the scientific research supports this position when he can do no better than cite “nontechnical reviews” of this literature from 1989 and 1990, especially since the thrust of his many arguments rests on the research showing congruent and generally supportive data.
Of course, had Dawes done his homework, he would have come across two meta-analyses on this topic that “prove” exactly the opposite of what he claims, namely that the research does not support the combined use of psychotherapy and drug approaches in the treatment of depression as being any more effective than either approach used alone (Robinson, Berman, & Neimeyer, 1990; Wexler & Cicchetti, 1992). This sloppiness typifies the approach Dawes takes throughout House of Cards: using carefully selected research to support his position while ignoring contradictory studies. As Dawes likes to repeat time and again, he is not arguing “from a vacuum”; he is, instead, “arguing from carefully selected research.”
If we cannot believe Dawes on this account, what else might we call into question? How about his unfaltering ability to ignore his own advice? On page 71, Dawes states, “Critics of my argument may well be able to drag out a single study, or even several, that appear to contradict my conclusions. As I pointed out earlier, however, the generality of my conclusions is dependent on multiple studies…”
Despite his support of meta-analysis, Dawes often falls back on one or two studies in a large field to make his point, simply ignoring the rest. For example, in his attack on biofeedback, he cites only Roberts’s (1985) review of the field as it then stood, failing to present the dozens of studies conducted in the past decade that support biofeedback’s effectiveness and use in specific treatments. Dawes implies (by simple omission) that no such research exists, when in fact a large body of such literature does (see, for instance, Gauthier, Cote, & French, 1994; Flor & Birbaumer, 1993; Holroyd & Penzien, 1990). Dawes might call this “arguing by omission,” for his failure to examine the research in biofeedback since 1985. Not only are there supportive studies for traditional biofeedback methods, but new research on the effectiveness of EEG feedback for specific disorders has also been published (see, for example, Lubar, 1991).
Dawes’s inability to keep up with the latest research is nowhere clearer than in his discussion of the Rorschach Inkblot Test, which he claims is “bogus” and “shoddy.” But what we see in Dawes’s analysis of this test is what we have seen in most of his analyses: a reliance on his own training, personal experiences, and opinions on the topic, rather than an objective and detailed examination of the facts and research.
While he spends a great deal of time discussing the history and misuse of the Rorschach, he devotes only 10 sentences to John E. Exner’s system, currently the most widely used, four of which relate to or come from a direct quote of someone else (pages 149-150). Exner’s system for scoring and interpreting the Rorschach has been in use for over 20 years, so why is so little time devoted to discussing its strengths and weaknesses? One reasonable answer is that it does not support Dawes’s argument that the Rorschach is a “shoddy” test, given the enormous normative base, interrater reliability figures, and other statistical properties that validate Exner’s system as a useful clinical assessment measure (Exner, 1986). This is, of course, in keeping with his “arguing by omission” style. And lest you think Dawes above hypocrisy, he also manages to throw in an anecdotal experience to help “prove” his contention (page 153), the same type of “proof” that he elsewhere calls unscientific and meaningless, used by researchers when they want to support an untenable or weak position.
There is no question that the Rorschach Inkblot Test remains somewhat controversial in psychological assessment circles. But Dawes is once again “arguing from carefully selected research” when he fails to cite the evidence that does support the use of the Rorschach (see, for instance, Parker, Hanson, & Hunsley, 1988). A more even-handed approach would perhaps make the book less controversial than its eye-catching chapter titles, such as “A License to Use Shoddy Tests,” suggest. Even-handedness and fairness, though, do not appear to be part of Dawes’s agenda for writing this book.
None of this is to say that all of Dawes’s main contentions are wrong. There is little disagreement, for instance, among practicing professionals that we are still largely unsure what the “curative” factors in psychotherapy are, or that psychologists have yet to demonstrate a definitive edge in producing positive outcomes for clients. In fact, under managed care in America, little differentiation is now made between the types of therapists considered “competent” or “effective.” Often such decisions are based simply on these companies’ need to contain health care costs (e.g., social workers charge less, therefore managed care patients are referred to social workers rather than to more expensive psychologists, since little difference in efficacy is seen). The American Psychological Association has all too often, unfortunately, taken the tack of promoting the field at the expense of educating the public. Dawes makes sound arguments on these topics, in terms of the literature supporting the proposition that there is little difference in outcome efficacy whether clients see a professional or a paraprofessional.
And yet, House of Cards is not an objective portrayal of the wide variability found within the research. Even in offering alternative explanations for why practically all studies to date have found virtually no differences between professionals and paraprofessionals, Dawes is strangely muted. He misses an important 1993 study by Shedler, Mayman, and Manis suggesting that the self-report measures used in many outcome studies comparing therapy by experienced and inexperienced therapists may be biased. Instead of measuring “mental health,” as many of these self-report measures purport to do (even, the authors argue, the MMPI), they may be measuring other individual factors, such as defensiveness, which can distort the results of any study relying on such “objective” measures. And, as the authors further note, these same scales may measure different things in different people, a finding borne out by their research (Shedler, Mayman, & Manis, 1993). Certainly this type of research has important implications for the validity of studies that use such measures.
While Dawes goes on to cite Lambert, Shapiro, and Bergin’s (1986) chapter on this well-worn topic, he glosses over their detailed analysis of the problems the research studies in this area suffer from, choosing instead to dismiss them out of hand. This is not the kind of reasoned, rational argument one might expect from someone who holds the idea (and ideal) of research above all. He also misses (once again!) the contradictory evidence, “arguing from carefully selected research.” For instance, Crits-Christoph, Baranackie, Kurcias, and Beck (1991) found in a meta-analysis (the type of study Dawes holds up as definitive) evidence supporting the importance of greater therapist experience for stable, positive outcomes.
After poking so many holes in so many different areas of House of Cards, though, one begins to wonder what the author is up to. Given the controversial nature of the same publisher’s other books, it is not hard to figure out what Dawes is attempting. Using carefully selected research, omitting contradictory studies, and ignoring more recent research, the author appears to build a cohesive and impenetrable argument for the inevitable, and seemingly logical, conclusions he draws. One is that individuals should seek out an empathetic therapist. (Despite the apparent importance of this “characteristic” in therapists and therapy, I could find only one research study from the past five years that examined this variable. So much for research guiding the way.) Of course, since we do not know who is more “empathetic” beforehand, this is a useless piece of advice.
He also suggests that individuals would do well to see psychologists who read a scientific journal, as though there were a correlation between reading research studies and the likelihood of incorporating and implementing such results directly into one’s therapy (Dawes offers no research for this theory).
Another implicit conclusion is that professional psychology programs have somehow caused a decline in psychology’s ability to advance sound scientific theories and research on psychotherapy (again, he offers nothing to support this contention other than “arguing from authority,” basing the “decline” of psychological research on sheer numbers). He ignores the market demand for psychologists, choosing instead to focus on the politics of the split between scientists and practitioners in psychology. Dawes seems to dismiss the idea that both could get along equally well (along with dozens of other subgroupings of psychologists) in the American Psychological Association.
Another of his goals is to lend further support to the False Memory Syndrome (FMS) Foundation, a scientific-sounding name for an organization whose only goal to date has been to disseminate information on a self-defined “syndrome” of “false” memories (as opposed to “true” ones?) of childhood sexual abuse. This is a neat integration of propaganda in the guise of science. Dawes denigrates, patronizes, and demeans practitioners who are neutral on this issue, stating on page 173 that “A truly well-trained practitioner would know…,” as though anybody who disagrees with his position must, by his definition, be “untrue” and “badly trained.” He uses the same “argument by authority” that he elsewhere details in the book, not only in the statement above but in quoting the Foundation’s membership numbers (“It is a large organization, therefore we must have something going for us”).
Dawes is also obviously not in a comfortable position here. As a member of the FMS Foundation’s scientific advisory board, he must retain his credibility as a scientist while not denying the existence of memories of actual childhood sexual abuse when it does occur. He also knows, as a scientist, that while we have many theories about how the brain handles memory, no one can say for certain whether a given memory is “real” or not. An objective scientist would likely recognize this contradiction and attempt to stay out of the political and social fray. Others, despite the lack of the rigorous scientific proof Dawes has been arguing for throughout the entire book, seem to welcome the unscientific nature of the debate.
Dawes’s final three chapters are devoted to his own opinions about “what is going on” in psychology and mental health, as though he has some unique insight into today’s problems unrealized heretofore by the thousands of other professionals in the field. While Dawes has some interesting things to say in these chapters, much of it is spoiled by his evident bias against practitioners and his reliance on selective research. By this point in the book, the reader is left to wonder how much of the whole picture Dawes is really showing us.
House of Cards attempts to examine many fascinating issues and beliefs in the profession of psychology, but its author is overzealous in his rush to prove his own views on these matters. Many of the conclusions he draws throughout the book are based on arguing against what he perceives to be the standard taught in psychology, which is actually a more traditional and psychodynamic point of view. Most of today’s graduate students in psychology are taught to appreciate the wide variety of theories and research found in the field and science of psychology, and to form informed opinions on such topics, many of which Dawes presents. This has never been more true, nor more clear, than after reading this book.
- Crits-Christoph, P., Baranackie, K., Kurcias, J.S., & Beck, A.T. (1991). Meta-analysis of therapist effects in psychotherapy outcome studies. Psychotherapy Research, 1, 81-91.
- Dawes, R.M. (1994). House of cards: Psychology and psychotherapy built on myth. New York: Free Press.
- Exner, J.E. (1986). The Rorschach: A comprehensive system. Volume 1: Basic foundations (2nd edition). New York: Wiley & Sons.
- Flor, H., & Birbaumer, N. (1993). Comparison of the efficacy of electromyographic biofeedback, cognitive-behavioral therapy, and conservative medical interventions in the treatment of chronic musculoskeletal pain. Journal of Consulting and Clinical Psychology, 61, 653-658.
- Gauthier, J., Cote, G., & French, D. (1994). The role of home practice in the thermal biofeedback treatment of migraine headache. Journal of Consulting and Clinical Psychology, 62, 180-184.
- Herrnstein, R.J., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press.
- Holroyd, K.A., & Penzien, D.B. (1990). Pharmacological versus non-pharmacological prophylaxis of recurrent migraine headache: A meta-analytic review of clinical trials. Pain, 42, 1-13.
- Lambert, M.J., Shapiro, D.A., & Bergin, A.E. (1986). The effectiveness of psychotherapy. In S.L. Garfield & A.E. Bergin (Eds.), Handbook of psychotherapy and behavior change (3rd edition). New York: Wiley & Sons.
- Lubar, J.F. (1991). Discourse on the development of EEG diagnostics and biofeedback for attention-deficit/hyperactivity disorders. Biofeedback and Self-Regulation, 16, 200-225.
- Parker, K.C., Hanson, R.K., & Hunsley, J. (1988). MMPI, Rorschach, and WAIS: A meta-analytic comparison of reliability, stability, and validity. Psychological Bulletin, 103, 367-373.
- Roberts, A.H. (1985). Biofeedback: Research, training, and clinical roles. American Psychologist, 40, 938-941.
- Robinson, L.A., Berman, J.S., & Neimeyer, R.A. (1990). Psychotherapy for the treatment of depression: A comprehensive review of controlled outcome research. Psychological Bulletin, 108, 30-49.
- Shedler, J., Mayman, M., & Manis, M. (1993). The illusion of mental health. American Psychologist, 48, 1117-1131.
- Wexler, B.E., & Cicchetti, D.V. (1992). The outpatient treatment of depression: Implications of outcome research for clinical practice. Journal of Nervous and Mental Diseases, 180, 277-286.