
February 15, 2020

Flawed science? Two efforts launched to improve scientific validity of psychological test evidence in court

There’s this forensic psychologist, we’ll call him Dr. Harms, who is infamous for his unorthodox approach. He scampers around the country deploying a bizarre admixture of obscure, outdated and unpublished tests that no one else has ever heard of.

Oh, and the Psychopathy Checklist (PCL-R). Dr. Harms never omits that. To him, everyone is a chillingly dangerous psychopath. Even a 30-year-old whose last crime was at age 15.

What’s most bizarre about Dr. Harms’s esoteric method is that he gets away with it. Attorneys may try to challenge him in court, but their protests usually fall flat. Judges rule that any weaknesses in his method should go to the “weight” that jurors give Dr. Harms’s opinions, rather than the admissibility of his tests.

Psychological tests hold a magical allure as objective truth. They retain their luster even while forensic science techniques previously regarded as bulletproof are undergoing unprecedented scrutiny. Based in large part on our briefcases full of tests, courts have granted psychologists unprecedented influence over an ever-increasing array of thorny issues, from future dangerousness to parental fitness to refugee trauma. Behind the scenes, meanwhile, a lucrative test-production industry is gleefully rubbing its hands all the way to the bank.

In other forensic “science” niches such as bite-mark analysis and similar types of pattern matching that have contributed to wrongful convictions, appellate attorneys have had to wage grueling, decades-long efforts to rein in shoddy practice. (See Radley Balko's The Cadaver King and the Country Dentist for more on this.) But leaders in the field of forensic psychology are grabbing the bull by the horns and inviting us to do better, proposing novel ways for us to self-police.

New report slams "junk science" psychological assessments


In one of two significant developments, a group of researchers today released evidence of systematic problems with the state of psychological test admissibility in court. The researchers' comprehensive survey found that only about two-thirds of the tools used by clinicians in forensic settings were generally accepted in the field, while even fewer -- only about four in ten -- were favorably reviewed in authoritative sources such as the Mental Measurements Yearbook.

Despite this, psychological tests are rarely challenged when they are introduced in court, Tess M.S. Neal and her colleagues found. Even when they are, the challenges fail about two-thirds of the time. Worse yet, there is little relationship between a tool’s psychometric quality and the likelihood of it being challenged.

Slick ad for one of a myriad of new psych tests.
“Some of the weakest tools tend to get a pass from the courts,” write the authors of the newly issued report, "Psychological Assessments in Legal Contexts: Are Courts Keeping 'Junk Science' Out of the Courtroom?"

The report, currently in press in the journal Psychological Science in the Public Interest, proposes that standard batteries be developed for forensic use, based on the consensus of experts in the field as to which tests are the most reliable and valid for assessing a given psycholegal issue. It further cautions against forensic deployment of newly developed tests that are being marketed by for-profit corporations before adequate research or review by independent professionals.

"Life or death" call to halt prejudicial use of psychopathy test


In a parallel development in the field, 13 prominent forensic psychologists have issued a rare public rebuke of improper use of the controversial Psychopathy Checklist (PCL-R) in court. The group is calling for a halt to the use of the PCL-R in the sentencing phase of death-penalty cases as evidence that a convicted killer will be especially dangerous if sentenced to life in prison rather than death.

As I’ve reported previously in a series of posts (here and here, for example), scores on the PCL-R swing wildly in forensic settings based on which side hired the expert. In a phenomenon known as adversarial allegiance, prosecution-retained experts produce scores in the high-psychopathy range in about half of cases, as compared with less than one out of ten cases for defense experts.

Research does not support testimony being given by prosecution experts in capital trials that PCL-R scores can accurately predict serious violence in institutional settings such as prison, according to the newly formed Group of Concerned Forensic Mental Health Professionals. And once such a claim is made in court, its prejudicial impact on jurors is hard to overcome, potentially leading to a vote for execution.

The "Statement of Concerned Experts," whose authors include prominent professionals who helped to develop and test the PCL-R, is forthcoming from the respected journal Psychology, Public Policy, and Law.

Beware the all-powerful law of unintended consequences


This scrutiny of how psychological instruments are being used in forensic practice is much needed and long overdue. Perhaps it will eventually even trickle down to our friend Dr. Harms, although I have a feeling it won't be before his retirement.

But never underestimate the law of unintended consequences.

The research group that surveyed psychological test use in the courts developed a complex, seemingly objective method to sort tests according to whether they were generally accepted in the field and/or favorably reviewed by independent researchers and test reviewers.

Ironically enough, one of the tests that they categorized as meeting both criteria – general acceptance and favorable review – was the PCL-R, the same test being targeted by the other consortium for its improper deployment and prejudicial impact in court. (Perhaps not so coincidentally, that test is a favorite of the aforementioned Dr. Harms, who likes to score it high.)

The disconnect illustrates the fact that science doesn’t exist in a vacuum. Psychopathy is a value-laden construct that owes its popularity in large part to current cultural values, which favor the individual-pathology model of criminal conduct over notions of rehabilitation and desistance from crime.

It’s certainly understandable why reformers would suggest the development of “standard batteries … based on the best clinical tools available.” The problem comes in deciding what is “best.”

Who will be privileged to make those choices (which will inevitably reify the dominant orthodoxy and its implicit assumptions)?

What alternatives will those choices exclude? And at whose expense?

And will that truly result in fairer and more scientifically defensible practice in the courtroom?

It’s exciting that forensic psychology leaders are drawing attention to the dark underbelly of psychological test deployment in forensic practice. But despite our best efforts, I fear that equitable solutions may remain thorny and elusive.

September 3, 2015

Adversarial allegiance: Frontier of forensic psychology research

A colleague recently commented on how favorably impressed he was by the open-mindedness of two other forensic examiners, who had had the courage to change their opinions in the face of new evidence. The two had initially recommended that a man be civilly committed as a sexually violent predator, but changed their minds three years later.

My colleague's admiration was short-lived. It evaporated when he realized that the experts’ change of heart had come only after they switched teams: Initially retained by the government, they were now in the employ of the defense.

"Adversarial allegiance" is the name of this well-known phenomenon in which some experts' opinions tend to drift toward the party retaining their services. This bias is insidious because it operates largely outside of conscious awareness, and can affect even ostensibly objective procedures such as the scoring and interpretation of standardized psychological tests.

Partisan bias is nothing new to legal observers, but formal research on its workings is in its infancy. Now, the researchers spearheading the exploration of this intriguing topic have put together a summary review of the empirical evidence they have developed over the course of the past decade. The review, by Daniel Murrie of the Institute of Law, Psychiatry and Public Policy at the University of Virginia and Marcus Boccaccini of Sam Houston State University, is forthcoming in the Annual Review of Law and Social Science.

Forensic psychologists’ growing reliance on structured assessment instruments gave Murrie and Boccaccini a way to systematically explore partisan bias. Because many forensic assessment tools boast excellent interrater reliability in the laboratory, the team could quantify the degradation of fidelity that occurs in real-world settings. And when scoring trends correlate systematically with which side the evaluator is testifying for, adversarial allegiance is a plausible culprit.

Daniel Murrie
Such bias has been especially pronounced with the Psychopathy Checklist-Revised, which is increasingly deployed as a weapon by prosecutors in cases involving future risk, such as capital murder sentencing hearings, juvenile transfer to adult courts, and sexually violent predator commitment trials. In a series of ground-breaking experiments, the Murrie-Boccaccini team found that scores on the PCL-R vary hugely and systematically based on whether an expert is retained by the prosecution or the defense, with the differences often exceeding what is statistically plausible based on chance.

Systematic bias was also found in the scoring of two measures designed to predict future sexual offending, the popular Static-99 and the now-defunct Minnesota Sex Offender Screening Tool Revised (MnSOST-R).

One shortcoming of the team’s initial observational research was that it couldn’t eliminate the possibility that savvy attorneys preselected experts who were predisposed toward one side or the other. To test this possibility, two years ago the team designed a devious experimental study in which they recruited forensic psychologists and psychiatrists and randomly assigned them to either a prosecution or defense legal unit. To increase validity, the experts were even paid $400 a day for their services.

Marcus Boccaccini
The findings provided proof-positive of the strength of the adversarial allegiance effect. Forensic experts assigned to the bogus prosecution unit gave higher scores on both the PCL-R and the Static-99R than did those assigned to the defense. The pattern was especially pronounced on the PCL-R, due to the subjectivity of many of its items. ("Glibness" and "superficiality," for example, cannot be objectively measured.)

The research brought further bad tidings. Even when experts assign the same score on the relatively simple Static-99R instrument, they often present these scores in such a way as to exaggerate or downplay risk, depending on which side they are on. Specifically, prosecution-retained experts are far more likely to endorse use of "high-risk" norms that significantly elevate risk.

Several somewhat complementary theories have been advanced to explain why adversarial allegiance occurs. Prominent forensic psychologist Stanley Brodsky has attributed it to the social psychological process of in-group allegiance. Forensic psychologists Tess Neal and Tom Grisso have favored a more cognitive explanation, positing heuristic biases such as the human tendency to favor confirmatory over disconfirmatory information. More cynically, others have attributed partisan bias to conscious machinations in the service of earning more money. Murrie and Boccaccini remain agnostic, saying that all of these factors could play a role, depending upon the evaluator and the situation.

One glimmer of hope is that the allegiance effect is not universal. The research team found that only some of the forensic experts they studied are swayed by which side retains them. Hopefully, the burgeoning interest in adversarial allegiance will lead to future research exploring not only the individual and situational factors that trigger bias, but also what keeps some experts from shading their opinions toward the retaining party.

Even better would be if the courts took an active interest in this problem of bias. Some Australian courts, for example, have introduced a method called "hot tubs" in which experts for all of the sides must come together and hash out their differences outside of court. 

In the meantime, watch out if someone tries to recruit you at $400 a day to come and work for a newly formed legal unit. It might be another ruse, designed to see how you hold up to adversarial pressure.

* * * * *

The article is: Adversarial Allegiance among Expert Witnesses, forthcoming from The Annual Review of Law and Social Science. To request it from the first author, click HERE


Related blog posts:

March 9, 2014

Psychologist whistleblower awarded $1 million; fired after testifying about state hospital's competency restoration program

In an unprecedented case, a civil jury has awarded $1 million in damages to a psychologist who was retaliated against after she challenged the validity of a state hospital's competency restoration methods.

Experts at the trial included Thomas Grisso and Randy Otto, prominent leaders in the field of forensic psychology who have written and taught extensively on best practices in the assessment of competency to stand trial.

After a five-week trial with dozens of witnesses, the jury found that Napa State Hospital failed to apply generally accepted professional standards for competency assessment and coerced its psychologists to find patients competent to stand trial "without regard to the psychologist's independent professional judgment, and without application of objective, standardized, normed, and reliable instruments."

Photo credit: J. L. Sousa, Napa Valley Register
Melody Samuelson, the psychologist plaintiff, ran afoul of her supervising psychologists at the Northern California hospital in 2008, when she testified for the defense at a competency hearing in a capital murder case in Contra Costa County. She had treated "Patient A" the prior year and had doubts about whether he was capable of being restored to competency, as his current treatment team claimed. Both the prosecutor and a hospital psychiatrist who testified for the state complained about Samuelson's testimony to then-Chief Psychologist James Jones, who launched an investigation that ultimately led to Samuelson's firing.

Samuelson was reinstated after a three-day hearing in 2011. An administrative law judge ruled that hospital administrators had failed to prove that Samuelson overstated her credentials during her 2008 testimony. Samuelson was not yet licensed at the time.

Samuelson subsequently filed a civil suit against the hospital, the chief psychologist, and two other supervising psychologists, claiming they engaged in a string of retaliatory actions against her even after her reinstatement. These actions included initiating a police investigation for perjury and taking action against her state license. She said she incurred the wrath of hospital administrators by repeatedly objecting to sham competency restoration practices designed to get defendants out of the hospital as quickly as possible, whether or not they were actually fit for trial.

Napa is the primary state psychiatric hospital serving Northern California, and houses defendants undergoing competency restoration treatment and those found not guilty by reason of insanity.

It has long been general knowledge that the overcrowded hospital routinely certifies criminal defendants as mentally competent with little seeming regard for whether they are truly fit to stand trial. I have evaluated many a criminal defendant shipped back to court with a formal certificate of competency restoration, whose mental condition is virtually identical to when he was sent to Napa for competency training in the first place. (Typically, such defendants now proudly recite random legal factoids that have been drilled into them -- such as "the four pleas" -- that are often irrelevant and unnecessary to their cases.)

But until Samuelson blew the whistle, there was little direct evidence from within the institutions of intentionality rather than mere bureaucratic incompetence. Samuelson alleged in her civil complaint that Chief Psychologist Jones "made clear to Samuelson that he was committed to … returning patients to court as competent to stand trial, and to minimizing the time for attaining such positive outcomes, regardless of the actual competency of individuals to stand trial."

According to Samuelson’s lawsuit, one reason that psychologists were pressured to find patients competent was to improve outcome statistics as mandated by a federal consent decree. In 2007, around the time of Samuelson’s hiring, the U.S. Attorney General's Office negotiated the consent decree mandating sweeping changes aimed at improving patient care and reducing suicides and assaults at Napa. The federal investigation had revealed widespread civil rights violations, including generic "treatment" and massive overuse of seclusion and restraints. 

Rote memorization

A longstanding criticism of the hospital's competency restoration program is that it focuses on rote memorization of simple legal terminology, ignoring the second prong of the Dusky legal standard, which requires that a defendant have the capacity to rationally assist his attorney in the conduct of his defense.

In her lawsuit, Samuelson accused the hospital of violating the standard of care for forensic evaluations and treatment by relying upon subjective assessment methods that are easily skewed. Defendant progress was measured using an unstandardized and unpublished instrument, the Revised Competency to Stand Trial Assessment Instrument, or RCAI, and a subjectively scored "mock trial" that was scripted on a case-by-case basis by poorly trained non-psychologists, the lawsuit alleged.

According to testimony at the Napa County civil trial, the hospital drilled patients on simple factual information about the legal system rather than teaching them how to reason rationally about their cases. Staff distributed a handbook outlining the factual questions and answers, posted the RCAI items at the nurse's station, and administered the RCAI repeatedly, coaching patients with the correct answers until they could pass the test.

Although forensic psychology experts Grisso and Otto were retained by opposite sides -- Grisso by the hospital and Otto by the plaintiff -- they agreed that this process falls short of the standard of practice in the field. It ignores the Constitutional requirement that, in order to be fit for trial, a criminal defendant must have a rational understanding of his own case as well as the capacity for rational decision-making. 

It has long been my observation that the hospital's program was generic and failed to address defendants' specific legal circumstances. Both Grisso, who authored one of the earliest and most widely referenced manuals for assessing competency to stand trial, now in its second edition, and Otto, co-author of The Handbook of Forensic Psychology and other seminal reference works, testified that competency evaluations must address the defendant's understanding of his or her own specific legal circumstances, sources close to the case told me.

Disclosure of test data unethical?

Another pivotal issue at trial, according to my sources, was whether Samuelson's disclosure of test data from two competency instruments she administered -- the Evaluation of Competency to Stand Trial-Revised (ECST-R) and the MacArthur Competence Assessment Tool (MacCAT-CA) -- was improper. Samuelson disclosed the data at Patient A's 2008 competency hearing, after obtaining an authorization from the patient and a court order from the judge.

The hospital peer review committee that first recommended Samuelson's firing reportedly claimed that this disclosure was unethical and a violation of the American Psychological Association's Ethics Code.

Nothing could be further from the truth. The current version of the Ethics Code contains no prohibition on this type of disclosure in legal settings. Furthermore, fairness dictates that the legal parties be allowed to view data that are being invoked to decide a defendant's fate, so as to be able to independently analyze their accuracy and legitimacy.

The jury levied $890,000 in damages against the hospital, $50,000 personally against Jones, described in the lawsuit as "the ringleader" of the campaign against Samuelson, and $30,000 each against two other supervising psychologists -- Deborah White and Nami Kim -- who allegedly conspired with Jones. Although punitive damages were not awarded, the jury found that the three psychologists acted intentionally and with "malice, oppression or fraud" toward Samuelson.

The state has until the end of next month to appeal the verdict, according to reporter Jon Ortiz of the Sacramento Bee, the only media outlet to cover the verdict so far.

Hat tip: Gretchen White

* * * * *

The Sacramento Bee report on the verdict is HERE. Dr. Samuelson’s civil complaint is HERE; the jury’s verdicts are HERE.

. . . And, speaking of psychiatric care -- I highly recommend this incredible story of the one-of-a-kind town of Geel, Belgium. (Hat tip: Ken Pope) 

UPDATE: On Oct. 28, 2016, California's First District Court of Appeal denied an appeal by the state hospital, upholding the jury's verdict except for one portion of the monetary damages. In its detailed opinion, the appellate court fleshes out the rights of psychologist whistleblowers who come to believe that assessments are being conducted in a potentially unlawful manner within an institutional setting. One of the more fascinating issues addressed in both the trial and the appeal was the principle that institutional failure to properly tailor competency restoration training and assessment to the Dusky legal standard -- which mandates that an accused have the capacity to rationally assist his or her attorney -- constitutes a violation of the U.S. Constitution. "If, as plaintiff's counsel argued, [Napa State Hospital] personnel were certifying to the trial court that patients were competent to stand trial without properly assessing their competency, a patient's constitutional due process rights could potentially be implicated," the appellate court noted in approving Samuelson's right to have argued this point in the closing arguments of the trial.


(c) Copyright Karen Franklin - All rights reserved

February 16, 2014

Dutch forensic psychology blog interviews this blogger

Forensic psychology bloggers are few and far between, so I was delighted to make the acquaintance of Harald Merckelbach, a psychology professor at Maastricht University in the Netherlands who co-hosts -- you guessed it -- a "Forensische Psychologie Blog." Maastricht University, in case you are not familiar with it, is an internationally oriented school that -- together with Portsmouth in the UK and Gothenburg in Sweden -- hosts a three-year Ph.D. program in legal psychology funded by the European Union that is open to excellent candidates from the USA and Canada (check it out HERE).

Dr. Merckelbach interviewed me for his blog. In case you aren't fluent in Dutch, I thought I would post the English version of the interview, "Van Journalist Naar Forensisch Psycholoog: Interview met Karen Franklin":

* * * * *

Dr. Merckelbach: Can you give some background statistics on your forensic psychology blog? 

Dr. Franklin: Thanks for the opportunity to give you some background on the blog. When I started the blog seven years ago, it was just out of curiosity, dipping my toe into online media. I never imagined it would grow to its current size and scope. Now, almost a thousand posts later, the blog and my mirror blog at Psychology Today (“Witness”) have gotten about 700,000 hits, and the subscriber base just keeps growing. But more than the quantity of subscribers and readers, I have been gratified by the quality. Subscribers cross professional disciplines and include forensic practitioners, attorneys, professors, researchers, criminologists, journalists, students and public policy advocates. The majority are from English-speaking countries including the United States, Canada, England and Australia. But subscribers also hail from dozens of other countries, from Saudi Arabia and Turkey to Scotland and Lithuania. Not to mention the Netherlands, of course.

Dr. Merckelbach:  You were trained as a journalist and legal reporter before you entered the forensic psychology scene. In your post “What’s it take to become a forensic psychologist?” you say that forensic psychologists should have excellent writing skills. Did your career as a journalist help you in that regard? Do you think that forensic psychology programs should include courses on journalism? 

Dr. Franklin: My education and training in journalism has definitely been a big asset. (And it is undoubtedly what spurred me to start the blog, as once writing gets in your blood, it’s hard to stop.) Journalism school teaches writing as a craft, and working in the field -- as a daily newspaper reporter -- forces a certain efficiency in writing. In my graduate school teaching, I have definitely noticed that many students do not realize how critical writing precision is to success as a forensic psychologist. Only a small portion of forensic cases result in expert witness testimony. But almost all involve a written report. So our reputations rest largely on our written product. I don’t think psychology programs need to include courses on journalism, but I would certainly favor a lot more focused attention to students’ report-writing skills. I try to teach my students to edit their work carefully, and to take the time to produce multiple drafts, rather than thinking that they are finished after they have typed out a first draft. Writing is hard work, and requires concentrated practice.

That post on forensic psychology as a career is my most popular blog post, by the way. Posted back in 2007, it still gets multiple hits every day, attesting to the popularity of this field.

Psychology Professor Harald Merckelbach
Dr. Merckelbach:  It is impressive to see on your site this listing of highly diverse topics that you wrote about: 35 on psychological testing, 81 on expert witnesses, 60 on wrongful convictions, 27 on malingering and so on. What is the topic that keeps you awake most? 

Dr. Franklin: That’s a great question. When I first started blogging, I didn’t have a specific focus. I didn’t know whether to cover the field broadly or focus in on a few topics more narrowly. I wasn’t sure whether to do straight reporting or critical commentary. One beauty of blogging, it turns out, is that you can do both, like being a news reporter who also writes a weekly opinion column. But it took me a while to find my voice.

Looking around the blogosphere, I was especially influenced by Vaughan Bell, who hosts a superb neuroscience blog called Mind Hacks, and Scott Henson, a fellow ex-journalist who writes about Texas justice at Grits for Breakfast. Both of them are skillful at blending facts and analysis. They are also far more prolific than I will ever be.

Gradually, I did find my own voice. I realized that there are plenty of academic journals supplying research findings. And there are plenty of news stories on any given topic, easily accessible through a quick Internet search. And running a blog all by myself, in my spare time, I could never hope to cover everything. So the best way I could be of service to the professional community was to provide a critical perspective on major issues and developments in the field, things that captured my attention and that I felt passionate about.

I can’t say that any topic keeps me awake at night. But an overarching concern of my blog is the ways that bureaucracies of social control deploy forensic psychology to provide a scientific veneer for injustice. So, for example, here in the United States we see the prejudicial label of “psychopathy” being used as a scientific rationale for sending juveniles to adult prisons for life. And what is most alarming is when forensic psychologists within the institutions of containment rationalize such practices as serving the greater good. This theme of moral disengagement, which grew out of my blog writings, became the topic of my keynote speech to the national association of forensic psychologists in Australia a few years ago. It’s a dangerously slippery slope. We end up with the American Psychological Association deciding not to punish psychologist John Leso for participating in the torture of prisoners at Guantanamo, a blatant human rights violation.

Dr. Merckelbach: Apropos malingering: some years ago, you wrote an article on 22-year-old Mr. Chavez who was sentenced to 25 years of prison because he had walked around with a weapon that he occasionally fired, while exhibiting bizarre behavior. The state hospital experts testified that he was a malingerer, but you – as a defense retained expert – discovered that they had based their opinions on erroneous scoring and interpretation of a malingering test (the SIRS). A disturbing story. Do you think that this type of problem occurs on a wide scale? 

Dr. Franklin: Yes, I do believe this is more widespread than is generally recognized. Whenever you have concentrations of people with no social power and no voice, such as in prisons and psychiatric hospitals, you are going to have abuses. That is what Piper Kerman illustrated so well in her bestselling memoir, Orange is the New Black, about her year in a women’s prison. In the Chavez case, it was a novice intern working under lax supervision. Professionals in government hospitals and prisons tend to get institutionalized, and some of them stop seeing their subjects as worthy human beings. This gets back to the issue of our moral and ethical obligation not to collude in injustice. I’m reminded of the case I just read about in which a man spent most of the past 40 years locked up in a psychiatric hospital for the theft of a $20 necklace. The poor guy, Franklin Frye, was 70 years old before someone finally noticed. I mean, how does that happen? Why wasn’t anyone paying attention?

Dr. Merckelbach: I like your blogs about biases, for example the one about authorship bias, i.e. the phenomenon that test designers report more hallelujah statistics about their risk assessment tools than independent researchers. Makes one think of researchers who are involved in the Prozac business. It leaves one with a somewhat gloomy impression of our discipline. Do you, at moments, say to yourself: "What a field, let’s get back to journalism?"

Dr. Franklin: I got out of journalism when I saw the writing on the wall, just as corporate monopolization began to get a stranglehold on the industry. The newspaper that I worked for was bought by a chain that was only interested in profits. And that has now happened throughout the newspaper industry. Rupert Murdoch’s empire now stretches around the globe, and Amazon’s billionaire owner just bought the Washington Post. That latter purchase was especially iconic for me, because I entered journalism school during the heyday of muckraking journalism, when Washington Post reporters Bob Woodward and Carl Bernstein were being heralded as role models for exposing the Watergate scandal and bringing down a corrupt administration. So, no, I haven’t regretted leaving journalism. After all, I can always blog!

Dr. Merckelbach:  What about writing a book in which you bring together all these fine blogs?

Dr. Franklin:  I’ve thought about it. I just have to find the time.

Thanks again for asking me to do this interview, and also for your own fine blog. I’ve been amazed at the dearth of forensic psychology blogs, so I was excited to discover yours. I hope others will join in. Blogging can be time consuming, but it’s also rewarding.

Dr. Merckelbach:  Thank you very much for this interview, Dr Franklin!

January 23, 2014

California conference to highlight juvenile treatment

Michael Caldwell, co-founder of the Mendota Juvenile Treatment Center in Wisconsin, will share his Center’s innovative approach to treating hard-core juvenile offenders at this year’s Forensic Mental Health Association of California (FMHAC) conference.

Caldwell, whose research on juvenile risk assessment has been highlighted on this blog, says the Mendota approach has been proven to reduce violent offending among the extreme end of intractable juvenile delinquents who absorb a disproportionate amount of rehabilitation resources and account for a large proportion of violent crimes.

His two workshops are part of a special juvenile track that will also feature a session on introducing the practice of mindfulness to incarcerated juveniles.

The juvenile track is one of five special tracks at this year’s FMHAC conference, coming up March 19 in beautiful Monterey, California. The other tracks are clinical/assessment, legal, psychiatric and, of course, the omnipresent sex offender track.

More details and registration information can be found HERE. The FMHAC's website is HERE.

December 8, 2013

The psychic perils of forensic practice

John Bradford burst into tears. Hitting the road for the four-hour trek back to his home in Ontario, Canada, he could not stop crying and shaking.

An internationally renowned forensic psychiatrist, Bradford had been working around-the-clock on the high-profile case of Canadian Air Force Colonel Russell Williams, a decorated military pilot and commander of the country's largest military airbase who had spent his spare time torturing and murdering women.

Bradford's breakdown took him by surprise. Like other forensic practitioners, he had spent decades sitting across the table from rapists, murderers and sexual sadists. He was adept at emotionally distancing himself from their twisted psyches and wretched deeds. But the gruesome video of two young women screaming and begging for their lives (unsuccessfully, as he knew) proved a tipping point.

Descending into a very dark place, he was eventually diagnosed with posttraumatic stress disorder. He underwent lengthy therapy and drug treatment. Although he has now returned to his forensic practice, he is more cautious about the types of cases he will take on.

The profile by reporter Chris Cobb in the Ottawa Citizen, documenting Bradford's three-year struggle with vicarious traumatization, came as a complete shock to me. It was just three years ago that I served with Bradford on a team debating three controversial paraphilias being proposed for the DSM-5. Bradford, an advisor to the DSM-IV, was past president of the American Academy of Psychiatry and Law (AAPL), which hosted the debate. He holds numerous other accolades. He is a professor at the University of Ottawa, founder and clinical director of the Sexual Behaviors Clinic in Ottawa, and a Distinguished Fellow of the American Psychiatric Association, earning its prestigious Isaac Ray Award.

 Williams' victims, Jessica Lloyd and Marie-France Comeau
If he could fall apart, I wondered, who couldn’t?

Bradford described for the reporter how his mental state gradually morphed from calm and collected to irritable and angry, as he worked long hours on the Williams case. At one point, being cross-examined by a defense attorney in another case, he got so irritated by the attorney’s repetitiousness that he almost blurted out, "Why don’t you shut the f-- up, you a--hole?"

It was then that he realized he was losing control.

"I knew there was something wrong but there was a lot of denial on my part," the 66-year-old Bradford told Cobb. "And that’s why it didn’t work when I first went into treatment. I was pessimistic and depressed, but if you’re a psychiatrist and a tough forensic guy you think you can blow anything off, right? And that’s what I did."

I was struck by the courage it must have taken Bradford to reveal his vulnerabilities to the world. I hope that his personal story can help stimulate conversation on the emotional dangers of this work. If Bradford can crumble, so can anyone, no matter how experienced, competent, or externally cool. Being part of a culture in which weakness is taboo, and can even be professional suicide, makes honest disclosure and help-seeking all the more difficult.

Confronting vicarious traumatization

Vicarious traumatization (also known as compassion fatigue, secondary trauma, or just plain burnout) has received some attention in professional circles in the past few years. There are books, journal articles, professional trainings, even websites.

The DSM-5 criteria for Posttraumatic Stress Disorder (PTSD) reflect this growing awareness. Criterion A, which lists the stressors that make one eligible for the diagnosis, now includes "experiencing repeated or extreme exposure to aversive details of the traumatic event(s)." To keep those who view disasters on TV from being diagnosed with PTSD, as happened after the 9/11 terrorist attack, the text clarifies that this applies to such people as "first responders collecting human remains or police officers repeatedly exposed to details of child abuse," and NOT to those exposed through the media, "unless this exposure is work related."

As this criterion implies, vicarious traumatization can strike not just forensic evaluators, but anyone who spends too much time rubbing up against trauma -- nurses, ambulance operators, child welfare workers, police, lawyers, judges, even jurors.

Studies on its incidence among forensic professionals are mixed. An unpublished survey by graduate student Julie Brovko and forensic psychologist William Foote of the University of New Mexico found low levels of vicarious traumatization among a convenience sample of 65 forensic psychologists. However, consistent with Bradford's case, more time in the field was correlated with more problems.

In contrast, a 2010 survey of 52 Australian clinicians providing treatment to convicted sex offenders found no evidence of compassion fatigue or burnout. The majority reported low stress and high levels of job satisfaction working with this challenging population. Ruth Hatcher and Sarah Noakes found that supervision and external social support helped clinicians avoid burnout.

One limitation of both of these studies is that they surveyed only those who remained active in the field. Anecdotal accounts suggest that some individuals leave forensic practice due to the emotional toll, which can produce feelings of estrangement, numbness, and hypervigilance.

An opposite danger?

Reflecting on Bradford's breakdown, I thought about the opposite tendency. Is it resilience that keeps other professionals from crumbling under the weight of witnessing constant perversion and misery? Or, might some be repressing their feelings in a manner that is not so healthy?

After all, to not be disturbed by graphic cruelty or stark oppression is in itself disturbing. Such psychic numbing whittles away at one's humanity.

In the memoir 12 Years a Slave (which I highly recommend), Solomon Northup reflected on how the cruelty of slavery fostered casual violence not only toward slaves but also among white slaveholders, who thought nothing of stabbing or shooting each other at the slightest provocation -- the Southern "culture of honor" that remains with us today:
"Daily witnesses of human suffering -- listening to the agonizing screeches of the slave -- beholding him writhing beneath the merciless lash … it cannot otherwise be expected, than that they should become brutified and reckless of human life."
I've seen that phenomenon first-hand in institutions. Brutality breeds brutality, along with an indifference to brutality among institutionalized professionals that is equally troubling.

Mitigation?

Perhaps the first step in addressing the problem is for professionals to openly discuss the risk of professional burnout, vicarious traumatization, and psychic numbing. It’s very useful to have support and consultation groups where one can let one's guard down and be more vulnerable, debriefing after horrific case work with trusted colleagues.

Mindful meditation is so en vogue these days that I hesitate to join the bandwagon, but I do think it too can help reduce stress and emotional meltdowns.

Balance is also essential. Rest, relaxation, hobbies, exercise. It's not coincidental that Bradford broke down while working around-the-clock on a high-profile case. 

I'd be interested in others' thoughts on the emotional hazards of our work, and strategies or techniques for staying healthy.

Hat tip: Jeff Singer


Related resources:
  • Brovko and Foote (2011), Vicarious Traumatization: Are forensic psychologists vulnerable to trauma exposure? (Presentation) 
  • Culver, McKinney and Paradise (2011), Mental health professionals’ experiences of vicarious traumatization in Post-hurricane Katrina New Orleans, Journal of Loss and Trauma 16, 33-42 
  • Harrison and Westwood (2009), Preventing vicarious traumatization of mental health therapists: Identifying protective practices, Psychotherapy Theory, Research, Practice, Training 46 (2), 203-219 
  • Hatcher (2010), Working with sex offenders: The impact on Australian treatment providers, Psychology Crime and Law 16 (1-2) 
  • Robertson, Davies and Nettleingham (2009), Vicarious traumatisation as a consequence of jury service, The Howard Journal 48 (1) 
  • Tabor (2011), Vicarious traumatization: Concept analysis, Journal of Forensic Nursing 7, 203-208 
  • Taylor and Furlonger, A Review of Vicarious Traumatisation and Supervision Among Australian Telephone and Online Counsellors, Australian Journal of Guidance and Counselling 21 (2), 225-235

September 4, 2013

'Authorship bias' plays role in research on risk assessment tools, study finds

Reported predictive validity higher in studies by an instrument's designers than by independent researchers

The use of actuarial risk assessment instruments to predict violence is becoming more and more central to forensic psychology practice. And clinicians and courts rely on published data to establish that the tools live up to their claims of accurately separating high-risk from low-risk offenders.

But as it turns out, the predictive validity of risk assessment instruments such as the Static-99 and the VRAG depends in part on the researcher's connection to the instrument in question.

Publication bias in pharmaceutical research has been well documented

Published studies authored by tool designers reported predictive validity findings around two times higher than investigations by independent researchers, according to a systematic meta-analysis that included 30,165 participants in 104 samples from 83 independent studies.

Conflicts of interest shrouded

Compounding the problem, in not a single case did instrument designers openly report this potential conflict of interest, even when a journal's policies mandated such disclosure.

As the study authors point out, an instrument’s designers have a vested interest in their procedure working well. Financial profits from manuals, coding sheets and training sessions depend in part on the perceived accuracy of a risk assessment tool. Indirectly, developers of successful instruments can be hired as expert witnesses, attract research funding, and achieve professional recognition and career advancement.

These potential rewards may make tool designers more reluctant to publish studies in which their instrument performs poorly. This "file drawer problem," well established in other scientific fields, has led to a call for researchers to publicly register intended studies in advance, before their outcomes are known.
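The selective-publication mechanism behind the file drawer problem is easy to demonstrate with a toy simulation. The Python sketch below uses invented numbers (nothing here comes from the meta-analysis itself): it draws thousands of hypothetical studies around a fixed true effect, then "publishes" only those whose observed results clear an arbitrary threshold. The published mean lands well above the truth.

```python
import random
import statistics

# Toy simulation of the "file drawer problem." All numbers are invented
# for illustration; they are not estimates from any real literature.

random.seed(1)

TRUE_EFFECT = 0.50   # the instrument's actual standardized effect size
NOISE_SD = 0.25      # study-to-study sampling noise
N_STUDIES = 10_000

# Each simulated study observes the true effect plus sampling noise.
observed = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# Suppose authors only submit (and journals only print) studies whose
# observed effect clears an "impressive" threshold.
PUBLICATION_THRESHOLD = 0.50
published = [d for d in observed if d >= PUBLICATION_THRESHOLD]

print(f"Mean effect, all studies:       {statistics.mean(observed):.2f}")
print(f"Mean effect, published studies: {statistics.mean(published):.2f}")
```

Even though every simulated study is honest, the published subset overstates the true effect simply because the weaker half of the results never sees print -- which is exactly why advance registration of intended studies helps.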

The researchers found no evidence that the authorship effect was due to higher methodological rigor in studies carried out by instrument designers, such as better inter-rater reliability or more standardized training of instrument raters.

"The credibility of future research findings may be questioned in the absence of measures to tackle these issues," the authors warn. "To promote transparency in future research, tool authors and translators should routinely report their potential conflict of interest when publishing research investigating the predictive validity of their tool."

The meta-analysis examined all published and unpublished research on the nine most commonly used risk assessment tools over a 45-year period:
  • Historical, Clinical, Risk Management-20 (HCR-20)
  • Level of Service Inventory-Revised (LSI-R)
  • Psychopathy Checklist-Revised (PCL-R)
  • Spousal Assault Risk Assessment (SARA)
  • Structured Assessment of Violence Risk in Youth (SAVRY)
  • Sex Offender Risk Appraisal Guide (SORAG)
  • Static-99
  • Sexual Violence Risk-20 (SVR-20)
  • Violence Risk Appraisal Guide (VRAG)

Although the researchers were not able to break down so-called "authorship bias" by instrument, the effect appeared more pronounced with actuarial instruments than with instruments that used structured professional judgment, such as the HCR-20. The majority of the samples in the study involved actuarial instruments. The three most common instruments studied were the Static-99 and VRAG, both actuarials, and the PCL-R, a structured professional judgment measure of psychopathy that has been criticized for its vulnerability to partisan allegiance and other subjective examiner effects.

This is the latest important contribution by the hard-working team of Jay Singh of Molde University College in Norway and the Department of Justice in Switzerland; the late Martin Grann of the Centre for Violence Prevention at the Karolinska Institute in Stockholm, Sweden; and Seena Fazel of Oxford University.

A goal was to settle once and for all a dispute over whether the authorship bias effect is real. The effect was first reported in 2008 by the team of Blair, Marcus and Boccaccini, in regard to the Static-99, VRAG and SORAG instruments. Two years later, the co-authors of two of those instruments, the VRAG and SORAG, fired back a rebuttal disputing the allegiance effect finding. However, Singh and colleagues say the statistic they relied upon, the area under the receiver operating characteristic curve (AUC), may not have been up to the task, and they "provided no statistical tests to support their conclusions."

Prominent researcher Martin Grann dead at 44

Sadly, this will be the last contribution to the violence risk field by team member Martin Grann, who has just passed away at the young age of 44. His death is a tragedy for the field. Writing in the legal publication Dagens Juridik, editor Stefan Wahlberg noted Grann's "brilliant intellect" and "genuine humanism and curiosity":
Martin Grann came in the last decade to be one of the most influential voices in both academic circles and in the public debate on matters of forensic psychiatry, risk and hazard assessments of criminals and ... treatment within the prison system. His very broad knowledge in these areas ranged from the law on one hand to clinical therapies at the individual level on the other -- and everything in between. This week, he would also debut as a novelist with the book "The Nightingale."

The article, Authorship Bias in Violence Risk Assessment? A Systematic Review and Meta-Analysis, is freely available online via PloS ONE (HERE).

August 8, 2013

Cluelessness, complacency and the great unknown

The case of the self-blind psychologist

An experienced forensic psychologist -- let's call him Dr. Short -- applies for a job as a forensic evaluator. He is rejected based on his written work sample. He files a formal protest, insisting that the report was fine.

As you all know, forensic reports should be (in the words of an excellent trainer I once had) both "fact-based" and "fact-limited." In other words, we must (a) carefully explain the data that support our opinion, and (b) exclude irrelevant information, especially that which is intrusive or prejudicial.[1]

Dr. Short's report was neither fact-based nor fact-limited. The adduced evidence did not support his forensic opinions, and the report was littered with extraneous material insinuating bad moral character. We learned of the subject's unorthodox sexual tastes and former gang associations, neither of which were relevant to the very limited forensic issue at hand. Using ethnic terms to describe the subject's hair, Dr. Short inadvertently revealed more about his biases than about the subject.

Obviously, based on his vehement insistence that his report was fine, Dr. Short was blind to these deficiencies. Which got me to thinking: Since biases are largely unconscious, can people be made aware of them? Can blind spots be overcome? How can we come to understand what we do not know?

"The anosognosia of everyday life"

Pondering these questions in connection with one of my seminars at Bond University, I stumbled across some intriguing philosophical discourse on the various types of unknowns, and how to remedy them:

The simplest type of unknown has been labeled a "known unknown." This is something we don't know, and know we don't know. Let’s say you learn that someone you are evaluating in a sanity proceeding had ingested an obscure substance just before the crime. If you don’t know the substance’s potential effects, the solution is straightforward (assuming you are motivated): Do the research.

In some cases, we know the question, but no answer exists. For example, we know that six out of ten individuals who score as high risk on actuarial instruments will not reoffend violently, due to the base rates of violence. What we don’t know is how to distinguish the true from false positives. So that’s a known unknown with an unknown answer. But if we are at least aware of the issue, we can explain the field’s empirical knowledge gap in our reports.
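The arithmetic behind that "six out of ten" figure is worth making concrete. The Python sketch below uses Bayes' rule with invented numbers (the base rate, sensitivity and specificity are illustrative assumptions, not published properties of any real instrument) to show how a modest base rate of violence drives down the positive predictive value of a high-risk classification.

```python
# Hedged sketch: why a low base rate means most people flagged "high risk"
# will not in fact reoffend. All figures are illustrative assumptions.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """Fraction of high-risk calls that are true positives, via Bayes' rule."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assume 20% of evaluees reoffend violently (base rate), the tool flags 75%
# of eventual reoffenders (sensitivity), and it correctly clears 72% of
# non-reoffenders (specificity).
ppv = positive_predictive_value(base_rate=0.20, sensitivity=0.75, specificity=0.72)
print(f"Share of 'high risk' scorers who actually reoffend: {ppv:.0%}")
```

With these assumed figures the positive predictive value comes out around 40 percent: roughly six of every ten people flagged as high risk would not go on to reoffend violently, which is the known unknown described above. Lowering the base rate drives the figure down further.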

However, unknown unknowns [2] are an entirely different kettle of fish. These are things we don't know and don't realize that we don't know. We don't know that there even IS a question that needs to be asked. Without being able to frame the question, we obviously cannot figure out an answer. Put simply: We are clueless.

Unknown unknowns are a major problem in forensic psychology, with its dearth of racial, ethnic and cultural diversity among researchers and practitioners.[3] Vast experiential divides lead evaluators to impose their own moral standards without even realizing they are doing so. In condemning his subject's sexual promiscuity and drug use, for example, Dr. Short made false and universalizing assumptions that revealed ignorance of lifestyles other than his own. (This reminded me of an African American prisoner’s dilemma interacting with white guards in remote, rural prisons; because the farming communities from which these guards were recruited are devoid of mainstream African Americans, the guards tended to assume that all Black people had the characteristics of Black convicts.)

"The anosognosia of everyday life" is the rather gloomy term coined by David Dunning of Cornell University, who specializes in decision-making processes, to describe such routine ignorance.[4] Dunning is a great believer in ignorance as a driving force that shapes our lives in ways in which we will never know. 
"Put simply, people tend to do what they know and fail to do that which they have no conception of. In that way, ignorance profoundly channels the course we take in life."

Apropos of Dr. Short's report, Dunning notes that cluelessness on the part of a so-called expert does not imply dishonesty or a lack of caring:
"People can be clueless in a million different ways, even though they are largely trying to get things right in an honest way. Deficits in knowledge, or in information the world is giving them, leads people toward false beliefs and holes in their expertise."

Laziness a major culprit

Unknown unknowns are not unfathomable mysteries that can never be solved. They are caused by laziness and complacency, which block the process of discovery as surely as a dam holds back water. It’s what German cognitive scientist Dietrich Dorner was talking about when he wrote, in The Logic of Failure, that “to the ignorant, the world looks simple.”[5] We’ve all known people who are incompetent, but whose very incompetence prevents them from recognizing their own incompetence. There’s even an unwieldy term for this condition (named after the researchers who studied it, naturally): Just call it the Dunning-Kruger Effect. Quoting Dunning yet again: 
"Unknown unknown solutions haunt the mediocre without their knowledge. The average detective does not realize the clues he or she neglects. The mediocre doctor is not aware of the diagnostic possibilities or treatments never considered. The run-of-the-mill lawyer fails to recognize the winning legal argument that is out there. People fail to reach their potential as professionals, lovers, parents and people simply because they are not aware of the possible."

Before leaving the topic of the great unknowns, I must mention one final type of unknown, an especially pernicious one in forensic work. Unknown knowns, which undoubtedly beset Dr. Short, are unconscious beliefs and prejudices that guide one’s perceptions and interactions. Perhaps the 19th century humorist Josh Billings captured the quality of these unknowns the best when he wryly observed:
"It ain't what you don't know that gets you into trouble. It's what you think you know that just ain't so." [6]

Tackling the great unknown

So, is there any hope for our wayward Dr. Short, oblivious to his biases and blind spots? The answer, as in many facets of life, is: It depends. One of the most elementary lessons one learns as a novice psychologist is that people don’t change unless they are motivated to change. (Hence, a whole area of psychology devoted to enhancing motivation to change, through so-called “motivational interviewing.”) Effective change is rarely compelled. If Dr. Short is open to feedback and correction, this experience could be a wake-up call. On the other hand, his very protest speaks to an impaired capacity for self-reflection, a brittle ego defense that may be difficult to penetrate.

Either way, Dr. Short's dilemma can serve as a lesson for others, including both students and practitioners. The key to opening the locks on the dam of knowledge is readily available: It is simply a genuine desire to learn, and a willingness to confront life’s complexities. To those with a thirst for knowledge, the world is complex, and that complexity is what makes it so fascinating.

Here, in a nutshell, is the advice I gave to the graduate students at Bond during last week’s lecture that touched on the paradoxes of the unknowns:
    If you haven't faced it, it's not easy to imagine this life
  • To reduce the unknown unknowns, seek broad knowledge. Seek out people from other walks of life, who may not share your views or experiences. Travel outside your comfort zone, not just geographically but into unfamiliar local cultures as well. These experiences can open one’s eyes to difference. Travel vicariously by reading widely, especially OUTSIDE of the insular, micro-focused and ahistorical field of psychology.
  • Study up on cognitive biases and how they work. Especially, understand confirmatory bias, and build in hypothesis testing (including the testing of alternate hypotheses) as a routine practice. (Excellent resources on cognitive biases include Nate Silver's The Signal and the Noise and Carol Tavris and Elliot Aronson's Mistakes Were Made (but not by me), which brilliantly and unforgettably explains how two people can start out much the same but diverge dramatically so that they ultimately stare at each other as strangers across a great chasm.)
  • Create formal feedback loops so that you learn how cases you were involved in were resolved, how your work was received, and whether your opinions proved accurate. 
  • Don't assume you know the answer. Ask questions. And then ask more questions. 
  • Stay humble. Arrogance, or overconfidence in one’s wisdom, can short-circuit understanding as surely as TSA security checkpoints destroy the fun of flying. (That rather strained metaphor is a clue that this post was penned from 40,000 feet in the air.)
  • Finally, and most critically: When you look across the table, try to see a fellow human being, someone who perhaps lost their way in life's dark wood, rather than an alien or a monster. Before you judge someone, try to walk a mile in his shoes.

Ultimately, Dr. Short's dilemma flows not only from complacency but from an essential deficit in empathy, an inability to truly see -- and understand -- the fellow human being sitting across from him in that forensic interview room.

* * * * *

Notes
  1. This is discussed in both the American Psychological Association's Ethics Code (Standard 4.04, Minimizing Intrusions on Privacy, states that psychologists should include in written reports "only information germane to the purpose for which the communication is made") as well as the Specialty Guidelines for Forensic Psychology (see, for example, 10.01, Focus on Legally Relevant Factors).

  2. The term "unknown unknown" is sometimes credited to US Secretary of Defense Donald Rumsfeld, who used it to explain why the United States went to war with Iraq over mythical Weapons of Mass Destruction (WMDs). Although the phrase gained currency at that time, others had already used it.

  3. Heilbrun, K., & Brooks, S. (2010). Forensic psychology and forensic science: A proposed agenda for the next decade. Psychology, Public Policy, and Law, 16, 219-253. 

  4. For further conversation on this topic, see: Morris, E. (2010, June 20), The anosognosic's dilemma: Something's wrong but you'll never know what it is, New York Times blog.  Also see: Dunning, D. (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself (Essays in Social Psychology), Psychology Press, p. 14-15; Dunning, D. & Kruger, J. (1999), Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments, Journal of Personality and Social Psychology 77 (6), 1121-1134. 

  5. I cribbed the Dorner quote from Dr. Wayne Petherick, Associate Professor of Criminology and coordinator of the criminology program at Bond University. 

  6. Some attribute this quote not to Josh Billings but to Mark Twain, who was kicking around at the same time. Others claim it was neither. For now, the true origin of the quote is just one more of life's unknowns. 

April 28, 2013

Forensic practice: A no-compassion zone?

Murder trial prompts professional dialogue

Do empathy or compassion have a place in a forensic evaluation? Or should an evaluator turn off all feelings in order to remain neutral and unbiased?

That question is at the center of a controversy in the murder trial of Jodi Arias that I blogged about last week, with the prosecutor accusing a defense-retained psychologist of unethical conduct for giving a self-help book to the defendant.

Under heavy-artillery fire, Richard Samuels* denied prosecutor Juan Martinez's accusation of "having feelings for" the defendant, who killed her ex-boyfriend and is claiming self defense. Samuels testified he gave Arias a book because he is a "compassionate person" and thought the book would help her, but that his objectivity was never compromised. The exchange prompted a juror to ask Samuels:  "Do you believe absolutely that it is possible to remain purely unbiased in an evaluation once compassion creeps in?"

Martinez called a rebuttal witness to testify that gift-giving is a boundary violation and unethical. Newly minted psychologist Janeen DeMarte, appearing in court for only the third time, testified that a forensic evaluator should never feel compassion for a defendant, as such feelings compromise integrity (a position she modified under cross-examination).

Given these starkly divergent positions, I was curious what other forensic psychologists think. So, I initiated a conversation with a group of seasoned professionals, publishing two brief video excerpts of the relevant testimony on YouTube (click on the images below to watch the excerpts) to guide the conversation.

View the Richard Samuels excerpt (18 minutes) by clicking on the above image.

View the Janeen DeMarte excerpt (10 minutes) by clicking on the image.

Gift-giving: A bad idea

Contrary to the prosecutor’s insistence, our Code of Ethics does not prohibit gift-giving. Nor do the Forensic Psychology Specialty Guidelines (which are aspirational rather than binding). It's an ethical gray area.** As with much involving ethics, it all depends. But still, the consensus was that giving a book to a defendant is a mistake. Whether or not it affects one's objectivity, it gives the appearance of potential bias. And in forensic psychology, maintaining credibility is essential. "Gift giving," as one colleague put it, "gives the appearance of either a personal or therapeutic relationship with the defendant."

Samuels's error lay in failing to think through his action, and recognize how his blurring of boundaries could damage his credibility and thus undermine his testimony. Ultimately, by discrediting his own work, he potentially caused harm to the very client whom he was attempting to help.

The nature of the book itself further undermined the expert's credibility in this case. As another colleague pointed out, what good is a self-help book, Your Erroneous Zones: Step-by-Step Advice for Escaping the Trap of Negative Thinking and Taking Control of Your Life, going to do a woman who is in jail and facing the death penalty for stabbing and shooting someone to death?

On the other hand, although gift-giving is a slippery slope, there are times when only a curmudgeon would not give. For example, if you are conducting a lengthy evaluation and you decide to buy yourself a drink or a snack from the vending machines, do you refuse the subject a soda, for fear it would undermine objectivity or lend an appearance of bias? How rude!

Empathy: It's only human 

The general consensus was that, without some measure of empathy, one cannot hope to understand the subject or the situation. One is left with "an equally problematic perspective that dehumanizes and decontextualizes the evaluation," in the words of another psychologist.

"There is an orientation toward forensic work that is strikingly cold," noted yet another colleague. "I have seen some highly experienced forensic examiners who use their 'objectivity' with icy precision and thereby fail to establish the kind of rapport necessary to obtain a complete account of the offense or other important information…. The absence of empathy can be just as biasing as too much of it."

Or, as Jerome Miller wrote, in one of my favorite quotes from the forensic trenches, "It takes unusual arrogance to dismiss a fellow human being’s lost journey as irrelevant."

In other words, without empathy, any claim to objectivity is illusory, because there is no true understanding. And that, too, is dangerous. DeMarte's extreme position thus errs in the opposite direction from Samuels', in advocating for forensic psychologists to be automaton-like technocrats.

Indeed, the main danger of empathy as discussed by leaders in our field, such as Gary Melton and colleagues in Psychological Evaluations for the Courts, is not that it biases the evaluator, but that it potentially seduces vulnerable subjects into revealing too much, thus unwittingly increasing their legal jeopardy. For this reason, Daniel Shuman, in a minority position in the field, argues that using clinical techniques to enhance empathy is unethical because this can -- wittingly or unwittingly -- cause harm to evaluatees. 

After all, our training as therapists makes us good at projecting understanding, and at least the illusion of compassion. Our subjects often let down their guard and experience the encounter as therapeutic, even when we clearly inform them that we are not there to help them in any way, and even when we remain vigilant to control our expressions of empathy.

"The best forensic evaluations bring all the clinical skills learned to promote self-disclosure and emotional emitting (empathy, reflective comments, attention to feelings, suspension of moral judgment, etc.)," a colleague commented. "We know how to get people to talk about things that they might otherwise wish to hide from others and themselves. Most defendants feel understood or at least feel they have been heard at the conclusion of an assessment."

Behaviors, not emotions, can be unethical

A third general consensus emerging from our professional dialogue was that feelings themselves are "almost never unethical." Which is fortunate, as we can never know for certain what another person is thinking or feeling. Rather, it is the behavior that follows that can be problematic; we must remain alert to what feelings a subject is evoking in us, lest they lead us astray. Sticking close to the data, and being transparent in our formulations, can keep us from behaving incompetently or problematically in response to our feelings, whether of empathy and compassion or -- at least as problematic -- dislike or revulsion. 

Bottom line: Do not check your empathy at the jailhouse door. You need it in order to do your job. And also to remain human.

Thanks to all of the many eloquent and insightful colleagues who contributed to this conversation.


NOTES:

*Samuels has taken down his website (svpexpertwitness.com), so I am providing a link to an old cached version.  

**Psychology ethicist Ofer Zur has written more on gift-giving in psychotherapy, with links to the gift-giving provisions of various professional ethics codes.