In the absence of a definition of service and guidelines for conducting service-centred science, psychology research cannot reach its potential to give back to participants, patients, or communities. In many basic clinical research reports, allusions to individual or societal benefit appear briefly in Discussion or Future Directions sections, reflecting perfunctory afterthoughts rather than core professional tenets. Separately, applied clinical psychology research is often characterized by aspiration–method mismatches5, whereby methodological shortcomings render stated goals to improve clinical care unachievable. For example, an underpowered study with inflated, unreliable estimated effects cannot differentiate helpful from unhelpful treatments, or clinically useful prognostic predictors from statistical noise. At best, such work produces equivocal findings with limited practical utility; at worst, it yields endorsements or dissemination of questionable treatment approaches, or spurs the misallocation of resources towards unreliable leads.
Youth psychotherapy research exemplifies how aspirations to identify evidence-based treatments are routinely misaligned with methods, and how this mismatch undermines our work’s utility. Over the past six decades, the average sample size of youth psychotherapy trials has been approximately 69 participants, giving the average trial 47.5% power to detect overall treatment effects (estimated, optimistically, at Cohen’s effect size (d) = 0.46)6. Achieving 80% power to detect such effects would require twice as many participants5. Likewise, aspirations to identify treatments for diverse populations are undermined when participants in clinical trials are overwhelmingly white and middle-income, as in much youth psychotherapy research5,6.
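The power figures above can be checked with a short calculation. The sketch below assumes a two-sided, two-sample comparison at α = 0.05 with the 69 participants split evenly across two arms, and uses a normal approximation to the noncentral t distribution; the cited 47.5% figure6 was presumably computed with exact methods, so the approximation lands slightly higher.

```python
import math

def norm_cdf(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(d, n_per_group, alpha_z=1.959964):
    """Approximate power of a two-sided, two-sample test of effect size d
    (normal approximation; alpha_z is the critical z for alpha = 0.05)."""
    delta = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    return norm_cdf(delta - alpha_z) + norm_cdf(-delta - alpha_z)

def n_per_group_for_power(d, z_alpha=1.959964, z_beta=0.841621):
    """Per-group n for 80% power at alpha = 0.05 (normal approximation)."""
    return math.ceil(2.0 * ((z_alpha + z_beta) / d) ** 2)

# Average youth psychotherapy trial: ~69 participants (~34.5 per arm), d = 0.46
print(round(two_sample_power(0.46, 34.5), 2))   # ~0.48, close to the cited 47.5%
print(n_per_group_for_power(0.46))              # 75 per arm, i.e. ~150 in total
```

The required total of roughly 150 participants is about twice the historical average of 69, consistent with the claim that adequately powered trials need twice as many participants5.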
Moreover, although community perspectives are commonly incorporated into the planning of community and counselling psychology research7, they are rarely incorporated into clinical psychology research. This decreases the odds that clinical psychology will address scientific questions that matter to those we aim to serve. Even efforts to benefit participants themselves in a given study (for example, by connecting participants with treatment, providing feedback reports, or sharing plain-language study results) are seldom documented in scientific publications, leaving the nature and depth of missed opportunities impossible to gauge.
Importantly, individual researchers are not to blame for these challenges and omissions. Clinical psychologists are not asked to provide evidence of service at any level when designing studies, applying for funding, or publishing results. Indeed, clear standards for conducting service-centred science simply do not exist, despite the profession’s goal of service–research integration.