
In This Article: Implementation Science and Practice

  • Introduction
  • Why We Need Implementation Research and Practice
  • Setting the Agenda
  • General Resources
  • Resources for Providers and Organizations
  • Theoretical and Conceptual Frameworks
  • Barriers and Facilitators to Dissemination and Implementation
  • Guideline Implementation Strategies
  • Training Strategies
  • Multilevel and Multifaceted Strategies
  • Measurement
  • Research Design

Related Articles

  • Evidence-based Social Work Practice
  • Evidence-based Social Work Practice: Finding Evidence
  • Experimental and Quasi-Experimental Designs
  • Impact of Emerging Technology in Social Work Practice
  • Meta-analysis
  • Mixed Methods Research
  • Social Intervention Research
  • Social Work Research Methods
  • Systematic Review Methods
  • Technology Adoption in Social Work Education
  • Technology for Social Work Interventions
  • Transdisciplinary Science

Implementation Science and Practice

By Enola K. Proctor, Byron Powell, and Hollee McGinnis

LAST REVIEWED: 29 November 2011 | LAST MODIFIED: 29 November 2011 | DOI: 10.1093/obo/9780195389678-0012

Evidence-based practice (EBP) has been increasingly advocated and is gaining wider acceptance in social work. This signals a continuing reaffirmation of social work’s commitment to generating and maintaining a scientific knowledge base in general and, more specifically, to an expectation that social work be informed by, and based on, evidence from scientific research. Yet actual implementation of evidence-based programs, services, and practices remains a formidable challenge. In most areas of health and human services, evidence-based care comprises only a small fraction of all the care that is actually delivered. We have little, if any, data on the proportion of actual social work services that are evidence based. In response to the challenge of moving evidence-based practice from the research environment to real-world care, a growing literature addresses the science and practice of translation, and specifically the implementation of evidence-based practices. Translational science is a broad field that pertains to the progression from basic biological research to its application for public health benefit. Implementation research is a subset of translational research, although some literatures use these terms interchangeably. The literature on implementation reflects an early stage of science, for we have much to learn about the factors that enhance implementation, and even more to learn about actual strategies for implementing evidence-based practices and about the methodology for studying implementation processes. The literature discussed in this entry is drawn from a range of disciplines, because social work journals have published very few articles about implementation and because implementation itself is an inherently transdisciplinary topic.

Introductory Works

Implementation is defined as the use of strategies to adopt and integrate evidence-based practices, programs, and treatments and to change service delivery within specific settings. Research on implementation addresses how well new interventions are accepted, fitted within real-world service situations, and sustained. Rarely can interventions that are developed and tested within the context of efficacy and effectiveness research be transferred to different settings without deliberate effort and/or adaptation. Therefore, implementation research is needed to examine and understand the process, methods, and outcomes of implementation. Such research will help inform an evidence base for implementation practice, or the use of effective approaches to introduce and sustain evidence-based programs and services. Although implementation of evidence-based care is a concern for all of the health, human, and social services, most of the articles in this entry address the implementation of evidence-based psychosocial practices in order to provide readers with three important types of information: (1) why we need implementation research and practice, (2) systematic reviews and broad overviews of implementation research, and (3) articles that set the agenda for implementation science and practice.

Program Implementation

  • Rosalyn M. Bertram, University of Missouri-Kansas City
  • https://doi.org/10.1093/acrefore/9780199975839.013.949
  • Published online: 02 January 2014
  • This version: 21 December 2022

For more than two decades, academic professional degree programs, as well as behavioral health, education, public health, and social services, have grappled with how to integrate the emerging science of implementation and evidence-based programs, policies, or practices into their organizations and systems. During these initial decades of the 21st century, peer-reviewed journals such as Implementation Science, Implementation Research and Practice, and Global Implementation Research and Applications were established. Concurrently, special issues or sections of other journals are adding to our knowledge of policy and program implementation, as well as of academic program preparation and the organizational development of a workforce versed in the implementation of effective, sustainable programs or practices. A recent study of this explosion of peer-reviewed outlets was published in Frontiers in Public Health.

Organizations such as the Society for Implementation Research Collaboration and the Global Implementation Society offer international conferences as venues for interdisciplinary exploration and development of the science and practice of sound, sustainable implementation of effective policies, programs, or practices. Registries of evidence-based or supported programs are provided by Blueprints for Healthy Youth Development, the California Evidence-Based Clearinghouse for Child Welfare, the National Child Traumatic Stress Network, the Office of Juvenile Justice and Delinquency Prevention, the Title IV-E Prevention Services Clearinghouse, and others.

Guidance on program or practice selection and implementation can be found through websites maintained by the Active Implementation Research Network, the Child and Family Evidence-Based Practice Consortium, the Frank Porter Graham Child Development Institute’s National Implementation Research Network, the University of Maryland School of Social Work’s Institute for Innovation and Implementation, and many other organizations.

Keywords: evidence-based; program implementation; social work curricula

Updated in this version

Article and references have been substantially updated to reflect current scholarship.

Introduction

Discourse on evidence-based practice emerged from the field of medicine in the 1990s. Described as the explicit application of scientific evidence to guide decision-making in patient care, evidence-based practice is well understood and accepted in medical education and services (Sackett et al., 1996; Straus et al., 2011) but sparked debates in social work (Bertram, Charnin, et al., 2015; Bertram & Kerns, 2019; Howard et al., 2003; Rubin & Parrish, 2007). Concurrent with these debates, research examining how to ensure effective service delivery birthed the field of implementation science, which Eccles and Mittman (2006) defined as the study of means to promote the systematic uptake of research findings to ensure the quality and effectiveness of service.

Prior to 2005, there was little consensus about the infrastructure necessary to achieve and sustain practice fidelity and improved client outcomes. In that year, however, the National Implementation Research Network (NIRN) published a seminal review of over three decades of empirical studies that identified what contributes to improved products and services in corporate business, agribusiness, hospital administration, medical and nursing services, social services, education, and other disciplines. From these sources, NIRN identified three overarching and integrated frameworks: intervention components, implementation drivers, and implementation stages (Fixsen et al., 2005). These frameworks provided a basis to synthesize discussions about program development, fidelity, and outcomes, as well as implementation across federal, state, and local initiatives (Bertram, Blase, et al., 2011). The NIRN study emerged nearly concurrently with the launch of the now well-respected journal Implementation Science. This synergy of efforts was well represented at the initial biennial Global Implementation Conference, which in 2011 attracted over 800 participants to Washington, DC, from every continent but Antarctica (Bertram, Blase, & Fixsen, 2015).

This article discusses how programs can actively think through and apply these practical implementation frameworks. Although taught in some graduate social work courses, these frameworks can and should be infused throughout social work curricula (Bertram, Collins, & Elsen, 2020; Bertram & Kerns, 2019; Bertram, King, et al., 2014; Bertram et al., 2017).

Intervention Components, Program Implementation, and Social Work Curricula

Matching service to population needs is an emphasis in social work, especially in discussions about evidence-based practice (Howard et al., 2003; Mullen et al., 2008). Client characteristics and concerns, as well as research that identifies effective means to address them, should inform the selection of interventions in a process of evidence-based practice (Gambrill & Gibbs, 2009; Gibbs & Gambrill, 2002). While that process addresses individual practitioner assessment, planning, and interventions with clients, target population characteristics and concerns should also inform an organization in its search for effective programs and practices. In that exploration, the organization should consider the essential elements and activities, theory base, and theory of change of any potential program or practice (Bertram & Kerns, 2019). This framework of intervention components provides a sound foundation for purposeful selection of a new practice (Bertram, Blase, & Fixsen, 2015; Fixsen et al., 2005). The framework includes (a) target population characteristics (behavioral, cultural, socioeconomic, and other factors that suggest a good match with the practice model); (b) model definition (who should be engaged in what elements, activities, and phases of service delivery); (c) theory base(s) supporting those elements and activities; and (d) theory of change (how those elements and activities create improved outcomes for the target population).
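To make this four-part framework concrete, the sketch below represents intervention components as a structured record that an exploration team might complete for a candidate practice. It is a minimal illustration only: the class, its fields, and the multisystemic therapy (MST) entries are assumptions paraphrased from this article's description, not from any published instrument.

```python
from dataclasses import dataclass

@dataclass
class InterventionComponents:
    """Hypothetical record of the four intervention components an
    exploration team might complete for a candidate practice model."""
    target_population: list[str]  # characteristics suggesting a good match
    model_definition: str         # who is engaged in what elements, activities, and phases
    theory_bases: list[str]       # theories supporting those elements and activities
    theory_of_change: str         # how those elements and activities improve outcomes

    def is_complete(self) -> bool:
        # A practice with any component left undefined is not ready for an adoption decision.
        return all([self.target_population, self.model_definition,
                    self.theory_bases, self.theory_of_change])

# Rough entries for MST, paraphrased from this article's description of the model.
mst = InterventionComponents(
    target_population=["youth with behaviors of concern",
                       "family, peer, school, and community involvement"],
    model_definition="clinicians engage the youth, family, peers, school, and community",
    theory_bases=["ecological systems theory"],
    theory_of_change="prosocial ecological interventions target the factors shaping youth behavior",
)
assert mst.is_complete()
```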

At the heart of implementation science is the explicit understanding that exploration and selection of an evidence-based or promising practice is simply the first step toward effective implementation. Service organizations must adjust infrastructure and resources to support and sustain effective delivery of the program or practice (Bertram, Blase, & Fixsen, 2015; Bertram & Kerns, 2019; Bertram, Suter, et al., 2011; Fixsen et al., 2005; Glisson, 2007; Mullen et al., 2008). Without these adjustments, implementation of even well-tested practice models may lack fidelity and prove ineffective, inefficient, and unsustainable (Henggeler et al., 1999).

Unfortunately, knowledge of implementation science and frameworks and the ability to critically examine, choose, and implement an evidence-based program or practice are not yet consistent products of graduate academic programs (Barwick, 2011; Bertram, Charnin, et al., 2015; Bertram, King, et al., 2014; Bertram et al., 2017). Exploration and selection of evidence-based or evidence-informed practices requires social workers to be able to find and critically examine literature about those practices (Aarons & Sawitzky, 2006; Bellamy et al., 2008; Howard et al., 2003; Manuel et al., 2009). However, students often enter graduate studies with a primary desire to provide counseling in private practice and are not interested in seeking data to inform their practice or to evaluate the programs in which they are eventually employed (D’Aprix et al., 2004; Green et al., 2001).

Social work curricula may not sufficiently challenge these perspectives. Research courses in master of social work (MSW) programs often teach single-subject case studies (Hardcastle & Bisman, 2003; Rubin et al., 2010) rather than conceptual frameworks bridging theory, practice, policy, research, and evaluation. Instead of seeking evidence-informed literature about a target population to compare with the elements, activities, theory of change, and outcomes of a promising or evidence-based practice, social workers often rely upon overview texts from their practice or human behavior courses, or they seek guidance from peers (Bertram, King, et al., 2014; Howard et al., 2003; Smith et al., 2007).

This phenomenon is not limited to graduates of social work programs. Supervisors and administrators (n = 589) of agencies serving youth and families in Canada and the United States were asked how their master’s-level practitioners with degrees in social work, psychology, or counseling learn evidence-based practice (Barwick, 2011). Most of the responding supervisors and administrators (73%) believed that evidence-based practice was important but noted that the necessary research and analytical skills, including the appraisal and use of empirical literature, were developed in the work setting rather than in graduate studies.

Critics of evidence-based practice believe that specification of the elements, activities, and phases of a service model, together with measures of fidelity, constrains the creativity they regard as the basis for effective, individualized service (Addis et al., 1999). Roberts and Yeager (2004) suggest that evidence-based practices are viewed with skepticism by some academics who misperceive the emphasis on empirical testing and use of well-defined models as reductionist.

Model Definition and Theory Base(s)

The rationale for engaging participants in specified elements, activities, or phases of a program may rest upon multiple theory bases. When this is so, it is important that they complement or are congruent with each other. Who is engaged, the focus and process of assessment, and the design of interventions should focus through and be congruent with a practice’s theory base and theory of change, and should be a good match to client context and characteristics (Bertram, Blase, & Fixsen, 2015). For example, multisystemic therapy (MST) clearly and explicitly embraces ecological systems theory (Bronfenbrenner, 1979) as the unifying theory base in its theory of change. This focuses and supports a search for the multiple factors in the youth, family, peers, school, and community that shape youth behavior. Prosocial ecological interventions target these factors to eliminate the behaviors of concern. This clarity enables MST purveyors to train clinicians and supervisors efficiently and effectively, using specific case data to evaluate model fidelity and inform staff coaching and development (Henggeler et al., 2009).

However, all too often programs or practitioners assert that they use an eclectic approach based upon each client’s needs. For example, a review of evaluations of program implementation at 34 MSW student field placement sites in Kansas City identified that staff often suggested that a systems theory construct such as “person-in-environment” shaped assessments and interventions, while also asserting that constructs from individual psychodynamic theory, such as projection and transference, were additional bases for assessment and intervention activities. Invariably, when theory bases were incongruent or unclear, the practice model was not well defined or supported (Bertram, King, et al., 2014).

Target Population Characteristics, Model Definition, and Theory of Change

Careful consideration of the target population includes age, gender, ethnicity and race, socioeconomic and cultural factors, behaviors of concern, and multisystem engagement. These factors and organization resources should shape the selection or rejection of a practice model. Based upon population characteristics and program resources, there should be explicit reasons for choosing to provide or not provide a practice model (Bertram, Blase, & Fixsen, 2015; Fixsen et al., 2005). For example, individual psychotherapy is not a productive practice with gang-affiliating youth. Their aggressive or substance-using behaviors are shaped by interactions between and within the community, family, school, and youth peer groups. Removing such youth from prosocial peer interactions and placing them in restrictive program settings with similar antisocial peers tends to reinforce antisocial behavior (Henggeler et al., 2009).

How do program elements and activities contribute to desired improvements in client context or behaviors? How does the focus of assessment and the design and delivery of interventions diminish or eliminate factors contributing to those behaviors of concern? If implemented with fidelity, how do these processes create change? Every practice model has a theory of change. Careful consideration of the theory of change should guide administrators and communities before they seek to secure or commit funds and resources (Bertram, Blase, et al., 2011; Bertram & Kerns, 2019). Describing a program or practice model’s elements, activities, and theory of change should be an assignment in social work practice classes. This will diminish students’ tendency to study techniques in an eclectic approach to practice. When practitioners or programs use only select elements or activities of a proven practice model, they ignore the fact that the research demonstrating its effectiveness focused on delivery with fidelity of all the essential elements and activities (Bertram & Kerns, 2019; Bertram, King, et al., 2014).

Implementation Drivers

Implementation drivers are the essential components of organization infrastructure that support effective, sustainable programs (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Blase et al., 2012). Competency drivers support development of practitioner competence and confidence through model-pertinent staff selection, training, coaching, and performance (fidelity) assessment. Organization drivers establish the administrative, funding, policy, and procedural environments that ensure competency drivers are consistent, integrated, accessible, and effective. They also establish and support continuous quality monitoring and improvement feedback loops while attending to client outcomes (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Blase et al., 2012; Fixsen et al., 2009). Depending upon circumstances, adjusting competency and organization drivers requires different types of leadership. Leadership drivers discriminate between complex and technical implementation challenges in order to apply appropriate leadership strategies and expertise (Bertram, Blase, & Fixsen, 2015; Bertram & Kerns, 2019; Heifetz & Laurie, 1997).

These drivers must be thoughtfully repurposed: by thinking through the intervention components, an organization can adjust its implementation drivers so that they function in an integrated and compensatory manner to support effective service delivery with fidelity. Organized in this way, weaknesses or limitations in one driver can be addressed by the others (Bertram, Blase, & Fixsen, 2015; Bertram & Kerns, 2019). For example, increasing the frequency of model-pertinent, data-informed coaching may compensate for limited training funds or opportunities. When implementation drivers are carefully considered and adjusted to support a practice model, organizational culture and climate will gradually be reshaped (Bertram, Schaffer, & Charnin, 2014; Fixsen et al., 2009; Glisson, 2007; Kimber et al., 2012).

Figure 1. Implementation drivers for improved outcomes.

Competency Drivers

Competency drivers promote practitioner competence and confidence so that high-fidelity service delivery produces improved client outcomes. By considering staff selection, training, coaching, and performance (fidelity) assessment in light of the practice model’s intervention components, competency drivers can function in an integrated and compensatory manner with each other and with the other implementation drivers (Bertram, Blase, & Fixsen, 2015; Bertram & Kerns, 2019).

Staff Selection

Staff selection is often not discussed and is seldom evaluated in the literature (Fixsen et al., 2005). For example, a review of over two decades of literature addressing the wraparound model, focused through NIRN implementation frameworks, found no publications regarding staff selection criteria or processes (Bertram, Suter, et al., 2011). In a review of evaluations of program implementation at 34 service sites in or near Kansas City, the most common criterion used by programs in selecting staff was educational background and/or licensure. Only a few sites sought staff with experience, knowledge, or aptitude for engaging the target population, and most of these were psychiatric settings (Bertram, King, et al., 2014). While it may be necessary to select licensed staff to meet insurance or funding requirements, it is critically important that staff selection criteria also seek model-pertinent or target-population-specific knowledge, skills, and aptitude (Bertram, Blase, & Fixsen, 2015; Fixsen et al., 2009).

Some model-pertinent attributes may not be easily developed through training or coaching and therefore must be part of predetermined hiring or selection criteria. For example, compassion, patience, and comfort with the cognitive, verbal, or behavioral challenges of a developmentally disabled client might be prerequisites for staff selection at an independent living placement or for a work setting serving this population. Comfort with diverse and conflicting professional perspectives might be a criterion for a team facilitation role in multidisciplinary responses to domestic violence or child abuse (Bertram, Blase, et al., 2011; Fixsen et al., 2009). Finally, staff selection criteria and processes should always evaluate a candidate’s willingness and ability to work with and apply coaching (Bertram, Blase, & Fixsen, 2015).

Effective and sustainable implementation of any program requires behavior change in service providers, their supervisors, and administrators. When practitioners, supervisors, and other staff are carefully selected, training and coaching can more efficiently drive and enhance this behavior change. Training should develop knowledge of population characteristics and context, as well as the rationale for applying a specific program model. Program participants, elements, activities, phases, theory base, and theory of change should be understood throughout the organization. Finally, effective training should provide opportunities to practice model-pertinent skills and activities while receiving constructive feedback in a safe environment (Bertram, Blase, & Fixsen, 2015; Blase et al., 2012). Implementation outcomes related to this competency driver are measurable. Through careful consideration of intervention components, an organization can evaluate pre- and post-training changes in staff knowledge and skills. These data provide baseline information for subsequent individualized coaching toward further staff development. Integrated examination of such data with fidelity performance assessments can then guide administrative evaluation of the training and coaching drivers (Bertram & Kerns, 2019; Bertram, Schaffer, & Charnin, 2014; Bertram, Suter, et al., 2011; Kimber et al., 2012).
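The pre/post comparison described above is straightforward to operationalize. Below is a minimal, hypothetical sketch (scores, thresholds, and names are invented, not drawn from the cited studies) of how an organization might flag practitioners whose training results should shape an individualized coaching plan:

```python
# Hypothetical pre- and post-training knowledge scores (0-100) per practitioner.
pre_scores = {"staff_a": 55, "staff_b": 70, "staff_c": 62}
post_scores = {"staff_a": 85, "staff_b": 74, "staff_c": 90}

MIN_GAIN = 10  # invented benchmark: minimum acceptable pre-to-post gain
MIN_POST = 80  # invented benchmark: minimum post-training score

def flag_for_coaching(pre: dict, post: dict) -> list:
    """Return practitioners whose training results suggest individualized coaching."""
    flagged = []
    for staff, post_score in post.items():
        gain = post_score - pre[staff]
        if gain < MIN_GAIN or post_score < MIN_POST:
            flagged.append((staff, gain, post_score))
    return flagged

print(flag_for_coaching(pre_scores, post_scores))
# [('staff_b', 4, 74)] -> staff_b's coaching plan starts from this baseline
```

Integrating such training data with case-level fidelity assessments, as the paragraph above suggests, is what allows administrators to evaluate the training and coaching drivers themselves rather than only individual staff.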

In addition to promoting knowledge and skill development, training supports improved staff investment in and understanding of the program model. However, increasingly competent and confident use of any service model requires skillful on-the-job coaching (Ager & O’May, 2001; Denton et al., 2003; Schoenwald et al., 2004). Coaching is most effective when it uses multiple forms of information in an improvement cycle loop (e.g., observe, coach, provide data feedback, plan, reobserve). Coaching should always include some form of direct observation (e.g., in person, audio, video) to accurately assess and develop staff skills and judgment. Best practices in coaching include developing and adhering to the formats, frequency, and focus delineated in a coaching plan, as well as ensuring that supervisors and coaches are well selected, trained, coached, and held accountable for enhancing staff development (Bertram, Blase, et al., 2011; Henggeler et al., 2009; Schoenwald et al., 2000).

Unfortunately, many organizations confuse supervision with coaching. For example, in a review of evaluations of program implementation at 34 service sites in or near Kansas City, most respondents indicated that coaching or supervision activities were ad hoc, not systematic, not data-informed, and not focused upon enhancement of model-pertinent knowledge and skills. Instead, supervision focused upon risk containment or harm reduction in the most problematic cases, while also addressing bureaucratic administrative concerns. Further diminishing staff development, and often in lieu of a well-considered coaching plan, many respondents in these organizations proudly noted that they offered staff the opportunity to take leave for additional training to earn continuing education credits toward licensure (Bertram, King, et al., 2014). Such approaches to staff development ignore the fact that training alone is insufficient to develop staff confidence and competence (Fixsen et al., 2009; Schoenwald et al., 2004).

Performance Assessment

Creating competent, model-pertinent practitioner performance is the responsibility of the service organization. As a driver of sustainable program implementation, performance assessment should examine two forms of model fidelity. The first relates to practitioner enactment of the key elements, activities, and phases of the program model. Case-specific measures of model fidelity are necessary to evaluate how well the competency drivers of staff selection, training, and coaching are operating (Schoenwald et al., 2004).

The second form of fidelity that should be routinely examined is organizational performance in each of the implementation drivers. For example, is training provided as planned and intended? Are pre- and post-training tests of model-pertinent and population-specific knowledge and skills informing individualized coaching plans? Does that coaching occur as scheduled? How does it reinforce training content? How well and how frequently is coaching informed by model-pertinent case data and by observations of practice? With such data, the fidelity and effectiveness of staff selection, training, and coaching can be assessed. These data may suggest specific adjustments to policy or procedure, as well as systems-level factors requiring attention because they constrain achieving model fidelity or desired client outcomes. Performance assessment informs continuous quality improvement of both organization drivers and competency drivers, as purveyors, administrators, supervisors, and practitioners use implementation data to guide staff and program development (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Bertram, Schaffer, & Charnin, 2014; Kimber et al., 2012; Schoenwald et al., 2004).

Organization Drivers

When performance assessment measures demonstrate that staff selection, training, and coaching are integrated and functioning with fidelity, the organization drivers of facilitative administration, data support, and systems-level intervention are probably model focused and well integrated (Bertram, Blase, & Fixsen, 2015). Improved organizational culture and climate (Aarons & Sawitzky, 2006; Kimber et al., 2012) emerge when model-pertinent, integrated organization drivers support and sustain effective use of competency drivers, with systematic review of performance assessment and outcomes data for continuous quality improvement (see figure 1). Finally, adjustments of organization drivers prior to service delivery provide administrators and the community with a practical assessment of agency- and system-level readiness to deliver and sustain a practice model (Bertram, Blase, et al., 2011; Bertram, Schaffer, & Charnin, 2014; Fixsen et al., 2009). The integrated and compensatory nature of organization and competency drivers offers a clear and practical focus for social work courses addressing program administration, evaluation, and supervision.

Facilitative Administration

To provide clients with high-quality, effective service through a carefully selected practice model, administrators must be proactive. They should begin with identification of desired outcomes and work backward to facilitate adjustments to implementation drivers. Working within and through the intervention components and implementation drivers frameworks, the goal should be to adjust work conditions to accommodate and support the new functions needed to implement the practice effectively and with fidelity. This begins with the exploration and assessment of target population or community needs and of the organization’s capacity to implement the program. Activities related to this organization driver focus specifically on what is required to implement the chosen model effectively with fidelity and to sustain implementation over time and through turnover in practitioners and administrations (Fixsen et al., 2009). During program installation, existing policies, procedures, and data support systems must receive close scrutiny. Are they appropriate for the practice model? Are there adequate human and technical resources, and how might they be repurposed or reorganized to best effect (Bertram, Blase, et al., 2011)?

Administrators can similarly think through implementation frameworks to improve current services, as occurred at the SAMHSA Children’s Mental Health Initiative grant site in Houston, Texas. A participatory evaluation of program implementation conducted by a team of family members, supervisors, administrators, and an implementation consultant identified multiple factors compromising wraparound model fidelity. Job descriptions, caseload size, training content, coaching, and decision support data systems required model-pertinent repurposing and integration. Given wraparound’s elements and activities, caseloads were too large and were administratively reduced from 20 to a more manageable eight cases per wraparound care coordinator. Two key position descriptions were nearly alike and resembled the case management descriptions used in the organization’s other programs; these position responsibilities were clarified and rewritten. Supervisor responsibilities were reorganized so that all staff working with the same family would receive coaching from the same supervisor, rather than through the organization’s accustomed structure of providing a different supervisor for each type of position. Revised training clarified and operationalized the theory bases supporting wraparound elements and activities. Case data forms were revised to reinforce new training content while informing a systematic approach to staff development through regularly scheduled coaching rather than ad hoc, risk containment supervision. Biweekly Skype review of these data by the consultant, supervisors, and administrators identified subsequent implementation patterns and guided further adjustments to the focus, frequency, and formats of coaching. After 18 months of these integrated organizational changes, both Wraparound Fidelity Index (WFI-4) scores and target population outcomes improved to above the national mean (Bertram, Schaffer, & Charnin, 2014).

The Houston experience also provides a good example of the integrated and compensatory nature of competency and organization drivers. When organized in this manner, practice can inform policy, and policy can then be adjusted to enhance practice. In implementation frameworks these are called practice-informed policy (PIP) and policy-enabled practice (PEP) cycles of information and change. In these informational cycles, administrators initially track fidelity and outcome data to identify and correct model drift. They seek and respond to feedback provided directly from the practice level regarding factors constraining or facilitating both implementation outcomes and target population outcomes (Bertram, Blase, et al., 2011; Bertram & Kerns, 2019). Transparent, responsive PIP and PEP feedback loops manifest continuous quality improvement through repeated plan-do-study-act cycles. In so doing, a facilitative administration reshapes organizational culture and climate to focus upon and actively support the achievement and sustainability of improved implementation fidelity and client outcomes (Kimber et al., 2012). Later, when benchmarks for fidelity and outcomes are consistently achieved, the PIP and PEP cycles help administrators identify and facilitate development and testing of useful adaptations to the program’s practice model (Blase et al., 2012; Schoenwald et al., 2004).
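The PIP and PEP feedback loops described above follow a plan-do-study-act pattern. A minimal sketch of one such cycle, assuming hypothetical callables for each step (none of these names come from the cited sources):

```python
def pdsa_cycles(plan, do, study, act, max_cycles=10) -> bool:
    """Run repeated plan-do-study-act cycles until the study step
    reports that fidelity and outcome benchmarks are met."""
    for _ in range(max_cycles):
        change = plan()                  # e.g., adjust coaching frequency or caseload size
        results = do(change)             # deliver services under the adjusted drivers
        benchmarks_met = study(results)  # compare fidelity and outcome data to benchmarks
        act(change, benchmarks_met)      # keep, revise, or discard the adjustment
        if benchmarks_met:
            return True
    return False
```

In the Houston example, one such cycle planned and enacted a caseload reduction from 20 to eight cases per care coordinator, then studied its effect on fidelity and outcome data before further adjustments.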

Systems-Level Intervention

Program implementation unfolds in a changing context of federal, state, community, and organizational factors, each of which may be shaped by changing cultural, socioeconomic, or political concerns. These factors unfold unevenly and with differing effect, and they may constrain a program’s ability to achieve desired fidelity or client outcomes. When these factors align in a constraining manner that compromises program fidelity, outcomes, or sustainability, administrators must engage decision-makers at a systems level to build consensus on the nature of the challenge and how to address it (Bertram, Blase, et al., 2011; Fixsen et al., 2005, 2009).

An excellent example of the systems-level intervention driver in action was demonstrated in Kansas City, where administrators and supervisors from the multiple systems engaged in the Missouri Children’s Division’s family support team model jointly examined the factors contributing to its diminished fidelity (Bertram, King, et al., 2014). A similar process had previously occurred in Kansas City’s multisystem investigation of and response to reports of child sexual abuse (Bertram, 2008). Analysis of the factors constraining or supporting program fidelity and client outcomes is the responsibility of a vigilant facilitative administration that identifies and intervenes at the systems level. Influential persons from each system must be engaged to create consensus on the nature of the challenge, and then to facilitate and sustain adjustments to policies, practices, or funding mechanisms so that a program model can be implemented with fidelity and achieve desired outcomes (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011).

Decision Support Data Systems

Model-pertinent data that guide decisions about organizational and staff performance and client outcomes are essential for continuous quality improvement through PIP and PEP cycles. These data help sustain the program model. To effectively drive implementation, data systems supporting decision-making should provide timely and valid information related to model fidelity for correlation with outcomes data. Data should be easily accessible and understandable for use by purveyors, administrators, supervisors, and staff to support evaluation and development of staff competencies as well as continuous organizational quality improvement. Data systems truly become decision support data systems when model-pertinent information is understandable and readily available to guide decisions by staff at every level of the organization to improve implementation and client outcomes (Bertram, Blase, et al., 2011; Fixsen et al., 2009).
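As an illustration of what “model-pertinent, timely, and understandable” might mean in practice, the sketch below (hypothetical record fields and an invented benchmark; not from the cited sources) aggregates case-level fidelity scores per practitioner so a supervisor can see where coaching should focus:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CaseRecord:
    """Hypothetical model-pertinent case record; all fields are illustrative."""
    case_id: str
    practitioner: str
    fidelity_score: float  # e.g., a case-level model fidelity rating, 0.0-1.0
    outcome_score: float   # e.g., a standardized client outcome measure

FIDELITY_BENCHMARK = 0.80  # invented benchmark for flagging model drift

def drift_report(records: list) -> dict:
    """Average fidelity per practitioner, flagging those below benchmark."""
    by_staff = {}
    for r in records:
        by_staff.setdefault(r.practitioner, []).append(r.fidelity_score)
    return {staff: round(mean(scores), 2)
            for staff, scores in by_staff.items()
            if mean(scores) < FIDELITY_BENCHMARK}
```

A regularly scheduled review, like the biweekly one described in the Houston example below, could run such a report so that coaching remains systematic and data-informed rather than ad hoc.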

Ideally, decision support data systems should be established or repurposed during program installation and initial implementation. However, these adjustments can occur at any time. For example, in year four of a six-year SAMHSA Children’s Mental Health Initiative grant in Houston, Texas, administrators, supervisors, family members, and an implementation consultant determined that the existing data system did not support or inform wraparound implementation. Organized to support legally defined policy and practice requirements in child protective services, the site’s data system provided no model-pertinent information about wraparound team composition and structure; the depth, breadth, or utility of team assessments; or the design, efficiency, and effectiveness of team interventions. Without timely, model-pertinent case data to review, a risk containment supervisory focus led staff to seek guidance on an ad hoc basis during case crises. Thus, administrators or supervisors might not discover factors constraining effective, sustainable program implementation until grant-required aggregate fidelity and outcome data were generated each year. This was a disservice to clients and an ineffective and inefficient means to develop staff competence. Model-pertinent data forms were developed to reinforce revised training content drawn from wraparound’s theory bases and to facilitate tracking of the elements and activities in its theory of change. These data were used in biweekly reviews by the consultant, administrator, and supervisors to identify the focus and formats for regularly scheduled, systematic staff coaching. After 18 months, these adjustments improved staff confidence and proficiency as wraparound fidelity and client outcomes improved in diverse community settings to well above the national mean (Bertram, Schaffer, & Charnin, 2014).

Leadership Drivers

Different types of leadership are needed for different circumstances, which may require either technical or adaptive strategies (Bertram, Blase, et al., 2011; Fixsen et al., 2009; Heifetz & Laurie, 1997; Heifetz & Linsky, 2002). Heifetz and Laurie (1997) assert that a common leadership error is applying technical leadership strategies under conditions that call for adaptive leadership strategies. Not all persons in leadership positions are willing or able to recognize which is needed or to transition smoothly between technical and adaptive leadership strategies and styles. However, both are required for successful implementation and sustainability of outcomes (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011).

Technical Leadership

Technical leadership is appropriate in circumstances characterized by greater certainty and agreement about both the nature of the challenge and the correct course of action. Challenges under these conditions respond well to more traditional management approaches that focus on a single point of accountability with clearly understood and well-accepted methods and processes known to produce fairly reliable outcomes (Daly & Chrispeels, 2008; Waters et al., 2003). For example, once established, staff selection criteria and processes should employ clear, rather routine procedures. Likewise, once established, the flow of information through a data system should rely upon clearly defined procedures. A breakdown in either implementation driver’s procedures would be commonly and readily understood and would have obvious solutions. Problems arising in such circumstances are resolved through technical forms of leadership.

Adaptive Leadership

Adaptive leadership is necessary when there is less certainty and less agreement about both the definition of the problems and their solutions. Complex, confusing, or less well understood conditions require adaptive leadership strategies, such as convening groups to seek a common understanding of the challenge and to generate possible solutions through group learning and consensus (Daly & Chrispeels, 2008; Waters et al., 2003). The implementation drivers of coaching, facilitative administration, and systems-level intervention are more likely to require adaptive leadership strategies to determine what the problems are and what information and knowledge will be required to develop consensus about possible solutions, and then to monitor the results of attempts to solve those problems (Fixsen et al., 2009).

A practical example of adaptive leadership emerged in one of the programs evaluated in the review of program implementation at 34 Kansas City area service organizations. One county within the Missouri child protective services agency convened its administrators with administrative representatives from the family court and the guardian ad litem’s office to clarify and address challenges to fidelity of the family support team (FST) model in which each system’s staff participated. Though FST teams were intended to develop individualized service plans shaped by family voice, they consistently produced the same service recommendations for nearly every family situation. This compromise of program fidelity had to be analyzed and commonly understood by leaders of the participating systems and then resolved through building consensus (Bertram, King, et al., 2014). The participatory evaluation of wraparound implementation in the Houston SAMHSA Children’s Mental Health Initiative grant offers another excellent example of adaptive leadership strategies. Instead of responding to concerns about wraparound fidelity and outcomes with more training (a technical solution), the implementation team of family caregivers, supervisors, administrators, and a consultant built consensus through evaluation, then made and monitored multiple adjustments to competency and organization drivers (Bertram, Schaffer, & Charnin, 2014).

Implementation Stages

Program implementation is a process: a series of activities that focus through the intervention components to repurpose or create the implementation drivers. When administrators consider intervention components and focus through implementation drivers, program implementation unfolds through four stages in a two- to four-year process. Without an integrated focus through and between the framework of intervention components and the framework of implementation drivers, organizations waste time as well as financial and human resources as they address in a piecemeal manner the factors constraining fidelity and client outcomes. Without a systematic focus through the intervention and implementation components, organizations may even question the original match of program model to population and attempt program adaptations before expected fidelity and client outcomes are achieved. Finally, program sustainability should be a focus of implementation activities in every stage (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Bertram & Kerns, 2019). Implementation stages provide yet another meaningful focus for social work courses on policy, program administration and evaluation, and staff supervision.

In 2005, NIRN’s seminal monograph discussed program implementation as a six-stage process (Fixsen et al., 2005). However, ensuing years have seen clarification and refinement of this framework (see figure 2). Innovation or adaptation of a program or practice model is no longer considered a separate stage. Instead, it is now seen as beginning a new process of implementation activities that should be considered only after a program achieves targeted benchmarks of fidelity and population outcomes (Bertram, Blase, et al., 2011; Winter & Szulanski, 2001). Attempting innovation in a practice model prior to achieving these goals may constrain the full development and application of the PIP and PEP information cycles that could suggest adjustments to implementation drivers to better support achieving desired fidelity and outcomes. Furthermore, innovation before achieving full implementation benchmarks can contribute to staff and organizational confusion and inefficiency (Bertram, Blase, et al., 2011).

Figure 2. Implementation stages and activities.

Once full implementation benchmarks for fidelity and outcomes are achieved and innovations are considered, the service organization must readdress previous stage activities. Though figure 2 may appear to imply a linear progression through the stages of implementation, a significant change in socioeconomic conditions, funding, leadership, staff turnover, or other events may require the organization to reconsider program implementation and readdress the activities of earlier stages (Bertram, Blase, & Fixsen, 2015). For example, three changes in grant leadership, as well as the traumatic, disruptive effects of two major hurricanes, adversely affected wraparound implementation in Houston. By evaluating and adjusting implementation drivers, the SAMHSA Children's Mental Health Initiative grant site consciously chose to return to installation and initial implementation stage activities (Bertram, Schaffer, & Charnin, 2014).

Finally, program sustainability was initially conceived as an end stage of implementation (Fixsen et al., 2005). However, ensuing discussions of implementation within and across fields of endeavor have clarified that program sustainability is never an end point but instead is an essential concern and focus within each activity in each stage of implementation (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Bertram & Kerns, 2019).

Exploration

This stage has also been called "exploration and adoption." In this stage, the assessment of community and organization resources, target population characteristics and needs, and their potential match with a practice model should consider both desired population outcomes and likely adjustments to implementation drivers. Understandably, most attention has been paid to client outcomes. For example, a review of two decades of literature on the wraparound model noted that most publications reporting outcomes examined client outcomes (n = 48), while only 15 publications presented implementation outcomes (Bertram, Suter, et al., 2011).

In the exploration stage of implementation, the service organization should carefully consider the intervention components of the potential practice model. Ideally, an implementation team composed of administrators, supervisors, and community stakeholders should engage with purveyors or technical assistance providers who are familiar with the potential new practice. This team should carefully consider target population characteristics, organization and community resources, and the potential practice's essential elements and activities (model definition) as well as its theory base and theory of change (Bertram & Kerns, 2019). This exploration should guide the decision to proceed or not proceed with adoption. However, as the Houston experience illustrates, thinking through intervention components and implementation drivers can also guide reconsideration of how current service models are implemented.

Potential resources, supports, and barriers should be examined in this stage. These include, but are not limited to, funding sources and their requirements, current versus needed staffing patterns, referral sources, and other organization- and systems-level changes that may be needed to support sustainable implementation of the program with fidelity to achieve desired client outcomes (Bertram & Kerns, 2019; Fixsen et al., 2009). This exploration should address questions such as: Is this potential program or practice model appropriate for the target population? What outcomes can be expected? What resources will the organization need? How ready is the organization to adopt this new practice? What tasks and timelines are needed to facilitate its installation and initial implementation? This stage ends with a definitive decision and an implementation plan. Proactive, small adjustments made in this stage can reap great benefits and efficiencies. Rushing this exploration amplifies future problems and challenges as the organization installs the program and begins service delivery (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Bertram & Kerns, 2019).

Installation

After a decision is made to begin a new program or to improve implementation of a current service model, there are key tasks to accomplish before consumers and other participants experience a change in practice. These tasks and their associated activities define the installation stage of program implementation. In this stage, resources begin to be applied to create or repurpose implementation drivers. These instrumental concerns (Fixsen et al., 2009) require methodical examination and adjustment of the implementation drivers (see figure 1), which can and should also be addressed when an organization seeks to improve the delivery of current services (Bertram, Schaffer, & Charnin, 2014).

Participants in installation-stage activities create or repurpose competency and organizational drivers for high-fidelity implementation and improved client outcomes. Participants may include not only administrators and staff from the host organization but also purveyors or consultants versed in implementation of the practice model, partners from other service systems, and even representatives from the community and future or current consumers. Installation activities move beyond broad consideration and planning to systematically addressing each implementation driver (Bertram, Blase, & Fixsen, 2015; Bertram & Kerns, 2019).

Specifically, model-pertinent criteria for staff selection and training should be developed. The formats, frequency, and focus of coaching should be described in the program's practice manual, policies, and procedures. Data systems, policies, and procedural protocols should be developed for measuring fidelity. If the focus and actions of other service providers or systems may compromise program fidelity and client outcomes, then explicit cross-agency or cross-systems protocols may need to be created through purposeful systems-level intervention by program administrators. A classic example of this need for explicit cross-systems protocols frequently occurs in the response of multiple agencies and systems to reports of child sexual abuse (Bertram, 2008; Bertram, King, et al., 2014; Sternberg et al., 2001).
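To make the data-system side of fidelity measurement concrete, the sketch below shows one way a program might encode a fidelity protocol: item-level observation ratings are combined into a score and compared against a benchmark so that coaching, a competency driver, can be focused where it is needed. The item names, the 0-2 rating scale, and the 0.80 benchmark are hypothetical illustrations, not elements of the FST or wraparound models discussed above.

```python
# Hypothetical sketch of a fidelity-measurement protocol a data system
# might encode. Item names, weights, and the 0-2 rating scale are invented
# for illustration; a real program would use its practice model's
# validated fidelity instrument.
FIDELITY_ITEMS = ["family_voice", "individualized_plan", "team_composition",
                  "strengths_assessment", "cross_system_coordination"]
BENCHMARK = 0.80  # hypothetical full-implementation target

def fidelity_score(ratings):
    """ratings: dict mapping item name -> rating on a 0-2 scale."""
    total = sum(ratings[item] for item in FIDELITY_ITEMS)
    return total / (2 * len(FIDELITY_ITEMS))  # proportion of maximum score

def flag_drift(observations):
    """Flag practitioners whose mean score falls below the benchmark."""
    flags = {}
    for practitioner, obs_list in observations.items():
        mean = sum(fidelity_score(r) for r in obs_list) / len(obs_list)
        flags[practitioner] = mean < BENCHMARK
    return flags

observations = {
    "worker_a": [dict.fromkeys(FIDELITY_ITEMS, 2)],
    "worker_b": [dict.fromkeys(FIDELITY_ITEMS, 1)],
}
print(flag_drift(observations))  # {'worker_a': False, 'worker_b': True}
```

In practice, scoring rules of this kind would come directly from the model's fidelity instrument and would feed the PIP and PEP information cycles described earlier.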

These and other activities require time, attention, and resources, but they are essential. By focusing program installation on tackling instrumental resource concerns and on developing or repurposing the framework of implementation drivers (see figure 1), an organization is less likely to commit the all too common error of inserting a new or refined practice model into an existing infrastructure and then producing poor fidelity and disappointing client outcomes (Bertram, Blase, et al., 2011).

Initial Implementation

Initial implementation of a new program or practice is an inherently difficult and complex period for all staff. New practices require new actions to accomplish different tasks that may not yet be fully understood. In the stage of initial implementation, excitement about new service delivery meets fear of change, unexpected uncertainties, and investment in the status quo. This is an awkward period of high expectations mixed with both anticipated and unexpected challenges and frustrations. To survive and thrive, the program must rely upon its data systems and its PEP and PIP information cycles so it can learn from mistakes. Programs improve in this stage if they employ adaptive leadership strategies, addressing challenges to fidelity systematically and systemically rather than addressing each challenge technically, in a piecemeal manner (Bertram, Blase, et al., 2011; Bertram & Kerns, 2019).

Many constraining factors may emerge as new practices at every level of the organization are initially implemented, delivered to consumers, and experienced by participants within the organization and in other systems. People, organizations, and systems tend to become comfortable with, or accustomed to, the status quo. In the stage of initial implementation, concerns or uncertainty about changes in roles, responsibilities, and practices should be expected. Though there may be much outward enthusiasm during the exploration and installation stages, many staff at all levels will not fully embrace the organizational changes necessary to effectively implement the practice model. This very human tendency to be uncomfortable amidst change can combine with the natural challenge and complexity of implementing something new to test confidence in the decision to improve or implement a program. However, steady and adaptive leadership that normalizes these experiences and challenges, together with refined and increased coaching and support for practitioners that activates PIP and PEP information cycle problem solving, will overcome the awkwardness of this stage while changing organizational culture and climate (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Bertram, Schaffer, & Charnin, 2014; Kimber et al., 2012).

Full Implementation

Programs become inefficient, poorly executed, ineffective, and unsustainable when the host organization moves into full program implementation without developing, repurposing, and working through the framework of implementation drivers (see figure 1). When model-pertinent implementation drivers are established, tested, and adjusted during the installation and initial implementation stages, full implementation that achieves improved client outcomes with fidelity in a sustainable manner is more likely to occur (Fixsen et al., 2009).

The time required to emerge from the awkwardness and uncertainties of initial implementation to full implementation varies from setting to setting and practice to practice. Depending upon setting and practice, it may be very useful for the host organization to start small by creating an implementation zone with a portion of staff, learning through PEP and PIP information cycles, making adjustments, and then scaling up to include all staff. When most practitioners can routinely deliver the new practices with good fidelity, the program is more likely to achieve client outcomes like those attained in research or in other service settings. Desired fidelity and outcomes are sustained when implementation drivers are accessible and function well using model-pertinent information (Bertram, Blase, & Fixsen, 2015; Bertram, Blase, et al., 2011; Bertram & Kerns, 2019).

Implications and Opportunities

Careful consideration of and attention to intervention and implementation frameworks can improve organization climate and culture, staff competence and confidence, and program fidelity and client outcomes. Focusing through these frameworks can also support a common focus for the integration of theory, practice, policy, administration, and research courses in social work curricula. Social work administrators and educators should frequent, learn from, and contribute to forums provided by the Active Implementation Research Network and the National Implementation Research Network, as well as the biennial Global Implementation Conference and its progeny, the Global Implementation Society. By bringing discourse and lessons from these and other forums into social work teaching, research, and practice, we can provide a context for exploring and beginning to resolve differing and sometimes conflicting perspectives or misperceptions about evidence-based practice and meeting client needs, about program development and organization, and about supporting social work practitioner competence, confidence, and creativity.

Further Reading

  • Baker, L. R., Hudson, A., & Pollio, D. E. (2011). Assessing student perception of practice evaluation knowledge in introductory research methods. Journal of Social Work Education, 47(3), 555–564.
  • Daniels, A., England, M., Page, A. K., & Corrigan, J. (2005). Crossing the quality chasm: Adaptation to mental health and addictive disorders. International Journal of Mental Health, 34(1), 5–9.
  • Gambrill, E. (2007). Views of evidence-based practice: Social workers' code of ethics and accreditation standards as guides for choice. Journal of Social Work Education, 43, 447–462.
  • Hall, G. E., & Hord, S. M. (2006). Implementing change: Patterns, principles and potholes (2nd ed.). Allyn and Bacon.
  • Henggeler, S. W., Melton, G. B., Brondino, M. J., Scherer, D. G., & Hanley, J. H. (1997). Multisystemic therapy with violent and chronic juvenile offenders and their families: The role of treatment fidelity in successful dissemination. Journal of Consulting and Clinical Psychology, 65(5), 821–833.
  • Henggeler, S. W., Pickrel, S. G., & Brondino, M. J. (1999). Multisystemic treatment of substance abusing and dependent delinquents: Outcomes, treatment fidelity, and transportability. Mental Health Services Research, 1(3), 171–184.
  • Henggeler, S. W., Schoenwald, S. K., Liao, J. G., Letourneau, E. J., & Edwards, D. L. (2002). Transporting efficacious treatments to field settings: The link between supervisory practices and therapist fidelity in MST programs. Journal of Clinical Child & Adolescent Psychology, 31(2), 155–167.
  • Howard, M., Allen-Meares, P., & Ruffolo, M. C. (2007). Teaching evidence-based practice: Strategic and pedagogical recommendations for schools of social work. Research on Social Work Practice, 17(5), 561–568.
  • Huser, M., Cooney, S., Small, S., O'Connor, C., & Mather, R. (2009). Evidence-based program registries. Wisconsin Research to Practice Series. University of Wisconsin Madison/Extension.
  • Jensen, P. S., Weersing, R., Hoagwood, K. E., & Goldman, E. (2005). What is the evidence for evidence-based treatments? A hard look at our soft underbelly. Mental Health Services Research, 7(1), 53–74.
  • Marsiglia, F. F., & Booth, J. M. (2015). Cultural adaptation of interventions in real practice settings. Research on Social Work Practice, 25(4), 423–432.
  • McBeath, B., & Austin, M. J. (2015). The organizational context of research-minded practitioners: Challenges and opportunities. Research on Social Work Practice, 25(4), 446–459.
  • Metz, A., Bartley, L., Ball, H., Wilson, D., Naoom, S., & Redmond, P. (2015). Active implementation frameworks for successful service delivery: Catawba County child wellbeing project. Research on Social Work Practice, 25(4), 415–422.
  • Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24, 315–340.
  • Mullen, E. J., Bellamy, J. L., Bledsoe, S. E., & Francois, J. (2007). Teaching evidence-based practice. Research on Social Work Practice, 17(5), 574–582.
  • Proctor, E. K., Knudsen, K. J., Fedoravicus, N., Hovmans, P., Rosen, A., & Perron, B. (2007). Implementation of evidence-based practice in community behavioral health: Agency director perspectives. Administration and Policy in Mental Health and Mental Health Services Research, 34(5), 479–488.
  • Rubin, A., Robinson, B., & Valutis, S. (2010). Social work education and student research projects: A survey of program directors. Journal of Social Work Education, 46(1), 39–55.
  • Schoenwald, S. K., Chapman, J. E., Sheidow, A. J., & Carter, R. E. (2009). Long-term youth criminal outcomes in MST transport: The impact of therapist adherence and organizational climate and structure. Journal of Clinical Child and Adolescent Psychology, 38(1), 91–105.
  • Thyer, B. A. (2015). Preparing current and future practitioners to integrate research in real practice settings. Research on Social Work Practice, 25(4), 463–472.
  • Thyer, B. A., & Pignotti, M. (2011). Clinical social work and evidence-based practice: An introduction to the special issue. Clinical Social Work Journal, 39(4), 325–327.
  • Webster-Stratton, C., & Herman, K. C. (2010). Disseminating Incredible Years Series early intervention programs: Integrating and sustaining services between school and home. Psychology in the Schools, 47, 36–54.
  • Webster-Stratton, C., Reinke, W. M., Herman, K. C., & Newcomer, L. (2011). The Incredible Years Teacher Classroom Management training: The methods and principles that support fidelity of training delivery. School Psychology Review, 40(4), 509–529.
  • Winter, S. G., & Szulanski, G. (2001). Replication as strategy. Organization Science, 12(6), 730–743.
References

  • Aarons, G. A., & Sawitzky, A. C. (2006). Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services, 3(1), 61–72.
  • Addis, M. E., Wade, W. A., & Hatgis, C. (1999). Barriers to dissemination of evidence-based practices: Addressing practitioners' concerns about manual-based psychotherapies. Clinical Psychology: Science and Practice, 6(4), 430–441.
  • Ager, A., & O'May, F. (2001). Issues in the definition and implementation of "best practice" for staff delivery of interventions for challenging behavior. Journal of Intellectual & Developmental Disability, 26(3), 243–256.
  • Barwick, M. (2011). Masters level clinician competencies in child and youth behavioral health care. Report on Emotional and Behavioral Disorders in Youth, 11(2), 32–39.
  • Bellamy, J. L., Bledsoe, S. E., Mullen, E. J., Fang, L., & Manuel, J. (2008). Agency-university partnership for evidence-based practice in social work. Journal of Social Work Education, 44(3), 55–76.
  • Bertram, R. (2008). Establishing a basis for multi-system collaboration: Systemic team development. Journal of Sociology and Social Welfare, 35(4), 9–27.
  • Bertram, R. M., Blase, K., Shern, D., Shea, P., & Fixsen, D. (2011). Implementation opportunities and challenges for prevention and health promotion initiatives. National Association of State Mental Health Program Directors.
  • Bertram, R. M., Blase, K. A., & Fixsen, D. L. (2015). Improving programs and outcomes: Implementation frameworks and organization change. Research on Social Work Practice, 25(4), 477–487.
  • Bertram, R. M., Charnin, L. A., Kerns, S. E., & Long, A. C. (2015). Evidence-based practices in North American MSW curricula. Research on Social Work Practice, 25(6), 737–748.
  • Bertram, R., Collins, C., & Elsen, M. (2020). University partnership in child welfare workforce preparation and program implementation. Journal of Family Social Work, 23(2), 151–163.
  • Bertram, R. M., Decker, T., Gillies, M. E., & Choi, S. W. (2017). Transforming Missouri's child welfare system: Community conversations, organizational assessment, and university partnership. Families in Society, 98(1), 9–17.
  • Bertram, R. M., & Kerns, S. E. (2019). Selection and implementation of evidence-based practice: A practical guide for academic and behavioral health programs. Springer Press.
  • Bertram, R. M., King, K., Pederson, R., & Nutt, J. (2014). Program implementation: An examination of the interface of curriculum and practice. Journal of Evidence-Based Social Work, 11, 193–207.
  • Bertram, R. M., Schaffer, P., & Charnin, L. (2014). Changing organization culture: Data driven participatory evaluation and revision of wraparound implementation. Journal of Evidence-Based Social Work, 11, 18–29.
  • Bertram, R. M., Suter, J., Bruns, E., & O'Rourke, K. (2011). Implementation research and wraparound literature: Building a research agenda. Journal of Child and Family Studies, 20(6), 713–726.
  • Blase, K., van Dyke, M., Fixsen, D., & Bailey, F. W. (2012). Implementation science: Key concepts, themes, and evidence for practitioners in educational psychology. In B. Kelly & D. Perkins (Eds.), Handbook of implementation science for psychology in education: How to promote evidence-based practice (pp. 13–36). Cambridge University Press.
  • Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Harvard University Press.
  • Daly, A. J., & Chrispeels, J. (2008). A question of trust: Predictive conditions for adaptive and technical leadership in educational contexts. Leadership and Policy in Schools, 7, 30–63.
  • D'Aprix, A. S., Dunlap, K. M., Abel, E., & Edwards, R. L. (2004). Goodness of fit: Career goals of MSW students and the aims of the social work profession in the United States. Social Work Education, 23(3), 265–280.
  • Denton, C. A., Vaughn, S., & Fletcher, J. M. (2003). Bringing research-based practice in reading intervention to scale. Learning Disabilities Research & Practice, 18(3), 201–211.
  • Eccles, M. P., & Mittman, B. S. (2006). Welcome to Implementation Science. Implementation Science, 1(1), 1–3.
  • Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540.
  • Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
  • Gambrill, E., & Gibbs, L. (2009). Developing well-structured questions for evidence-informed practice. In A. R. Roberts (Ed.), Social workers' desk reference (2nd ed., pp. 1120–1126). Oxford University Press.
  • Gibbs, L., & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections. Research on Social Work Practice, 12(3), 452–476.
  • Glisson, C. (2007). Assessing and changing organizational culture and climate for effective services. Research on Social Work Practice, 17(6), 736–747.
  • Green, R. G., Bretzin, A., Leininger, C., & Stauffer, R. (2001). Research learning attributes of graduate students in social work, psychology, and business. Journal of Social Work Education, 37(2), 333–341.
  • Hardcastle, D. A., & Bisman, C. D. (2003). Innovations in teaching social work research. Social Work Education, 22(1), 31–43.
  • Heifetz, R. A., & Laurie, D. L. (1997). The work of leadership. Harvard Business Review, 75(1), 124–134.
  • Heifetz, R. A., & Linsky, M. (2002). Leadership on the line. Harvard Business School Press.
  • Henggeler, S. W., Schoenwald, S. K., Borduin, C. M., Rowland, M. D., & Cunningham, P. B. (2009). Multisystemic therapy for antisocial behavior in children and adolescents (2nd ed.). Guilford Press.
  • Howard, M., McMillen, C. J., & Pollio, D. E. (2003). Teaching evidence-based practice: Toward a new paradigm for social work education. Research on Social Work Practice, 13, 234–259.
  • Kimber, M., Barwick, M., & Fearing, G. (2012). Becoming an evidence-based service provider: Staff perceptions of organizational change. Journal of Behavioral Health Services and Research, 39(3), 314–332.
  • Manuel, J. I., Mullen, E. J., Fang, L., Bellamy, J. L., & Bledsoe, S. E. (2009). Preparing social work practitioners to use evidence-based practice: A comparison of experiences from an implementation project. Research on Social Work Practice, 19(5), 613–627.
  • Mullen, E. J., Bledsoe, S. E., & Bellamy, J. L. (2008). Implementing evidence-based social work practice. Research on Social Work Practice, 18(4), 325–338.
  • Roberts, A. R., & Yeager, K. R. (Eds.). (2004). Evidence-based practice manual: Research and outcome measures in health and human services. Oxford University Press.
  • Rubin, A., & Parrish, D. (2007). Views of evidence-based practice among faculty in Master of Social Work programs: A national survey. Research on Social Work Practice, 17(1), 110–122.
  • Rubin, A., Robinson, B., & Valutis, S. (2010). Social work education and student research projects: A survey of program directors. Journal of Social Work Education, 46(1), 39–55.
  • Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't; It's about integrating individual clinical expertise and the best external evidence. British Medical Journal, 312, 71–72.
  • Schoenwald, S. K., Brown, G. L., & Henggeler, S. W. (2000). Inside multi-systemic therapy: Therapist, supervisory, and program practices. Journal of Emotional and Behavioral Disorders, 8, 113–127.
  • Schoenwald, S. K., Sheidow, A. J., & Letourneau, E. J. (2004). Toward effective quality assurance in evidence-based practice: Links between expert consultation, therapist fidelity, and child outcomes. Journal of Clinical Child and Adolescent Psychology, 33, 94–104.
  • Smith, C. A., Cohen-Callow, A., Hall, D. M., & Hayward, R. A. (2007). Impact of a foundation-level MSW research course on students' critical appraisal skills. Journal of Social Work Education, 43(3), 481–495.
  • Sternberg, K. J., Lamb, M. E., Orbach, Y., Esplin, P. W., & Mitchell, S. (2001). Use of a structured investigative protocol enhances young children's responses to free-recall prompts in the course of forensic interviews. Journal of Applied Psychology, 86(5), 997–1005.
  • Straus, S. E., Glasziou, P., Richardson, W. S., & Haynes, R. B. (2011). Evidence-based medicine: How to practice and teach it (4th ed.). Churchill Livingstone.
  • Waters, J. T., Marzano, R. J., & McNulty, B. (2003). Balanced leadership: What 30 years of research tells us about the effect of leadership on student achievement. Mid-Continent Research for Education and Learning.

Related Articles

  • Evidence-Based Practice
  • Baccalaureate Social Workers
  • Social Work Education: Overview
  • Social Work Education: Research


Designs and methods for implementation research: Advancing the mission of the CTSA program

Soohyun Hwang

1 Department of Health Policy and Management, UNC Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Sarah A. Birken

Cathy L. Melvin

2 Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, USA

Catherine L. Rohweder

3 UNC Center for Health Promotion and Disease Prevention, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Justin D. Smith

4 Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Introduction:

The US National Institutes of Health (NIH) established the Clinical and Translational Science Award (CTSA) program in response to the challenges of translating biomedical and behavioral interventions from discovery to real-world use. To address the challenge of translating evidence-based interventions (EBIs) into practice, the field of implementation science has emerged as a distinct discipline. With the distinction between EBI effectiveness research and implementation research come differences in study design and methodology, shifting the focus from clinical outcomes to the systems that support the adoption and delivery of EBIs with fidelity.

Implementation research designs share many of the foundational elements and assumptions of efficacy/effectiveness research. Designs and methods that are currently applied in implementation research include experimental, quasi-experimental, observational, hybrid effectiveness–implementation, simulation modeling, and configurational comparative methods.

Examples of specific research designs and methods illustrate their use in implementation science. We propose that the CTSA program take advantage of the momentum of the field's capacity building in three ways: (1) integrate state-of-the-science implementation methods and designs into its existing body of research; (2) position itself at the forefront of advancing implementation science by collaborating with other NIH institutes that share this goal; and (3) provide adequate training in implementation science.

Conclusions:

As implementation methodologies mature, both implementation science and the CTSA program would greatly benefit from cross-fertilizing expertise and shared infrastructures that aim to advance healthcare in the USA and around the world.

Implementation Research: Definition and Aims

The US National Institutes of Health (NIH) established the Clinical and Translational Science Award (CTSA) program in response to the challenges of translating biomedical and behavioral interventions from discovery to real-world use [1]. By the time the CTSA program was established, hundreds of millions of NIH dollars had been spent on developing evidence to influence a wide swath of clinical and preventive interventions for improving patient-level outcomes (e.g., observable and patient-reported symptoms, functioning, and biological markers). This emphasis on "the 7 Ps" (pills, programs, practices, principles, products, policies, and procedures) [2] resulted in little to show in terms of improved health at the population level. When the CTSA program was first created, comparative effectiveness research was viewed as an important approach for moving the results of efficacy and effectiveness studies into practice [3]. By comparing multiple evidence-based interventions (EBIs), clinicians and public health practitioners would be armed with information regarding which treatments and interventions to pursue for specific populations. However, establishing the best available EBI among multiple alternatives closes the research-to-practice gap by only a small margin. How to actually "make it work" (i.e., implementation) in an expeditious and cost-effective manner remains largely uninformed by traditional comparative effectiveness research approaches. The need for implementation research was discussed in the 2010 publication "Training and Career Development for Comparative Effectiveness Research Workforce Development" as a necessary means of ensuring that comparative effectiveness research findings are integrated into practice [3]. This translation has not yet been fully realized within the CTSA program.

According to the NIH, implementation research is "the scientific study of the use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings in order to improve patient outcomes and benefit population health. Implementation research seeks to understand the behavior of healthcare professionals and support staff, healthcare organizations, healthcare consumers and family members, and policymakers in context as key influences on the adoption, implementation and sustainability of evidence-based interventions and guidelines" [4]. In contrast to effectiveness research, which seeks to assess the influence of interventions on patient outcomes, implementation research evaluates outcomes such as rates of EBI adoption, reach, acceptability, fidelity, cost, and sustainment [5]. The objective of implementation research is to identify the behaviors, strategies, and characteristics of multiple levels of the healthcare system that support the use of EBIs to improve patient and community health outcomes and to better address health disparities [6].

With the distinction between EBI effectiveness research and implementation research come differences in study design and methodology. This article describes designs and methods that are currently applied in implementation research. We begin by defining common terms, describing the goals, and presenting some overarching considerations and challenges for designing implementation research studies. We then describe experimental, quasi-experimental, observational, effectiveness–implementation "hybrid," and simulation modeling designs and offer examples of each. We conclude with recommendations for how the CTSA program can build capacity for implementation research to advance its mission of reducing the lag from discovery to patient and population benefit [7].

Definition of Terms

In this article, we often use "implementation" as shorthand for a multitude of processes and outcomes of interest in the field: diffusion, dissemination, adoption, adaptation, tailoring, implementation, scale-up, sustainment, etc. We use the term "implementation science" to refer to the field of study and "implementation research" in reference to the act of studying implementation. We define "design" as the planned set of procedures to: (a) select subjects for study; (b) assign subjects to (or observe their natural) conditions; and (c) assess before, during, and after assignment in the conduct of the study. With many resources for measurement and evaluation of implementation research trials available in the literature [8, 9], we focus on the selection and assignment of subjects within the design for the purposes of drawing conclusions about the effects of implementation strategies [10, 11]. The goals of implementation research are multifaceted and largely fall within two broad categories: (1) examining the implementation of EBIs in communities or service delivery systems; and (2) evaluating the impact of strategies to improve implementation. The approaches and techniques by which healthcare providers, and healthcare systems more generally, implement EBIs are known as "implementation strategies." Strategies may target one or more levels within a community or healthcare delivery system (e.g., clinicians, administrators, teams, organizations, and the external environment) and can be used individually or packaged to form multicomponent strategies. Some implementation studies are designed to test, evaluate, or observe the impact of one or more implementation strategies. Others seek to understand implementation context, determinants, barriers, and facilitators that will inform the study design [12].

Characteristics of Implementation Research Designs

Study design

Study design refers to the overall strategy chosen for integrating different aspects of a study in a coherent and logical way to address the research questions. Implementation research designs share many of the foundational elements and assumptions of efficacy research. In many experimental and quasi-experimental implementation research studies, the independent variable of interest is an implementation strategy; in other implementation research studies, variables of interest relate to the implementation context or process. Much like an EBI in a traditional clinical trial, the construct must be well-defined, particularly when conducting an experimental study, a topic we will explore in later sections. Three broad types of study designs for implementation research are experimental/quasi-experimental, observational, and simulation. The basic difference among these types is that experimental and quasi-experimental designs feature a well-defined, investigator-manipulated, or controlled condition (often an implementation strategy) that is hypothesized to effect desired outcomes, whereas observational studies are meant to understand implementation strategies, contexts, or processes. Of note, quasi-experiments apply statistical methods to data from quasi-experimental designs to approximate what, from a scientific perspective, would ideally be achieved with random assignment. Whereas quasi-experiments attempt to predict relationships among constructs, observational studies seek to describe phenomena. Simulation may feature experimental or observational design characteristics using synthetic (not observed) data. Table 1 provides a summary of the definitions and uses of the specific research designs covered in this article, along with references to published studies illustrating their use in the implementation science literature.

Table 1. Design types, definitions, uses, and examples from implementation science

Experimental design

  • Between-site design. Definition: Compares processes and outputs among sites having different exposures. Uses: Allows investigators to compare processes and outputs among sites that have different exposures. Examples: Ayieko [ ]; Finch [ ]; Kilbourne [ ].
  • Within- and between-site design. Definition: Comparisons are made with crossover designs in which sites begin in one implementation condition and move to another. Uses: When all sites should eventually receive the new implementation strategy, or when it is unethical to withhold a new implementation strategy throughout the study. Examples: Smith and Hasan [ ]; Fuller [ ].

Quasi-experimental design

  • Within-site design. Definition: Examines changes over time within one or more sites exposed to the same dissemination or implementation strategy. Uses: These single-site or single-unit (practitioner, clinical team, healthcare system, or community) designs are most commonly compared to their own prior performance. Examples: Smith [ ]; Smith [ ]; Taljaard [ ]; Yelland [ ].

Observational

  • Observational (descriptive). Definition: Describes outcomes of interest and their antecedents in their natural context. Uses: Useful for evaluating the real-world applicability of evidence. Examples: Harrison [ ]; Salanitro [ ].

Other designs/methods

  • Configurational comparative methods. Definition: Combine within-case analysis and logic-based cross-case analysis to identify determinants of outcomes such as implementation. Uses: Useful for identifying multiple possible combinations of intervention components and implementation and context characteristics that interact to produce outcomes. Examples: Kahwati [ ]; Breuer [ ].
  • Simulation studies. Definition: A method for simulating the behavior of complex systems by describing the entities of a system and the behavioral rules that guide their interactions. Uses: Offer a solution for understanding the drivers of implementation and the potential effects of implementation strategies. Examples: Zimmerman [ ]; Jenness [ ].
  • Hybrid Type 1. Definition: Tests a clinical intervention while gathering information on its delivery and/or its potential for implementation in a real-world situation, with primary emphasis on assessing intervention effectiveness. Uses: Offers an ideal opportunity to explore implementation and plan for future implementation. Examples: Lane-Fall [ ]; Ma [ ].
  • Hybrid Type 2. Definition: Simultaneously tests a clinical intervention and an implementation intervention/strategy. Uses: Intervention effectiveness and the feasibility and/or potential impact of an implementation strategy receive equal emphasis. Examples: Garner [ ]; Smith [ ].
  • Hybrid Type 3. Definition: Primarily tests an implementation strategy while secondarily collecting data on the clinical intervention and related outcomes. Uses: When researchers aim to proceed with implementation studies without completion of a full, or at times even a modest, portfolio of effectiveness studies beforehand. Examples: Bauer [ ]; Kilbourne [ ].

Experimental Designs

Experimental design is regarded as the most rigorous approach for demonstrating causal relationships and is labeled the "gold standard" among research designs with respect to internal validity [34]. Experimental design relies on the random assignment of subjects to the condition of interest; random assignment is intended to uphold the assumption that groups (usually experimental vs. control) are probabilistically equivalent, allowing the researcher to isolate the effect of the intervention on the outcome of interest. In implementation research, the experimental condition is often a specific implementation strategy, and the control condition is most often "implementation as usual." Brown et al. [2] described three broad categories of designs providing within-site, between-site, and within- and between-site comparisons of implementation strategies. Within-site designs are discussed in the section on quasi-experimental designs, as they generally lack the replicability standard given their focus on one site or unit. It is important to acknowledge that other authors, such as Miller et al. [35] and Mazzucca et al. [36], have categorized certain designs somewhat differently than we have here.

As research advances through the translational research pipeline (efficacy to effectiveness to dissemination and implementation), study design tends to shift from valuing internal validity (in efficacy trials) to achieving a greater balance between internal and external validity in effectiveness and implementation research. Much in the same way that inclusion criteria for patients are often relaxed in an effectiveness study of an EBI to better represent real-world populations, implementation research includes delivery systems and clinicians or stakeholders that are representative of typical practices or communities that will ultimately implement an EBI. The high degree of heterogeneity in implementation determinants, barriers, and facilitators associated with diverse settings makes isolating the influence of an implementation strategy challenging and is further complicated by nesting of clinicians within practices, hospitals within healthcare systems, regions within states, etc. Thus, the implementation researcher seeks to ensure that any observed effects are attributable to the implementation strategy/ies being investigated and attempts to balance internal and external validity in the design.

Between-site designs

In between-site designs, the EBI is held constant across all units to ensure that observed differences are the result of the implementation strategy and not the EBI. Between-site designs allow investigators to compare processes and outputs among sites that have different exposures. Most commonly, the comparison is between an implementation strategy and implementation as usual. Brown and colleagues emphasize that randomization should occur at the "level of implementation" in between-site designs to avoid cross-contamination [2]. Ayieko et al. [13] used a between-site design to examine the effect of enhanced audit and feedback (an implementation strategy) on uptake of pneumonia guidelines by clinical teams within Kenyan county hospitals. They performed restricted randomization, which involved retaining balance between treatment and control arms on key covariates, including geographic location and monthly pneumonia admissions. The study used random-intercept multilevel models to account for any residual imbalances in baseline performance so that the findings could be attributed to audit and feedback, the implementation strategy of interest [12].
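A minimal sketch may help make the restricted-randomization logic concrete: all equal-sized allocations are enumerated, only those balanced on the key covariates are retained, and one balanced allocation is drawn at random. The hospital names, covariate values, and balance thresholds below are invented for illustration and are not taken from the Ayieko et al. protocol.

```python
# Illustrative sketch of restricted (covariate-constrained) randomization.
# All names and thresholds are hypothetical, not from the published study.
import itertools
import random

# Hypothetical hospitals with the two balancing covariates mentioned above.
hospitals = [
    {"id": h, "region": r, "monthly_pneumonia_admissions": n}
    for h, r, n in [
        ("H1", "west", 40), ("H2", "west", 55), ("H3", "east", 40),
        ("H4", "east", 60), ("H5", "north", 40), ("H6", "north", 65),
    ]
]

def is_balanced(arm_a, arm_b, max_diff=10):
    """Accept an allocation only if mean admissions differ by <= max_diff
    and each region contributes one hospital to each arm."""
    mean = lambda arm: sum(h["monthly_pneumonia_admissions"] for h in arm) / len(arm)
    regions = lambda arm: sorted(h["region"] for h in arm)
    return (abs(mean(arm_a) - mean(arm_b)) <= max_diff
            and regions(arm_a) == regions(arm_b))

# Enumerate all equal-sized splits, keep only the balanced ones, then pick
# one at random (the defining move of restricted randomization).
balanced = []
for arm_a in itertools.combinations(hospitals, len(hospitals) // 2):
    arm_b = [h for h in hospitals if h not in arm_a]
    if is_balanced(list(arm_a), arm_b):
        balanced.append((list(arm_a), arm_b))

random.seed(2020)
feedback_arm, control_arm = random.choice(balanced)
print("Audit-and-feedback arm:", [h["id"] for h in feedback_arm])
print("Control arm:           ", [h["id"] for h in control_arm])
```

Restricting the set of admissible allocations in this way preserves randomization while guarding against a chance allocation that confounds the strategy's effect with geography or caseload.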

A variant of the between-site design is the "head-to-head" or "comparative implementation" trial, in which the investigator controls two or more strategies, no strategy is implementation as usual, no site receives all strategies, and results are compared [2]. Finch et al. [14] examined the effectiveness of two implementation strategies, performance review and facilitated feedback, in increasing the implementation of healthy eating and physical activity-promoting policies and practices in childcare services in a parallel-group randomized controlled trial. At completion of the intervention period, childcare services that received implementation as usual were also offered resources to use the implementation strategies.

When achieving a large sample size is challenging, researchers may consider matched-pair randomized designs, with fewer units of randomization, or other adaptive designs for randomized trials [37], such as the factorial/fractional factorial [38] or sequential multiple assignment randomized trial (SMART) design. The SMART design allows for building time-varying adaptive implementation strategies (or stepped-care strategies) based on the order in which components are presented and the additive and combined effects of multiple strategies [15]. Kilbourne et al. assessed the effectiveness of an adaptive implementation intervention involving three implementation strategies (Replicating Effective Programs [39], coaching, and facilitation) on cognitive behavioral therapy delivery among schools in a clustered SMART design [40]. In the first phase, eligible schools were randomized with equal probability to a single strategy vs. the same strategy combined with another implementation strategy. In subsequent phases, schools were re-randomized to different combinations of implementation strategies based on an assessment of whether potential benefit was derived from a combination of strategies. Similar to the SMART design is the full or fractional factorial design, in which units are assigned a priori to different combinations of strategies, and main and lower-order effects are tested to determine the additive impact of specific strategies and their interactions [41].
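The sketch below illustrates the two-phase assignment logic of a clustered SMART under stated assumptions. The strategy labels and the placeholder response rule are illustrative inventions and do not reproduce the Kilbourne et al. protocol.

```python
# Minimal sketch of two-phase assignment in a clustered SMART.
# Strategy names and the response rule are illustrative assumptions.
import random

random.seed(42)
schools = [f"school_{i:02d}" for i in range(12)]

# Phase 1: randomize each school to a single strategy vs. a combination.
phase1 = {s: random.choice(["REP", "REP+coaching"]) for s in schools}

def responded(school):
    # Placeholder response rule: in a real trial this would be an observed
    # implementation outcome (e.g., CBT sessions delivered above a benchmark).
    return random.random() < 0.5

# Phase 2: re-randomize only non-responders to an augmented strategy,
# so each school's final strategy adapts to its own early outcomes.
phase2 = {}
for school, strategy in phase1.items():
    if responded(school):
        phase2[school] = strategy  # continue as-is
    else:
        phase2[school] = random.choice(
            [strategy + "+facilitation", "REP+coaching+facilitation"]
        )

for school in schools[:4]:
    print(school, phase1[school], "->", phase2[school])
```

The key design feature, visible in the second loop, is that later-phase assignment depends on each unit's observed response, which is what allows a SMART to estimate time-varying, stepped-care implementation strategies.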

Another between-site design variant, the incomplete block, is useful when two implementation strategies cannot be, or were not initially intended to be, directly compared. The incomplete block design allows for an indirect comparison of the two strategies by drawing from two independent samples of units, one in which sites are randomized to either strategy A or implementation as usual, and the other in which sites are randomized to strategy B or implementation as usual [42]. The two samples are completely independent and can occur either in parallel or in sequence, and statistical tests are performed for indirect comparison of the impacts of the two strategies "as if" they were directly compared. This requires a single EBI to be implemented and some degree of homogeneity across both groups. The incomplete block design is useful when it is not possible to test both strategies in a single study, or when a prior or concurrent study can be leveraged to compare two strategies.

Although the between-site design examples above randomized at the site or organization level, smaller units within each organization, such as the team or clinician, may also be randomized to an intervention [2]. Smith, Stormshak, & Kavanagh [18] present the results of a study in which clinicians were randomized to receive training or not, and their assigned families were randomized to receive the EBI or usual services. Effectiveness (family functioning and child behaviors) and implementation outcomes (adoption and fidelity) were evaluated after the 2-year period of intervention delivery.

Within- and between-site designs

This design involves crossovers in which units begin in one condition and move to another (the within-site element), repeated across units (or clusters of units) with staggered crossover points (the between-site element). This broad class of designs has been referred to as "roll-out" designs [43] and dynamic wait-list designs [44]. We use the term "roll-out" to describe within- and between-site designs. The defining characteristic of roll-out designs is the assignment of all units in the study to the time when the implementation strategy will begin (i.e., the crossover). Assignments within roll-out designs can be random, non-random, or quasi-random. In the context of implementation research, the roll-out design offers three practical and scientific advantages. First, all units in the trial eventually receive the implementation strategy; ensuring that all participating units receive the strategy promotes equity and enables all participants to contribute data. Second, the roll-out design allows the research team and the partner organizations to distribute the resources required to administer the implementation strategy over time, rather than implementing in all sites simultaneously as might be done in another type of multisite design. Third, the design allows researchers to account for the effect of unanticipated confounders (e.g., a change in accreditation standards that requires use of the implementation strategy) that can occur during the trial period. For example, if some sites start implementation before an external event occurs and other sites start afterwards, the impact of the event on the implementation process and resulting outcomes can be measured.

A common roll-out design is the stepped-wedge. The stepped-wedge is a specific design in which measurement of all units begins simultaneously at T0 and units cross over from one condition (e.g., implementation as usual or usual care) to the experimental implementation strategy condition following a series of "steps" at predetermined intervals (steps refer to the crossover). The result is a "wedge" below the steps representing implementation as usual that can be compared to the wedge above the steps representing the implementation strategy condition. The stepped-wedge is illustrated in Fig. 1 (panel a).

Fig. 1. Roll-out designs: the stepped wedge (panel a) and incomplete wedge (panel b).

A variant of this design is the incomplete (or modified) wedge roll-out design (Fig. 1, panel b). The difference from the stepped-wedge is that pre-implementation outcome measurement begins immediately prior (e.g., 4–6 months) to the step rather than at T0 [16]. Incomplete wedge roll-out designs might be preferred to the traditional stepped-wedge design because there is less burden on participating sites to collect data for long periods, and the design gives researchers the option of staged enrollment in the trial if needed to achieve the full target sample in a way that does not threaten the study protocol. In the latter situation, randomization would occur in as few stages as possible to maintain balance, and a variable for stage of enrollment would be included in all analyses to account for any differences between early and later enrollees. Last, the unit of randomization can be single units, clusters, or repeated, matched pairs [45]. Smith and Hasan [16] provide a case example of an incomplete wedge roll-out design in a trial testing the implementation of the Collaborative Care Model for depression management in primary care practices within a large university health system. In that trial, measurement of implementation began 6 months prior to each practice's crossover to implementing the Collaborative Care Model in a multi-year roll-out.
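A brief sketch may clarify the defining feature of roll-out designs, namely that every unit is assigned a crossover time. The code below builds a stepped-wedge exposure schedule of the kind shown in Fig. 1 (panel a); the cluster count, number of periods, and step spacing are arbitrary illustration values, not recommendations.

```python
# Sketch of a stepped-wedge exposure schedule: every unit starts in the
# control condition at T0 and crosses over at its randomly assigned step.
import random

random.seed(7)
n_clusters, n_periods, clusters_per_step = 6, 7, 2

# Randomly assign clusters to crossover steps (periods 1, 3, 5, ...).
order = list(range(n_clusters))
random.shuffle(order)
crossover_period = {
    cluster: 1 + 2 * (rank // clusters_per_step)
    for rank, cluster in enumerate(order)
}

# Binary exposure matrix: rows = clusters, columns = periods;
# 0 = implementation as usual, 1 = implementation strategy.
schedule = [
    [int(t >= crossover_period[c]) for t in range(n_periods)]
    for c in range(n_clusters)
]
for c, row in enumerate(schedule):
    print(f"cluster {c}: {row}")
```

Printing the matrix makes the "wedge" visible: zeros below the staggered steps, ones above them, with every cluster exposed by the final period.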

Quasi-Experimental Designs

Quasi-experimental designs share the experimental design goal of assessing the effect of an intervention on outcomes of interest. Unlike experiments, however, quasi-experiments do not randomly assign participants to intervention and usual care groups. This key distinction limits the internal validity of quasi-experimental designs because differences between groups cannot be attributed exclusively to the intervention. However, when randomization is not possible or desirable for assessing the effectiveness of an implementation strategy or other intervention, quasi-experimental designs are appealing. In lieu of randomization, techniques of varying strength can be used to bolster internal validity, including pre-post, interrupted time-series, non-equivalent group, propensity score matching, synthetic control, and regression-discontinuity designs [46].

In the context of implementation research, quasi-experimental designs fall under Brown and colleagues’ broad category of within-site designs. These single-site or single-unit (practitioner, clinical team, healthcare system, and community) designs are most commonly compared to their own prior performance. The simplest variant of a within-site study is the post design. This design is relevant when a site or unit has not delivered a service before, and thus, has no baseline or pre-implementation strategy data for comparison. The result of such a study is a single “post” implementation outcome that can only be compared to a criterion metric or the results of published studies. In contrast to a post design where data are only available after an implementation strategy or other intervention is introduced, a pre-post design compares outcomes following the introduction of an implementation strategy to the results from care as usual prior to introducing the implementation strategy.

To increase the power and internal validity of within-site studies, interrupted time-series designs can be used [47]. Time-series designs involve multiple observations of the dependent variable (e.g., implementation) before and after the introduction of the implementation strategy, which "disrupts" the time-series data stream. Time-series designs are highly flexible and can involve multiple sites in the multiple-baseline and replicated single-case series variants, which increase internal validity through replication of the effect. Published interrupted time-series studies in implementation research exemplify the design's practicality for studying implementation (see Table 1). Limitations of this design in implementation research include the challenge of defining the interruption (i.e., when the implementation began) and the fact that the effects of new implementations are unlikely to be immediate. Therefore, analysis of interrupted time-series in implementation research might favor examining changes in slope between pre-implementation and implementation phases, rather than testing immediate changes in the level of the outcome after the interruption.
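As a sketch of the segmented-regression approach commonly used to analyze interrupted time-series (assuming pandas and statsmodels are available), the model below separates an immediate level change at the interruption from a gradual post-implementation change in slope, the effect the preceding paragraph suggests is often the more plausible one. The monthly fidelity outcome is simulated, not real data.

```python
# Segmented regression for an interrupted time-series, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_pre, n_post = 18, 18
time = np.arange(n_pre + n_post)
post = (time >= n_pre).astype(int)                 # level-change indicator
time_since = np.where(post == 1, time - n_pre, 0)  # post-interruption clock

# Simulated outcome: flat baseline, no immediate jump, gradual improvement
# after the implementation strategy is introduced.
y = 50 + 0.0 * time + 0.0 * post + 0.8 * time_since + rng.normal(0, 2, time.size)

df = pd.DataFrame({"y": y, "time": time, "post": post, "time_since": time_since})
model = smf.ols("y ~ time + post + time_since", data=df).fit()
print(model.params)  # the time_since coefficient estimates the slope change
```

Here the coefficient on `post` tests the immediate level change, while the coefficient on `time_since` captures the gradual slope change that better matches how new implementations typically take hold.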

Observational Designs

In observational studies, the investigator does not intervene with study participants but instead describes outcomes of interest and their antecedents in their natural context [48]. As such, observational studies may be particularly useful for evaluating the real-world applicability of evidence. Observational designs may use approaches to data collection and analysis that are quantitative [16] (e.g., survey), qualitative [49] (e.g., semi-structured in-depth interviews), or mixed methods [50] (e.g., sequential, convergent analysis of quantitative and qualitative results). Quantitative, qualitative, and mixed methods can be especially helpful in observational studies for systematically assessing implementation contexts and processes.

Hybrid Designs

With the goal of more rapidly translating evidence into routine practice, Curran et al. [51, 52] proposed methods for blending (1) design components of experiments intended to test the effectiveness of clinical interventions and (2) approaches to assessing their implementation. Such hybrid designs provide benefits over pursuing these lines of research independently or sequentially, both of which slow the progress of translation. Curran and colleagues state that effectiveness–implementation hybrid designs have a dual, a priori focus on assessing clinical effectiveness and implementation [51, 52]. Hybrids focus on both effectiveness and implementation but do not specify a particular trial design; that is, the aforementioned experimental and observational designs can be used for any of the hybrid types. References to hybrid studies in implementation science are provided in Table 1.

Curran et al. describe the conditions under which the three different types of hybrid designs should be used, which helps researchers determine the most appropriate type based on whether evidence of effectiveness and implementation exists. Linking clinical effectiveness and implementation research designs may be challenging, as the ideal approaches for each often do not share many design features. Clinical trials typically rely on controlling/ensuring delivery of the clinical intervention (often by using experimental designs) with little attention to implementation processes likely to be relevant to translating the intervention to general practice settings. In contrast, implementation research often focuses on the adoption and uptake of clinical interventions by providers and/or systems of care [53], often with the assumption that clinical effectiveness was demonstrated in previous studies. The three hybrid designs are described below.

Hybrid Type 1

Hybrid Type 1 tests a clinical intervention while gathering information on its delivery and/or potential for implementation in a real-world context, with primary emphasis on clinical effectiveness. This type of design advocates process evaluations of delivery/implementation during clinical effectiveness trials to collect information that may be valuable in subsequent implementation research studies, answering questions such as: What potential modifications to the clinical intervention could be made to maximize implementation? What are potential barriers and facilitators to implementing this intervention in the “real world”? Hybrid Type 1 designs provide the opportunity to explore implementation and plan for future implementation.

Hybrid Type 2

Hybrid Type 2 simultaneously tests a clinical intervention and an implementation intervention/strategy. In contrast to the Hybrid Type 1 design, the Hybrid Type 2 design puts equal emphasis on assessing intervention effectiveness and the feasibility and/or potential impact of the implementation strategy used to promote uptake of the clinical intervention under study. Type 2 hybrid designs appear less frequently than the other two types because of the resources they require.

Hybrid Type 3

Hybrid Type 3 primarily tests an implementation strategy while secondarily collecting data on the clinical intervention and related outcomes. This design can be used when researchers aim to proceed with implementation studies without an existing portfolio of effectiveness studies. Such conditions arise when health systems attempt implementation of a clinical intervention without comprehensive clinical effectiveness data, when there is strong indirect efficacy or effectiveness data, and when the potential risks of the intervention are limited. National priorities (e.g., the opioid epidemic) may also drive implementation before effectiveness data are robust.

Implementation research is, by definition, a systems science in that it simultaneously studies the influence of individuals, organizations, and the environment on implementation [ 54 ]. The field of systems science is devoted to understanding complex behaviors that are both highly variant and strongly dependent on the behaviors of other parts of the system. Such systems are challenging to study with traditional clinical trial methods for various reasons, most notably the complexity involved in the many interactions and dynamics of multiple levels, constant change, and interdependencies. Simulation studies offer one solution for understanding the drivers of implementation and the potential effects of implementation strategies [ 55 ]. Modeling typically involves simulating the addition or configuration of one or more specific implementation strategies to determine which path should be taken in the real world, but it can also be used to test the likely effect of implementing one or more EBIs to determine impact for specific populations.

Agent-based modeling (ABM) [ 56 ] and participatory systems dynamics modeling (PSDM) [ 57 ] have both been used in implementation research to model the behavior of systems and determine the impact of moving certain implementation “levers” in the system. ABM is a method for simulating the behavior of complex systems by describing the entities (called “agents”) of a system and the behavioral rules that guide their interactions [ 56 ]. These agents, which can be any element of a system (e.g., clinicians, patients, and stakeholders), interact with each other and the environment to produce emergent, system-level outcomes [ 58 ], many of which are formal implementation outcomes. Because ABM produces a mechanistic model, researchers are able to identify the implementation drivers that should be leveraged to most effectively achieve the predicted impacts in practice. Whereas ABM has wide-ranging applications for implementation science, PSDM is an example of a method suited to a specific implementation challenge. Zimmerman et al. [ 26 ] used PSDM to triangulate stakeholder expertise, healthcare data, and modeling simulations to refine an implementation strategy before it was used in practice. In PSDM, clinic leadership and staff use a visual model to define and evaluate the determinants (e.g., clinician knowledge, implementation leadership, and resources) and mechanisms (e.g., self-efficacy, feasible workflow) that determine local capacity for implementation of an EBI. Given local capacity and other factors, simulations predict overall system behavior when the EBI is implemented. The process is iterative and has been used to prepare for large initiatives where testing implementation using standard trial methods was infeasible or undesirable because of the cost and time involved.
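For readers new to these methods, the sketch below shows the basic mechanics of agent-based modeling with a deliberately simplified, hypothetical adoption model in Python: each clinician agent follows a simple behavioral rule, and a system-level adoption curve emerges from their interactions. The agents, rules, and parameters, including the training “lever,” are illustrative assumptions, not elements of the ABM or PSDM studies cited above.

```python
# Minimal ABM sketch: clinician "agents" decide each month whether to adopt
# an EBI, based on individual readiness plus peer influence. All rules and
# parameter values below are hypothetical illustrations.
import random

random.seed(1)

N, MONTHS = 100, 24
TRAINING_BOOST = 0.05   # hypothetical implementation "lever" (e.g., training)
PEER_WEIGHT = 0.5       # strength of social influence from adopting peers

readiness = [random.uniform(0.0, 0.1) for _ in range(N)]  # heterogeneous agents
adopted = [False] * N

for month in range(1, MONTHS + 1):
    share = sum(adopted) / N
    for i in range(N):
        if not adopted[i]:
            # Behavioral rule: adoption probability rises with readiness,
            # peer adoption, and exposure to the implementation strategy.
            p = readiness[i] + PEER_WEIGHT * share + TRAINING_BOOST
            adopted[i] = random.random() < p
    print(f"month {month:2d}: {sum(adopted):3d}/{N} clinicians using the EBI")
# Rerunning with TRAINING_BOOST = 0.0 shows how moving one "lever" changes
# the emergent adoption curve, which no single agent's rule encodes directly.
```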

Configurational Comparative Methods

Configurational comparative methods, an umbrella term for methods that include but are not limited to qualitative comparative analysis [ 59 ], combine within-case analysis and logic-based cross-case analysis to identify determinants of outcomes such as implementation. Configurational comparative methods define causal relationships by identifying INUS conditions: those that are an Insufficient but Necessary part of a condition that is itself Unnecessary but Sufficient for the outcome. Configurational comparative methods may be preferable to the standard regression analyses often used in quasi-experiments when the influence of an intervention on an outcome is not easily disentangled from how it is implemented or the context in which it is implemented – i.e., for complex interventions. Complex interventions often have interdependent components whose unique contributions to a given outcome can be challenging to isolate. Furthermore, complex interventions are characterized by blurry boundaries among the intervention, its implementation, and the context in which it is implemented [ 60 ]. For example, the effectiveness of care plans for cancer survivors in improving care coordination and communication among providers likely depends upon a care plan's content, its delivery, and the functioning of the cancer program in which it is delivered [ 61 ]. Configurational comparative methods facilitate identifying multiple possible combinations of intervention components, implementation characteristics, and context characteristics that interact to produce outcomes. To date, qualitative comparative analysis is the configurational comparative method most frequently applied in implementation research [ 62 ]. To identify determinants of medication adherence, Kahwati et al. [ 24 ] used qualitative comparative analysis to analyze data from 60 studies included in a systematic review. Breuer et al. [ 25 ] used qualitative comparative analysis to identify determinants of mental health services utilization.
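To illustrate the cross-case logic involved, here is a small sketch of the consistency and coverage calculations that underpin crisp-set qualitative comparative analysis; the conditions, cases, and outcomes are invented for illustration, and a full QCA would add calibration and Boolean minimization (e.g., via the Quine–McCluskey algorithm).

```python
# Minimal sketch of crisp-set QCA bookkeeping: for each observed configuration
# of conditions, consistency = share of cases with that configuration showing
# the outcome; raw coverage = share of all outcome cases that the configuration
# accounts for. The conditions and cases below are hypothetical.
from collections import defaultdict

# Each case: (leadership_support, adequate_staffing, tailored_training) -> implemented?
cases = [
    ((1, 1, 1), 1), ((1, 1, 0), 1), ((1, 0, 1), 1), ((1, 0, 1), 0),
    ((0, 1, 1), 0), ((0, 1, 0), 0), ((1, 1, 0), 1), ((0, 0, 1), 0),
]

outcomes_by_config = defaultdict(list)
for config, outcome in cases:
    outcomes_by_config[config].append(outcome)

total_positive = sum(outcome for _, outcome in cases)
for config, outcomes in sorted(outcomes_by_config.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    coverage = sum(outcomes) / total_positive
    print(f"{config}  consistency={consistency:.2f}  coverage={coverage:.2f}")
# Configurations with consistency at or near 1.0 are candidate sufficient
# "recipes"; Boolean minimization would then reduce them to simpler
# combinations of conditions (the INUS logic described above).
```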

Relevance and Opportunities for Application in CTSAs

In the early days of the CTSA program, resources allocated to implementation science were most frequently embedded in clinical or effectiveness research studies, and few CTSAs had robust, standalone implementation science programs [ 63 , 64 ]. As the National Center for Advancing Translational Sciences (NCATS) and other federal and non-federal sources have increased their investment in implementation science capacity, the field has grown dramatically. More CTSAs are developing implementation research programs and incorporating stakeholders more fully in this process, as reflected in the results of the Dolor et al. [ 65 ] environmental scan. Washington University and the University of California at Los Angeles have documented their efforts to engage practice and community partners, offer professional development opportunities, and provide consultations to investigators both in and outside the field of implementation science [ 66 , 67 ]. The CTSA program could take advantage of this momentum in three ways: integrating state-of-the-science implementation methods into its existing body of research; positioning itself at the forefront of advancing the science of implementation science by collaborating with other NIH institutes that share this goal, such as NCI and NHLBI; and providing training in implementation science.

Integrating state-of-the-science implementation methods to CTSAs’ existing bodies of research

Many CTSAs have the expertise to consult with their institution's investigators on the potential role of implementation science in their research. Implementation research consultations involve building awareness and promoting appropriate use of specific study designs and methods that match investigators' needs and result in meaningful findings for real-world clinical and policy environments. As described by Glasgow and Chambers, these include rapid, adaptive, and convergent methods that consider contextual and systems perspectives and are pragmatic in their approach [ 68 ]. They state that “CTSA grantees, among others, are in a position to lead such a change in perspective and methods, and to evaluate if such changes do in fact result in more rapid, relevant solutions” to pressing public health problems. Through consultation services, CTSAs can encourage the use of implementation science early (e.g., designing for dissemination and implementation [ 69 ]) and often, positioning CTSAs – the hub for translation – to fulfill their mission by reducing the lag from discovery to patient and population benefit.

Advancing the science of implementation science

The centers funded by the CTSA program are able to conduct large-scale implementation research using the multisite U01 mechanism, which requires the involvement of three centers. Given the challenges of recruitment, generalizability, and power that are inherent in many implementation trials, the inclusion of three or more CTSAs, ideally representing diversity in region, populations, and healthcare systems, can provide the infrastructure for cutting-edge implementation science. Thus far, there are few examples of this mechanism being used for implementation research. In addition, with the charge of speeding translation of bench and clinical science discoveries to population impact, CTSAs have both the incentive and the perspective to conduct implementation research early and consistently across the translational pipeline. As the hybrid design illustrates, there has been a paradigmatic shift away from the sequential translational research pipeline toward more innovative methods that reduce the lag between translational steps.

Training in implementation science

NIH has funded several formal training programs in implementation science, including the Training Institute in Dissemination and Implementation in Health [ 70 ], the Implementation Research Institute [ 71 ], and Mentored Training in Dissemination and Implementation Research in Cancer [ 72 ]. These training programs address the need for greater clarity around the implementation research designs described in this article, but the demand for training outpaces available resources. CTSAs could provide an avenue for meeting the field's need for training in dissemination and implementation science methods. CTSA faculty with expertise in implementation research could offer implementation research training programs for scholars at many levels using the T32, KL2, K12, TL1, R25, and other mechanisms. Chambers and colleagues have recently noted these capacity-building and training opportunities funded by the NIH [ 73 ]. Indeed, given the mission of the CTSA program, CTSAs are an ideal setting for implementation research training programs.

The field of implementation science has established methodologies for understanding the context, strategies, and processes needed to translate EBIs into practice. As they mature alongside one another, both implementation science and the CTSA program would benefit greatly from cross-fertilizing their expertise, infrastructure, and aims to advance healthcare in the USA and around the world.

Acknowledgements

The authors wish to thank Hendricks Brown and Geoffrey Curran, who provided input at different stages of developing the ideas presented in this manuscript.

Research reported in this publication was supported, in part, by the National Center for Advancing Translational Sciences, grant UL1TR001422 (Northwestern University), grant UL1TR002489 (UNC Chapel Hill), and grant UL1TR001450 (Medical University of South Carolina); by National Institute on Drug Abuse grant DA027828; and by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through grant MH080916 from the National Institute of Mental Health and the Department of Veterans Affairs, Health Services Research and Development Service, Quality Enhancement Research Initiative (QUERI), to Enola Proctor. Dr. Birken's effort was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through grant KL2TR002490. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the National Center for Advancing Translational Sciences, the National Institute on Drug Abuse, the National Institute of Mental Health, the Department of Veterans Affairs, or any other part of the US Department of Health and Human Services.

Disclosures

The authors have no conflicts of interest to declare.


Equitable Implementation at Work

Equity must be integrated into implementation research and practice. Here are 10 recommendations for putting equitable implementation into action.


By Allison Metz, Beadsie Woo & Audrey Loper | Summer 2021


The field of implementation science needs to prioritize evidence-informed interventions that fit the daily lives of the communities in which they will be delivered. Early prevention and intervention efforts have the potential to achieve goals related to service access and outcomes, but without an explicit focus on equity, most fail to do so. Equitable implementation occurs when strong equity components—including explicit attention to the culture, history, values, assets, and needs of the community—are integrated into the principles, strategies, frameworks, and tools of implementation science. While implementation science includes many frameworks, theories, and models, a blueprint for equitable implementation does not yet exist.

Bringing Equity to Implementation


Implementation science—the study of the uptake, scale, and sustainability of social programs—has failed to advance strategies to address equity. This collection of articles reviews case studies and articulates lessons for incorporating the knowledge and leadership of marginalized communities into the policies and practices intended to serve them. Sponsored by the Annie E. Casey Foundation.

Equity Is Fundamental to Implementation Science

Articles in this supplement include: “Trust the People,” “Youth Leadership in Action,” “Community Takes the Wheel,” “Equity in Implementation Science Is Long Overdue,” “Listening to Black Parents,” “Faith-Based Organizations as Leaders of Implementation,” “Community-Defined Evidence as a Framework for Equitable Implementation,” and “Community-Driven Health Solutions on Chicago’s South Side.”

This supplement addresses critical aspects of equitable implementation and attempts to define concrete strategies for advancing equity in implementation and in efforts to scale it. The core elements for equitable implementation include building trusting relationships, dismantling power structures, making investments and decisions that advance equity, developing community-defined evidence, making cultural adaptations, and reflecting critically about how current implementation science theories, models, and frameworks do (or do not) advance equity. Case examples described in this supplement demonstrate how specific activities across these core implementation elements can address cultural, systemic, and structural norms that have embedded specific barriers against Black, Indigenous, and other communities of color. 


We wanted two types of articles for this supplement: case examples from the field of implementation science that explicitly focus on equity, and case examples from community-driven implementation efforts to inform implementation science in the future. We required that community members serve as co-authors with implementation scientists and funders. The range of perspectives and experiences shared in these articles provides us with an important vantage point for exploring equitable implementation. In response to questions about the process of writing for this supplement, several authors stressed the necessary challenge of balancing the different stakeholder perspectives and voices to write concise and compelling articles.

We attempt to summarize what we’ve learned about equitable implementation over the course of working on this supplement and in our own research. Here are 10 recommendations we have for putting equitable implementation into action.

Build Trusting Relationships

Implementation relies on collaborative learning, risk-taking, and openness to failure. At the center of this dynamic are vulnerability and trust. Trust engenders faith that partners can rely on each other to deliver on agreements and to understand—and even anticipate—each other’s interests and needs. 1  A recommendation for building trusting relationships is:

1. Take the time to build trust through small, frequent interactions. Trust is not built through sweeping gestures, but through everyday interactions where people feel seen and heard. Trust requires long-term commitment, clear and comprehensive communication, and time. As described in the article about the partnership between ArchCity Defenders and Amplify Fund, implementation moves at the speed of trust, and that can take longer than we think. Funders need to provide the time and resources to build trust between themselves, other leaders, and community members and to support trust-building among stakeholders in the community.

Dismantle Power Structures

Power differentials exist in implementation efforts where specific individuals or groups have greater authority, agency, or influence over others. Implementation strategies should be chosen to address power differentials and position community members at the center of decision-making and implementation activities. Recommendations for dismantling power structures include:

2. Shed the solo leader model of implementation. Implementation science should promote collaborative leadership rather than rely on the charisma and energy of a single individual or organization. When leaders engage with community members and diverse stakeholder groups in meaningful activities that are ongoing, they develop a shared understanding of problems and potential solutions, develop strategies that address community needs and assets, and create a sense of mutual accountability for building the system of supports needed to sustain change and advance equitable outcomes. 2

3. Distribute information and decision-making authority to those whose lives are most affected by the implementation. Empowering community members to make decisions about what is implemented and what strategies are used to carry out the work is critical for implementation to be relevant, successful, and sustainable. By recognizing the knowledge and experience that community stakeholders have and using that expertise to make decisions, public officials, funders, and practitioners create an environment of mutual comfort and respect. The central role that young people play in the development of Youth Thrive illustrates how an organization deliberately changed its work in order to ensure that nothing about young people was done without them having a collaborative role in shaping and delivering the curriculum. 

Invest and Make Decisions to Advance Equity

Successful implementation is the product of dozens of shared decisions. In all implementation efforts, opportunities exist for critical decision-making that can either increase or decrease the likelihood that implementation will result in equitable outcomes. Recommendations include:

4. Engage in deliberate and transparent decision-making. Implementation decisions should be conscious, reflective, well thought through, and paced so that unintended consequences can be assessed. By taking the time to reflect, we can make course corrections for decisions that yield unexpected results. Decision-making should also be transparently communicated to stakeholders at all levels of implementation.

5. Engage community members in interpreting and using data to support implementation. As described in this supplement, the success and sustainability of implementation are related to the alignment with and deep understanding of the needs of a community as defined by the community members themselves. The Children and Youth Cabinet in Rhode Island developed a resident advisory board and offered community members regular data review sessions. At these sessions, community members shared relevant context for findings and applied their experience to quality improvement.

Develop Community-Defined Evidence

Equitable implementation starts with how the evidence we seek to implement is developed. Research evidence often demonstrates different levels of effectiveness for different groups of people when replicated or scaled widely, leading to inequitable outcomes. As interventions are developed, it is critical to consider diversity in all its forms—including geographical, racial and ethnic, socioeconomic, cultural, and access—and to do this through the involvement of local communities. A recommendation for developing community-defined evidence is:

6. Co-design interventions with community members. This ensures interventions are relevant, desired by communities, and feasible to implement. Village of Wisdom created workshops by and for Black parents to share their parenting insights. These workshops became the foundation for developing culturally affirming instruction and for formulating tools and strategies that could create environments to encourage the intellectual curiosity and racial identity of Black children. By using the experiences and knowledge of Black parents to develop learning environments that nurture well-being, Village of Wisdom asserts the value of growing up Black and parenting Black children. To develop the Bienvenido Program, staff recruited leaders across the community as cocreators of a mental health needs assessment and the knowledge developed from it. The program was designed in response to Latinx residents’ experiences and the challenges they face in accessing mental health services. In both of these examples, community members’ experiences and perspectives were used to develop interventions that were aligned with community needs as they described them.

Make Adaptations

In order to reduce disparities in outcomes and advance equitable implementation, interventions and services must reach specific groups of people and demonstrate effectiveness in improving outcomes for them. 3 Adaptations, especially cultural adaptations, must be made for both interventions and for implementation strategies to ensure the reach and relevance needed for equitable implementation. Recommendations for making adaptations include:

7. Seek locally based service delivery platforms. Implementation often relies on traditional institutions (e.g., hospitals) and systems of care (e.g., public health departments) that may limit or even impede access for specific groups of people. Two articles in this supplement discuss the importance of local, faith-based groups for supporting implementation—the parenting program in Travis County, Texas, and the cardiovascular health initiative in Chicago. Both case examples elevate the importance of adapting service delivery mechanisms to trusted community organizations to increase access for and uptake by local residents.

8. Address issues of social justice. Specific groups of people face significant stressors and barriers to care that are rooted in systemic and structural racism. Authors in this supplement emphasize the importance of adaptations that address issues related to these stressors. As noted in the article on culturally adapting a parenting intervention, parents may not be able to access and benefit from a parenting program if they are dealing with immigration policies and fear of deportation. In this case, adaptations to the program would need to include immigration counseling to support equitable implementation. 

Critical Perspectives on Implementation Science

While implementation science is undergirded by theories, models, and frameworks, notably missing in the field are critical perspectives. The article on critical perspectives seeks to address this gap by discussing the methods used in implementation science and how they might perpetuate or exacerbate inequities. The authors also raise the importance of context and how it is addressed in implementation research and practice.

In the field of implementation science, context includes three levels: macro, organizational, and local. 4 Macro context refers to socio-political and economic forces that either facilitate or hinder implementation efforts. Organizational context refers to organizational culture and climate that influence the behavior of staff. Local context refers to the community activities and relationships that influence implementation and behavior. Implementation strategies at the local or organizational level are limited in their impact on systemic and structural issues. In several articles of the supplement, authors advocate for doing more than describing the macro context. Implementation science needs to develop strategies that can address macro issues that foster or perpetuate disparities in outcomes. Recommendations include:

9. Develop implementation strategies that address the contextual factors that contribute to disparities in outcomes. Advocacy and policy implementation strategies focused on the macro context are more likely to advance equity than implementation strategies at organizational or local levels. Articles in this supplement describe the importance of building the capacity of community leaders to create advocacy networks for policies and funding that will help to sustain local programming. The example from ArchCity Defenders and Amplify Fund describes the critical role of funders in supporting changes to the social, political, and economic environments that grantees operate within to advance equity and promote sustainability. To cite another example, training community members to facilitate local programs and deliver interventions (as demonstrated in the Bienvenido Program and the cardiovascular health project in Chicago) ensures that implementation is tailored to the culture, history, and values of the local community; that interventions are delivered by trusted individuals; and that communities will be able to sustain the interventions.

10. Seek long-term outcomes that advance equity. The selection of interventions should include an assessment of the interventions’ likely influence on outcomes beyond near-term changes. Selecting programs that have the potential of a spillover effect in outcomes is a mechanism for equitable implementation. As described in a case example in this supplement, participants in the Bienvenido Program developed confidence and knowledge about participating in community meetings and engaging with locally elected officials and pursued careers in the mental health field. In the critical perspectives article, authors explained that some parenting programs demonstrate evidence for outcomes beyond strengthening parenting practices, such as reduction in substance abuse or increases in employment and stable housing.    

The purpose of implementation science is to integrate research and practice in ways that will improve outcomes for people and communities. However, implementation frameworks, theories, and models have not explicitly focused on how implementation can and should advance equity. The recommendations that emerged across the diverse case examples in this supplement provide a starting point for changing and improving the methods and strategies used in implementation to ensure that equity is at the center of the work. As Ana A. Baumann and Pamela Denise Long argue in “Equity in Implementation Science Is Long Overdue,” implementation scientists must engage in critical reflection on the gaps between the intentions and the results of their work. We hope this supplement sparks reflection in funders, researchers, and practitioners involved in supporting implementation efforts with the hope of making people’s lives better and inspires their resolve and courage to shift toward learning from those who have the greatest stake in successful and equitable outcomes.


Implementation Research and Practice


The Society for Implementation Research Collaboration is dedicated to bringing together researchers and multi-level stakeholders to improve the implementation of evidence-based psychosocial interventions. To achieve this mission, SIRC, in partnership with SAGE Publications, launched Implementation Research and Practice in May of 2020. The journal website and submission portal are available online.


Please email our co-founding editors-in-chief — Cara Lewis and Sonja Schoenwald — at [email protected] with any questions about the journal or submission interest.

Implementation Research and Practice is an international, peer-reviewed, open access, online-only journal providing rapid publication of interdisciplinary research that advances the implementation, in diverse contexts, of effective approaches to assess, prevent, and treat mental health, substance use, or other addictive behaviors, or their co-occurrence, in the general population or among those at risk of or suffering from these disorders.

The journal welcomes a wide range of research including:

  • development and testing of strategies to advance the implementation, sustainment, and scale-out of effective prevention and treatment approaches, and to decrease the use of approaches that are untested and ineffective;
  • evaluation of the impact of innovative design, content, or delivery methods intended to optimize effective prevention and treatment approaches;
  • evaluation of the effectiveness of novel assessment, preventive, or treatment approaches that includes a robust evaluation of their implementation;
  • development and evaluation of research methods to advance the science, such as innovative research design, measurement, analytic, and data and knowledge synthesis methods;
  • research on the dissemination of information about effective approaches to the detection, prevention, and treatment of mental health, substance use, and other addictive behaviors, and of information regarding effective methods to promote their adoption and implementation; and
  • systematic reviews in the field of implementation synthesizing the evidence for frameworks, measures, strategies, and outcomes.

Outside the journal’s scope are:

  • research exclusively focused on health promotion or health behaviors in the general population without consideration of impact on mental health, substance use, or addiction; and
  • research exclusively focused on the intervention (its development, efficacy, or effectiveness) with no evaluation of implementation processes or outcomes.

The IRP’s Diversity, Equity, and Inclusion (DEI) Advisory Group helps guide IRP’s efforts to ensure the content and editorial practices and policies of the journal embody diversity, equity, and inclusion.

Current DEI Advisory Group Members


William Martinez, PhD, Associate Professor of Psychiatry and Behavioral Sciences, UCSF – Dr. Martinez’s overall clinical, administrative, and research aims are concentrated on eliminating behavioral health inequities among racially and ethnically minoritized youths, with a specific focus on Latinx and immigrant populations. Dr. Martinez takes a socio-ecological approach to understanding these concerns across three areas of inquiry: 1) the impact of social determinants on behavioral health disparities; 2) implementation and dissemination of evidence-based prevention and intervention programming; and 3) policy and advocacy focused on improving conditions for immigrant youths. He is excited to be a member of the IRP’s DEI Advisory Group, which is much aligned with his own goals of increasing the visibility of implementation scientists and practitioners of traditionally underrepresented backgrounds.


Stephanie Yu, MA (she/her), Clinical Psychology PhD candidate, UCLA – Stephanie is passionate about mental health equity and community-engaged research aiming to reduce mental health disparities for racially and ethnically marginalized groups. Her research focuses on culturally responsive adaptation and implementation of evidence-based practices in public systems of care serving marginalized communities through community partnership. She is also interested in how individual and systemic conditions, such as those stemming from racism and discrimination, can be addressed to improve well-being outcomes for marginalized communities.


Implementation Science in Social Work


Our DSW program focuses on using implementation science to advance social work practice. 

What is implementation science?

Implementation science focuses on the thoughtful and strategic process of implementing an intervention. The emphasis is on evaluating both the intervention itself and the implementation of the intervention, including assessing the strategies used to increase the intervention’s uptake and use.

Why is implementation science important in social work?

This is an exciting time to think about using implementation science in social work practice! There are a multitude of interventions that are effective in detecting, preventing, and treating conditions that affect the health and well-being of those served by social workers. However, gaps remain between what science shows works and what social workers actually do in practice. These gaps have contributed to racial, ethnic, socioeconomic, and other disparities in health and health-related outcomes. By using implementation science, we can help reduce racial and ethnic disparities in mental health care in the United States and improve the lives of those we serve.

What are some examples of how implementation science is used in practice?

Example One: Assessing Strategies for Implementing a Virtual Reality Program in Respite Care

A great example is the Expanding Worlds: Assessing Strategies to Implementing a Virtual Reality Program in Respite Care project.

  • Implementation science was used to identify perceived barriers and facilitators to implementing a virtual reality (VR) intervention within a volunteer respite program. The VR intervention is being delivered by staff and volunteers to caregivers of community-dwelling persons living with dementia receiving respite services, to help reduce social isolation and loneliness.
  • Focus groups and surveys were completed with staff and volunteers to assess their thoughts and concerns about the VR program implementation. Some of the barriers they identified through this process included a lack of VR knowledge and skills, concerns about safety, and the need to strengthen buy-in from staff.
  • Evidence-based implementation strategies were selected and tailored to help support effective uptake and behavior change among staff and volunteers. These strategies included training and shadowing opportunities, identifying champions, and developing a trauma-informed approach manual.
  • Implementation outcomes (how well staff and volunteers implemented the intervention) were assessed, as well as outcomes related to whether the VR intervention lessened social isolation and loneliness among caregivers.

Example Two: Components for Enhancing Clinical Engagement and Reducing Trauma (CE-CERT)

Another example involves the use of the Components for Enhancing Clinical Engagement and Reducing Trauma (CE-CERT) model to help reduce compassion fatigue and burnout among trauma therapists and supervisors in a child welfare setting.

  • Surveys were completed by supervisors and identified barriers to using CE-CERT in a community-based agency. These included a lack of knowledge, the need for supervisory support, and the need for guidance in using CE-CERT.
  • Evidence-based strategies were selected and included reflective supervision, ongoing consultation, peer support, the use of champions, and ongoing training.
  • Implementation outcomes (how well the identified strategies addressed the barriers identified by the supervisors) were assessed. In addition, intervention outcomes pertaining to whether the CE-CERT intervention had an impact on compassion fatigue and burnout among trauma therapists were evaluated.

Additional Resources

  • Listen to our inSocialWork podcast episode on using implementation science to address research-to-practice gaps
  • Read the article: Implementation Science: Why it matters for the future of social work
  • Read the article: An introduction to implementation science for the non-specialist


Implementation Science: Why it matters for the future of social work

Affiliation: School of Social Work, Columbia University. PMID: 28216992; PMCID: PMC5312777.

Bridging the gap between research and practice is a critical frontier for the future of social work. Integrating implementation science into social work can advance our profession's effort to bring research and practice closer together. Implementation science examines the factors, processes, and strategies that influence the uptake, use, and sustainability of empirically-supported interventions, practice innovations, and social policies in routine practice settings. The aims of this paper are to describe the key characteristics of implementation science, illustrate how implementation science matters to social work by describing several contributions this field can make to reducing racial and ethnic disparities in mental health care, and outline a training agenda to help integrate implementation science in graduate-level social work programs.

Keywords: Implementation science; racial and ethnic disparities in mental health care; social work education; social work research.





Graduate Research Methods in Social Work

(3 reviews)


Matt DeCarlo, La Salle University

Cory Cummings, Nazareth University

Kate Agnelli, Virginia Commonwealth University

Copyright Year: 2021

ISBN 13: 9781949373219

Publisher: Open Social Work Education

Language: English

Conditions of Use: Attribution-NonCommercial-ShareAlike

Reviewed by Erin Boyce, Full Time Faculty, Metropolitan State University of Denver on 6/3/24


Comprehensiveness rating: 5

This book provides a strong comprehensive overview of each step in the research & evaluation process for students, clearly outlining each step with clarity and direction.

Content Accuracy rating: 5

Content in this text is accurate, needing no clarification or added information, and is presented in an unbiased manner.

Relevance/Longevity rating: 5

The relevance of this text is its greatest strength. It is one of the strongest research texts I’ve encountered, and while change always comes, this text will survive new iterations of research, needing only minimal and straightforward updates.

Clarity rating: 5

As a research text, this is extremely user friendly. It is easy to read, direct, and does not interfere with student understanding. Students come away with a good understanding of the concepts from this text, and many continue to use it beyond the classroom.

Consistency rating: 5

This text is consistent with research methods and frameworks and stands alone among social work research texts as the most accessible, due to its status as an OER and as a social work textbook.

Modularity rating: 5

This text is easily divisible into smaller readings; it works great for courses in which assignments are scaffolded to move students through the research process.

Organization/Structure/Flow rating: 5

This text is organized to walk the student through the research process from start to finish, and is easily adjusted for different teaching styles.

Interface rating: 5

This text has no significant interface issues; the readings, links, and images are easily accessible and are presented in a way that does not interfere with student learning.

Grammatical Errors rating: 5

This text is well edited and formatted.

Cultural Relevance rating: 5

This text is culturally relevant, addresses issues of cultural relevance to social work, and highlights the role of social work values within the realm of social work research.

This is one of the best research texts I’ve encountered in over a decade of teaching. It is easily digested, presents information in a direct and understandable way, and is one of the best texts for those teaching graduate-level research for social workers. It is an inclusive text that honors the multiple levels of knowledge that our students come to us with, which helps set it apart. And the commitment throughout the text to social work values and ethics is critical for today’s social worker.

Reviewed by Laura Montero, Full-time Lecturer and Course Lead, Metropolitan State University of Denver on 12/23/23


Comprehensiveness rating: 4

Graduate Research Methods in Social Work by DeCarlo, et al., is a comprehensive and well-structured guide that serves as an invaluable resource for graduate students delving into the intricate world of social work research. The book is divided into five distinct parts, each carefully curated to provide a step-by-step approach to mastering research methods in the field. Topics covered include an intro to basic research concepts, conceptualization, quantitative & qualitative approaches, as well as research in practice. At 800+ pages, however, the text could be received by students as a bit overwhelming.

Content appears consistent and reliable when compared to similar textbooks in this topic.

The book's well-structured content begins with fundamental concepts, such as the scientific method and evidence-based practice, guiding readers through the initiation of research projects with attention to ethical considerations. It seamlessly transitions to detailed explorations of both quantitative and qualitative methods, covering topics like sampling, measurement, survey design, and various qualitative data collection approaches. Throughout, the authors emphasize ethical responsibilities, cultural respectfulness, and critical thinking. These are crucial concepts we cover in social work and I was pleased to see these being integrated throughout.

The level of the language used is appropriate for graduate-level study.

Book appears to be consistent in the tone and terminology used.

Modularity rating: 4

The images and videos included help to break up large text blocks.

Topics covered are well-organized and comprehensive. I appreciate the thorough preamble the authors include to situate the role of the social worker within a research context.

Interface rating: 4

When downloaded as a PDF, the book’s content does not begin until page 30+, so students may have to scroll a long way to reach the content they are looking for. Also, making the Table of Contents clickable would help in navigating this very long textbook.

I did not find any grammatical errors or typos in the pages reviewed.

I appreciate the efforts made to integrate diverse perspectives, voices, and images into the text. The discussion around ethics and cultural considerations in research was nuanced and comprehensive as well.

Overall, the content of the book aligns with established principles of social work research, providing accurate and up-to-date information in a format that is accessible to graduate students and educators in the field.

Reviewed by Elisa Maroney, Professor, Western Oregon University on 1/2/22


With well over 800 pages, this text is beyond comprehensive!

I perused the entire text, but my focus was on "Part 4: Using qualitative methods." This section seems accurate.

As mentioned above, my primary focus was on the qualitative methods section. This section is relevant to the students I teach in interpreting studies (not a social sciences discipline).

This book is well-written and clear.

Navigating this text is easy because the formatting is consistent.

My favorite part of this text is that it can be easily customized, so that I can use the sections on qualitative methods.

The text is well-organized, and it is easy to find and link to related sections in the book.

There are no distracting or confusing features. The book is long; being able to customize makes it easier to navigate.

I did not notice grammatical errors.

The authors offer resources for Afrocentricity for social work practice (among others, including those related to Feminist and Queer methodologies). These are relevant to the field of interpreting studies.

I look forward to adopting this text in my qualitative methods course for graduate students in interpreting studies.

Table of Contents

  • 1. Science and social work
  • 2. Starting your research project
  • 3. Searching the literature
  • 4. Critical information literacy
  • 5. Writing your literature review
  • 6. Research ethics
  • 7. Theory and paradigm
  • 8. Reasoning and causality
  • 9. Writing your research question
  • 10. Quantitative sampling
  • 11. Quantitative measurement
  • 12. Survey design
  • 13. Experimental design
  • 14. Univariate analysis
  • 15. Bivariate analysis
  • 16. Reporting quantitative results
  • 17. Qualitative data and sampling
  • 18. Qualitative data collection
  • 19. A survey of approaches to qualitative data analysis
  • 20. Quality in qualitative studies: Rigor in research design
  • 21. Qualitative research dissemination
  • 22. A survey of qualitative designs
  • 23. Program evaluation
  • 24. Sharing and consuming research

About the Book

We designed our book to help graduate social work students through every step of the research process, from conceptualization to dissemination. Our textbook centers cultural humility, information literacy, pragmatism, and an equal emphasis on quantitative and qualitative methods. It includes extensive content on literature reviews, cultural bias and respectfulness, and qualitative methods, in contrast to traditionally used commercial textbooks in social work research.  

Our author team spans across academic, public, and nonprofit social work research. We love research, and we endeavored through our book to make research more engaging, less painful, and easier to understand. Our textbook exercises direct students to apply content as they are reading the book to an original research project. By breaking it down step-by-step, writing in approachable language, as well as using stories from our life, practice, and research experience, our textbook helps professors overcome students’ research methods anxiety and antipathy.  

If you decide to adopt our resource, we ask that you complete this short Adopter’s Survey, which helps us keep track of our community impact. You can also contact [email protected] for a student workbook, homework assignments, slideshows, a draft bank of quiz questions, and a course calendar.

About the Contributors

Matt DeCarlo , PhD, MSW is an assistant professor in the Department of Social Work at La Salle University. He is the co-founder of Open Social Work (formerly Open Social Work Education), a collaborative project focusing on open education, open science, and open access in social work and higher education. His first open textbook, Scientific Inquiry in Social Work, was the first developed for social work education, and is now in use in over 60 campuses, mostly in the United States. He is a former OER Research Fellow with the OpenEd Group. Prior to his work in OER, Dr. DeCarlo received his PhD from Virginia Commonwealth University and has published on disability policy.

Cory Cummings , Ph.D., LCSW is an assistant professor in the Department of Social Work at Nazareth University. He has practice experience in community mental health, including clinical practice and administration. In addition, Dr. Cummings has volunteered at safety net mental health services agencies and provided support services for individuals and families affected by HIV. In his current position, Dr. Cummings teaches in the BSW program and MSW programs; specifically in the Clinical Practice with Children and Families concentration. Courses that he teaches include research, social work practice, and clinical field seminar. His scholarship focuses on promoting health equity for individuals experiencing symptoms of severe mental illness and improving opportunities to increase quality of life. Dr. Cummings received his PhD from Virginia Commonwealth University.

Kate Agnelli, MSW, is an adjunct professor at VCU's School of Social Work, teaching master's-level classes on research methods, public policy, and social justice. She also works as a senior legislative analyst with the Joint Legislative Audit and Review Commission (JLARC), a policy research organization reporting to the Virginia General Assembly. Before working for JLARC, Ms. Agnelli worked for several years in government and nonprofit research and program evaluation. In addition, she has several publications in peer-reviewed journals, has presented at national social work conferences, and has served as a reviewer for Social Work Education. She received her MSW from Virginia Commonwealth University.



research@BSPH

The School’s research endeavors aim to improve the public’s health in the U.S. and throughout the world.


Conducting Research That Addresses Public Health Issues Worldwide

Systematic and rigorous inquiry allows us to discover the fundamental mechanisms and causes of disease and disparities. At our Office of Research (research@BSPH), we translate that knowledge to develop, evaluate, and disseminate treatment and prevention strategies and inform public health practice. Research along this entire spectrum represents a fundamental mission of the Johns Hopkins Bloomberg School of Public Health.

From laboratories at Baltimore's Wolfe Street building, to maternity wards in densely packed neighborhoods of Bangladesh, to field studies in rural Botswana, Bloomberg School faculty lead research that directly addresses the most critical public health issues worldwide. Research spans from molecules to societies and relies on methodologies as diverse as bench science and epidemiology. That research is translated into impact: discovering ways to eliminate malaria, increase healthy behavior, reduce the toll of chronic disease, improve the health of mothers and infants, and change the biology of aging.

120+ countries engaged in research activity by BSPH faculty and teams.


Departments

Our 10 departments offer faculty and students the flexibility to focus on a variety of public health disciplines.

Centers and Institutes Directory

Our 80+ Centers and Institutes provide a unique combination of breadth and depth, and rich opportunities for collaboration.

Institutional Review Board (IRB)

The Institutional Review Board (IRB) oversees two IRBs registered with the U.S. Office of Human Research Protections, IRB X and IRB FC, which meet weekly to review human subjects research applications for Bloomberg School faculty and students.

Generosity helps our community think outside the traditional boundaries of public health, working across disciplines and industries, to translate research into innovative health interventions and practices.

Introducing the research@BSPH Ecosystem

The research@BSPH ecosystem aims to foster an interdependent sense of community among faculty researchers, their research teams, administration, and staff that leverages knowledge and develops shared responses to challenges. The ultimate goal is to work collectively to reduce administrative and bureaucratic barriers related to conducting experiments, recruiting participants, analyzing data, hiring staff, and more, so that faculty can focus on their core academic pursuits.


Research at the Bloomberg School is a team sport.

In order to provide extensive guidance, infrastructure, and support in pursuit of its research mission, research@BSPH employs three core areas: strategy and development, implementation and impact, and integrity and oversight. Our exceptional research teams, composed of faculty, postdoctoral fellows, students, and committed staff, are united in a collaborative, collegial, and entrepreneurial approach to problem solving. The Bloomberg School ensures that our research is accomplished according to the highest ethical standards and complies with all regulatory requirements. In addition to our institutional review board (IRB), which provides oversight for human subjects research, basic science studies employ techniques to ensure the reproducibility of research.

Research@BSPH in the News

Four Bloomberg School Faculty Elected to National Academy of Medicine

Considered one of the highest honors in the fields of health and medicine, NAM membership recognizes outstanding professional achievements and commitment to service.

The Maryland Maternal Health Innovation Program Grant Renewed with Johns Hopkins

Lerner Center for Public Health Advocacy Announces Inaugural Sommer Klag Advocacy Impact Award Winners

Bloomberg School faculty Nadia Akseer and Cass Crifasi selected winners at Advocacy Impact Awards Pitch Competition


Why the Garland School of Social Work for a Master of Social Work degree?

At the Garland School, we believe social work is about service and justice, healing and restoration, and the dignity of each individual. Through innovative academics and experiential learning opportunities, both in-person and online, we strive to train and equip social work professionals to support the needs of clients through the ethical integration of faith and practice. Students at the GSSW are challenged by expert faculty members, a rigorous curriculum, and outstanding peer cohorts in an environment that allows every student to select a community in which to thrive as they complete their coursework, whether in-person or online, clinical or community practice.

Gabby White, MSW alumna: "I chose social work because helping people from a social justice standpoint drew me in, and I chose Baylor Social Work because the integration of faith and practice showed me that I could bring all parts of me to the profession."

Faith & Practice: Our 10th Competency

Social work—particularly social work education at Baylor—recognizes diverse expressions of faith and seeks to honor the role of spirituality as part of what holistically shapes a person, their family, and their community. Perhaps you are interested in social work because of the way your faith has motivated you or others. As part of our program, students learn about the influence of these beliefs and values in the profession. We call this our 10th competency: the Council on Social Work Education requires nine competencies as part of our accreditation, but the GSSW has a 10th.


Faculty mentorship and research opportunities through our Waco MSW & online MSW programs

GSSW faculty are renowned, expert leaders in the social work profession, have a passion for sharing knowledge with the students of the GSSW, and lead by example in and out of the classroom. They model servant leadership and offer students a unique opportunity through mentorship. Faculty mentors are resources for students as they navigate the MSW program, and mentors often continue beyond graduation to develop professional relationships with their mentees as colleagues. Students also have the opportunity to partner with faculty on research through specialization projects throughout their MSW experience.

Small, lively classes both in-person and online

The GSSW student-to-professor ratio is 10:1, and the average class size is 15 students, which translates into an environment of active, meaningful learning. The size and style of classes offered at the GSSW, both in-person and online, give students the opportunity to connect with classmates and professors on a deeper level. Just as in the profession and across the world, professors and students come from all walks of life and bring their own unique perspectives into the learning environment. Our classes allow for deep engagement, lively discussion, and true connection with each other.

Dual degree options in Waco

The GSSW partners with departments at the university to provide dual degree options that allow students to connect their passions. Dual degrees are offered at our Waco campus in partnership with Baylor's Hankamer School of Business and George W. Truett Theological Seminary. Degrees offered include the MSW/MBA, MSW/MDiv, and MSW/MTS.


Open access | Published: 18 June 2024

Adapting and testing measures of organizational context in primary care clinics in KwaZulu-Natal, South Africa

  • Hannah H. Leslie 1,
  • Sheri A. Lippman 1,2,
  • Alastair van Heerden 3,4,
  • Mbali Nokulunga Manaka 3,
  • Phillip Joseph 3,
  • Bryan J. Weiner 5 &
  • Wayne T. Steward 1

BMC Health Services Research, volume 24, Article number: 744 (2024)


Abstract

Background

Implementation science frameworks situate intervention implementation and sustainment within the context of the implementing organization and system. Aspects of organizational context such as leadership have been defined and measured largely within US health care settings characterized by decentralization and individual autonomy. The relevance of these constructs in other settings may be limited by differences like collectivist orientation, resource constraints, and hierarchical power structures. We aimed to adapt measures of organizational context in South African primary care clinics.

Methods

We convened a panel of South African experts in social science and HIV care delivery and presented implementation domains informed by existing frameworks and prior work in South Africa. Based on panel input, we selected contextual domains and adapted candidate items. We conducted cognitive interviews with 25 providers in KwaZulu-Natal Province to refine measures. We then conducted a cross-sectional survey of 16 clinics with 5–20 providers per clinic (N = 186). We assessed reliability using Cronbach's alpha and calculated interrater agreement (a_wg) and the intraclass correlation coefficient (ICC) at the clinic level. Within clinics with moderate agreement, we calculated correlation of clinic-level measures with each other and with hypothesized predictors – staff continuity and infrastructure – and a clinical outcome, patient retention on antiretroviral therapy.

Results

Panelists emphasized contextual factors; we therefore focused on elements of clinic leadership, stress, cohesion, and collective problem solving (critical consciousness). Cognitive interviews confirmed the salience of the domains and improved item clarity. After excluding items related to leaders' coordination abilities due to missingness and low agreement, all other scales demonstrated individual-level reliability and at least moderate interrater agreement in most facilities. ICC was low for most leadership measures and moderate for others. Measures tended to correlate within facility, and higher stress was significantly correlated with lower staff continuity. Organizational context was generally rated more positively in facilities that showed consistent agreement.

Conclusions

As theorized, organizational context is important in understanding program implementation within the South African health system. Most adapted measures show good reliability at individual and clinic levels. Additional revision of existing frameworks to suit this context and further testing in high and low performing clinics is warranted.


Background

Despite the large investment in research to identify clinical and behavioral interventions to improve HIV prevention and care, many efficacious programs never get incorporated into policy or scaled into clinical settings; others fail when put into practice [1, 2]. In contexts such as South Africa, with 7.7 million people living with HIV (PLHIV), 4.8 million on antiretroviral therapy (ART) [3], and an aging population of PLHIV who have increasingly complex care needs [4, 5], scaling interventions that ensure effective, evidence-based care is a priority [6]. To this end, the field of implementation science has begun to shed light on why some efficacious interventions have not translated into programmatic successes, noting factors that must be addressed within the clinical environment to improve implementation and sustainment [1, 7, 8].

Implementation science frameworks situate interventions within the organizational context of a health care setting. The Exploration, Preparation, Implementation, Sustainment (EPIS) conceptual framework includes absorptive capacity, culture, climate, and leadership as elements of the context that shape exploration of interventions [9, 10], while the Consolidated Framework for Implementation Research (CFIR) identifies domains such as culture, implementation climate, and readiness for implementation as key factors at the organizational or team level [11, 12]. Recent updates to CFIR have focused on clarifying these domains as antecedents on the pathway to implementation outcomes [12]. The theory of Organizational Readiness for Change similarly identifies contextual factors such as the culture and climate of the organization that help to shape readiness for a specific change, which in turn affects implementation effectiveness [13]. Researchers have drawn on these definitions in efforts to better measure organizational characteristics: a 2017 systematic review found 76 articles attempting to measure organizational context, a majority of which were based in the United States; the authors recommended greater efforts to use mixed-methods research to develop and test measures in a range of settings [14].

Measures developed within the US health care system reflect the decentralized nature of the system, the national culture of individualism, and high levels of clinical autonomy that distinguish the US health care system from that of many other nations with more hierarchical, top-down power structures. The lack of validated measures of organizational context in centralized health systems, particularly in low-resource countries where primary care clinics are overextended, contributes to a clear gap in understanding which contextual factors impact successful program implementation and how these factors can be addressed [15, 16]. Research in South Africa from our team and others has found that program implementation can be heavily influenced by clinic leadership, particularly leaders' problem-solving skills, in addition to provider teamwork and clinic environment such as material and human resources [17, 18, 19, 20]. Qualitative assessment across multiple levels of the health system in KwaZulu-Natal Province identified perceived benefits of a particular program as well as broader resource availability and clear communication as factors shaping integration of HIV programming into general care [21]. Recent research on implementation of maternal health quality improvement underscored the importance of leadership, teamwork, and provider motivation in maintaining consistent implementation of interventions, particularly in the face of external factors such as the COVID-19 pandemic, budget cuts, and labor actions [20].

In this study, we aimed to adapt implementation science frameworks to the context of primary care in South Africa, to develop and test measures of organizational context based on the adapted framework, and to assess if the resulting measures demonstrated associations with hypothesized determinants and outcomes of organizational context.

Methods

We report this study, which included formative qualitative work and a cross-sectional survey, based on recommendations for scale development and testing studies [22] and following STROBE guidelines for observational research (Additional file 1).

Study setting

This study took place in uMgungundlovu District in KwaZulu-Natal Province, South Africa. The district includes the capital city of Pietermaritzburg but is otherwise largely rural. Adult HIV prevalence is estimated at 30%, and the 57 Department of Health (DOH) facilities provide ART for approximately 140,000 individuals [23, 24]. The U.S. President's Emergency Plan for AIDS Relief (PEPFAR) supports the national HIV response in this district by funding implementing partner organizations that second staff to DOH facilities in support of HIV care and that provide specific services like clinical mentoring or client tracing following disengagement from care.

Domain and item generation

We followed an iterative approach to define priority domains and identify potential items. We first synthesized implementation science literature and existing research in South Africa into a conceptual model delineating organizational context as an overall determinant of program-specific domains and ultimately organizational readiness for change (Fig. 1). We defined key elements of organizational context as service readiness (the resources available for service provision), stress and workload, leadership (including leadership practice, communication, and direction on roles and responsibilities), learning climate as a space for shared trial and evaluation, and team cohesion (trust and shared values). Primary care clinics in this area typically operate with an Operational Manager (OM) who is also a professional nurse, a deputy manager, and a small number of nurses who rotate through clinical services; we initially conceptualized the full clinic as the organizational unit, the OM and deputy as leaders, and the nursing and support staff as the relevant team. We used literature searches, investigator knowledge and networks, and the Community Advisory Board of the Human Sciences Research Council to identify participants for an Expert Panel composed of health care providers, program managers, social scientists, and DOH representatives, all familiar with the provision of HIV care in the province. We convened the panel in a hybrid format and presented the initial conceptual framework for their review. The Expert Panel affirmed the primacy of organizational context in shaping program implementation in this setting, and noted major aspects to consider: that program implementation and sustainment is driven by top-down directives, that the context of overburdened facilities and resource constraints shapes program uptake, that leadership communication and management of complex relationships within and beyond facilities is critical, and that providers face high demands and shifting roles that can make teamwork particularly important. Panelists noted that the concept of a learning climate informed by a quality improvement feedback loop was not present within primary facilities; they highlighted leadership's role in monitoring performance as more salient, complemented by providers' active interest in solving implementation problems. Panelists concurred with conceptualizing clinics as an organizational unit with distinct leaders and a single team of providers. Following the Panel discussion, we returned to the literature to clarify candidate constructs and identify potential items. We also mapped service readiness to specific human and material resources that inform organizational context for the full clinic.

Fig. 1. Original conceptual model of factors shaping program implementation

We identified 7 latent constructs for measurement, including 4 related to leadership; Table 1 defines each construct and the accompanying measure. Scales on leadership engagement, feedback and monitoring, and stress addressed our revised conceptual framework, and each was based on an existing measure validated in other contexts [25, 26]. To measure teamwork, we adapted a measure of cohesion we had previously validated in community settings in South Africa [27]. In place of directly assessing "learning climate", we adapted a measure on critical consciousness, which we had originally developed to capture community empowerment and learning culture [27, 28], to address problem solving within the facility as described by the Expert Panel. Two scales were not direct adaptations. Based on other research in South African clinics [17] and Expert Panel input on the importance of leaders in optimizing implementation in light of resource constraints, we drew from and expanded on existing items on change efficacy [29, 30] to create a leadership-focused scale on resource mobilization and problem solving. To capture coordination in this setting, we developed new items on the specifics of coordinating facility staff, implementing partners, and community leaders. The Expert Panel reviewed and revised items for clarity; we reconvened a subset of the panel to finalize items collectively. Items were translated into isiZulu and back-translated to English by co-author MM.

We conducted cognitive interviews with 25 participants in 2 stages from June to September 2021 for content validation of the proposed items, allowing time to revise between stages. We sampled 19 providers (primarily nurses) and 6 operational managers (OMs, clinic leaders) from 5 facilities and tested 3 to 4 scales per interview. We selected nurses and OMs for cognitive interviews based on their central responsibility for delivering care and hence capacity to answer all proposed items, including items on clinical care delivery not included in this analysis. We used the think-aloud method and probed respondents on clarity of the item and response choices, thought processes leading to their response, and ease of answering. We asked respondents to identify overlapping or redundant items. Interviews were conducted in English or isiZulu. We recorded interviews and translated and transcribed them for analysis. We conducted rapid analysis to assess key terms such as "clinic leaders" and "clinic staff," and we iteratively revised items and scales for clarity and efficiency. For instance, we revised an item on whether "leaders make use of all available staff to implement clinic programs" to whether "leaders make the most of the staff available" based on responses that not all staff are needed or appropriate for a given program. Interviewees provided consistent responses in conceptualizing their clinics as a single unit, identifying the OM and deputy as the relevant leaders, and defining nursing and support staff personnel as a team. The final instrument included 4 or 5 items per scale with 4 response options for each statement (see Additional file 2, Table S1 for all items). Greater agreement indicated respondents perceived more of that construct within the clinic; all constructs except for stress were positive aspects of organizational culture, and most items on these constructs had positive stems. Items with negative stems were maintained as written for inter-item assessment and reverse-scored for subsequent analyses.
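As an illustration of the scoring rules just described, the minimal sketch below shows reverse-scoring of a negative stem and computation of a respondent's scale score as the mean of item scores. It is not part of the study instrument; the item names and responses are hypothetical.

```python
# Sketch of scale scoring: items coded 1-4 ("Strongly disagree" to
# "Strongly agree"), negative stems reverse-scored, scale score = item mean.
# Item names are hypothetical placeholders, not the study instrument.

def reverse_score(value: int, low: int = 1, high: int = 4) -> int:
    """Flip a response on a low-high Likert scale (e.g., 1 -> 4, 2 -> 3)."""
    return high + low - value

def scale_score(responses: dict, items: list, negative_items: set) -> float:
    """Mean item score after reverse-scoring negatively worded items."""
    scored = [
        reverse_score(responses[item]) if item in negative_items else responses[item]
        for item in items
    ]
    return sum(scored) / len(scored)

# Hypothetical 4-item cohesion scale with one negative stem.
items = ["coh1", "coh2", "coh3", "coh4"]
negative = {"coh3"}
respondent = {"coh1": 3, "coh2": 4, "coh3": 2, "coh4": 3}
print(scale_score(respondent, items, negative))  # (3 + 4 + 3 + 3) / 4 = 3.25
```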

Data collection

To test the proposed measures, we calculated a minimum sample size of 12 providers each within 14 facilities (168 respondents) to provide > 80% power to detect a correlation of at least 0.57 with alpha of 0.05, one-sided. To ensure this sample size while including facilities with fewer than 12 providers total, we sampled 16 facilities using random selection among facilities with at least 100 patients on ART based on provincial TIER.net data, stratified by ART patient population size (< 2000, > 2000) to account for possible differences between smaller clinics and larger clinics with potentially more complex structures. Facilities participating in cognitive interviews were ineligible. Within facilities, we selected all OMs and used stratified random sampling to select up to 8 higher-level nurses (Professional Nurses, Certified Nursing Professionals, Registered Nurses) and up to 8 auxiliary nurses and other patient-facing providers engaged in HIV care (Enrolled Nurses or Enrolled Nursing Assistants, Nutritionists, Pharmacists, Nutrition or Pharmacy Assistants, Lay Counselors), as the Expert Panel and cognitive interviews confirmed that these personnel were considered part of the provider team. Selection was conducted by ordering providers at random within strata so that replacement respondents were available, when possible, if selected providers could not participate. We administered surveys via Research Electronic Data Capture (REDCap) to capture basic demographics on providers and their roles, the organizational context measures, and additional measures on conduct of specific programs analyzed elsewhere. Data collection took place from October 2021 to March 2022. National restrictions related to the COVID-19 pandemic, including limitations on clinic scheduling and staff meetings, were shifted to alert level 1 (the lowest level) on October 1, 2021 and remained at that level throughout data collection [31]; routine clinical practices continued to be affected during the study period by considerations such as diverting staff for vaccination campaigns.
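A minimal sketch of this selection scheme, with an invented roster and hypothetical stratum labels, might look as follows: providers are shuffled within each stratum, the first k form the primary sample, and the rest stand by, in order, as replacements.

```python
# Illustrative within-facility sampling: shuffle providers within each
# stratum, take the first k as the primary sample, and keep the remainder,
# in order, as replacements. Roster names and strata are hypothetical.
import random

def sample_with_replacements(providers_by_stratum: dict, k: int, seed: int = 1):
    random.seed(seed)
    sampled, replacements = {}, {}
    for stratum, providers in providers_by_stratum.items():
        order = providers[:]               # copy so the roster is left intact
        random.shuffle(order)              # random order within the stratum
        sampled[stratum] = order[:k]       # primary sample (up to k providers)
        replacements[stratum] = order[k:]  # next in line if someone is unavailable
    return sampled, replacements

roster = {
    "higher_level_nurses": ["PN1", "PN2", "PN3", "PN4", "PN5"],
    "auxiliary_and_other": ["EN1", "EN2", "LC1", "PH1"],
}
primary, backups = sample_with_replacements(roster, k=3)
```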

We conducted a concurrent facility audit of human and material resources using direct observation and a survey with the OM, or a proxy if no OM was available. From the audit, we calculated a summary score based on the presence of key infrastructure; indicators were adapted from the World Health Organization Service Availability and Readiness Assessment and are listed in Additional file 2, Table S2 [32]. We calculated staff continuity based on the number of clinical staff and the number of clinical positions with turnover in the past year. We also extracted routine program data on HIV patient outcomes from the district and national reporting systems. We calculated aggregate retention on ART per facility as the number of patients remaining on ART as of March 2022 out of the number reported on ART in March 2021 or newly starting ART between March 2021 and February 2022.
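As a worked illustration of this retention measure (all numbers invented, not study data):

```python
# Facility-level ART retention as defined above: patients on ART at the end
# of the window divided by patients on ART at the start plus new starts
# during the window. Numbers below are invented for illustration.
def art_retention(on_art_end: int, on_art_start: int, new_starts: int) -> float:
    return on_art_end / (on_art_start + new_starts)

# e.g., 1900 retained out of 1800 existing + 200 newly started patients
print(f"{art_retention(1900, 1800, 200):.0%}")  # 95%
```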

Data analysis

Analysis proceeded in four stages. First, we conducted descriptive analysis of facilities, providers, and items. We used proportions and medians to summarize measures from the facility audit and provider surveys. We assessed incomplete responses and straightline (invariant) responses by provider and measure. Second, we conducted agreement and reliability checks at the individual and facility levels. We quantified inter-item agreement with Cronbach's alpha among complete responses. We calculated the a_wg(j) statistic as a measure of agreement within facility; this calculation assumes a set of parallel items answered by multiple raters and can be interpreted similarly to Cohen's kappa. We report mean a_wg(j) across facilities and the number of facilities achieving at least moderate agreement (a_wg(j) ≥ 0.50) per measure [33]. We calculated mean respondent score per measure and used the intraclass correlation coefficient (ICC) to quantify facility-level agreement and reliability among all participants and again limited to professional nurses. Third, we conducted convergent validation using the Spearman rank correlation coefficient of all measures within facility and testing correlation with human and material resources hypothesized to shape organizational context – infrastructure and staff continuity – as well as with retention of patients on ART as a clinical outcome. All validity analyses were limited to facilities demonstrating moderate agreement on the relevant measure. We assessed correlations against a pre-specified threshold of rho = 0.30 (moderate effects [34]) and reported statistical significance. Fourth, as an exploratory analysis, we compared respondents' average scale scores between facilities with a_wg(j) ≥ 0.50 (moderate agreement) on all measures versus facilities where moderate agreement was obtained on fewer measures; we used linear generalized estimating equation (GEE) models accounting for clustering within facility. All analyses were conducted in Stata version 17.
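To make these statistics concrete, the sketch below reimplements them in Python (the study's analyses were run in Stata, so this is illustrative, not the study code). It assumes the Brown and Hauenstein formulation of a_wg(j) as summarized by LeBreton and Senter [33] (item-level a_wg averaged across a scale's items) and a balanced design for the one-way ICC; all data are invented.

```python
# Illustrative reimplementation of the statistics described above, applied
# to a respondents-by-items matrix for one scale within one facility.
import numpy as np

def cronbach_alpha(X: np.ndarray) -> float:
    """Inter-item consistency; X is respondents x items."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def a_wg_item(x: np.ndarray, low: int = 1, high: int = 4) -> float:
    """Within-group agreement for one item rated by N judges on [low, high]."""
    n, m, s2 = len(x), x.mean(), x.var(ddof=1)
    max_var = ((high + low) * m - m**2 - high * low) * (n / (n - 1))
    if max_var == 0:  # mean at a scale endpoint: a_wg is undefined
        return float("nan")
    return 1 - (2 * s2) / max_var

def a_wg_j(X: np.ndarray, low: int = 1, high: int = 4) -> float:
    """Multi-item a_wg(j): mean of item-level a_wg across the scale."""
    return float(np.mean([a_wg_item(X[:, j], low, high) for j in range(X.shape[1])]))

def icc1(groups: list) -> float:
    """One-way ICC(1) from per-facility lists of respondent scale scores
    (balanced groups assumed to keep the sketch simple)."""
    k = len(groups[0])                                      # raters per facility
    msb = k * np.var([np.mean(g) for g in groups], ddof=1)  # between-facility MS
    msw = np.mean([np.var(g, ddof=1) for g in groups])      # within-facility MS
    return (msb - msw) / (msb + (k - 1) * msw)

# One facility's responses: 5 raters x 4 items on a 1-4 agreement scale.
X = np.array([[3, 3, 3, 3],
              [3, 4, 3, 3],
              [4, 4, 3, 4],
              [3, 3, 3, 3],
              [3, 3, 4, 3]])
print(round(cronbach_alpha(X), 2))  # 0.53
print(round(a_wg_j(X), 2))          # 0.78

# ICC(1) across three toy facilities' mean scale scores.
print(round(icc1([[3.0, 3.2, 3.5], [2.0, 2.4, 2.2], [3.8, 3.6, 3.9]]), 2))  # 0.94
```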

Ethics approvals

This study was approved by the Institutional Review Board at the University of California, San Francisco (20–31802), the Research Ethics Committee at the Human Sciences Research Council (REC 1/19/08/20), and the uMgungundlovu Health District; all methods were carried out in accordance with relevant guidelines and regulations. All facilities provided consent for inclusion and each participant provided written informed consent to participate.

Results

A representative from each of the 16 facilities consented to participation and assisted in completion of the facility audit; data from all facilities were extracted in full from district reporting. The median facility had 14 full-time clinical staff; only 3 of 16 facilities had all positions filled with permanent personnel (no vacancies, no interim posts) at the time of assessment (Table 2A). Most facilities demonstrated some gaps in core infrastructure: the median score was 61%. Routine district data suggested retention on ART was high in all facilities, with a median of 95% of patients retained between March 2021 and March 2022.

Of 194 providers approached, 186 consented to participate (95.9%); those declining cited insufficient time. One respondent who worked as a data capturer rather than in a patient-facing role was excluded from analysis. Consistent with health care providers in South Africa generally, most respondents were female (87.5%, Table 2B). Due to extensive turnover in leadership of some clinics, surveys were completed by OMs at 14 of 16 facilities, including one interim OM. Just over one third of respondents were non-OM nurses. Respondents reported a median of 8 years of professional experience and 6 years at their current facility.

At the individual level, respondents tended to agree with most items: average scores for the proposed measures clustered near 3 = "Agree" out of the possible range of 1–4 (Table 3). Straightline responses were common, ranging from 34% of respondents on the measure of stress to 54% for critical consciousness; nearly all such responses were all "Agree" except on the measure of stress, where straightline responses were split between "Agree" and "Disagree" (data not shown). Leadership coordination had the highest missingness, with 59 participants responding "Don't know" or skipping at least one item, primarily two items related to the external clinic committee (whether the committee met regularly and whether leaders acted on its input). Cronbach's alpha indicated moderate to strong inter-item agreement for all measures except coordination. We excluded coordination from subsequent analysis given the high degree of missingness and inadequate inter-item agreement.
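For clarity, a straightline response here is one with no variation across a scale's items; a minimal check (illustrative, with invented data) follows.

```python
# Flag respondents whose answers do not vary across a scale's items.
import numpy as np

def is_straightline(row: np.ndarray) -> bool:
    return bool(np.all(row == row[0]))

X = np.array([[3, 3, 3, 3],   # straightline ("Agree" to every item)
              [3, 4, 2, 3]])  # varied responses
print([is_straightline(r) for r in X])  # [True, False]
```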

Facilities accounted for up to 22–23% of total variance in mean scores (for critical consciousness and stress, respectively). ICC exceeded a minimum threshold of 0.05 for feedback and monitoring, stress, cohesion, and critical consciousness (Table 3B); near-zero ICC for leadership engagement and resource mobilization suggested these measures could not reliably distinguish between facilities. ICC was higher when limited to professional nurses for most measures, suggesting that for measures other than leadership engagement, professional nurses responded more consistently within facilities than other providers. The distribution of facility means underscores the homogeneity of scales like leadership engagement across all facilities (Additional file 2, Figure S1).

Item responses demonstrated moderate to strong agreement within facilities, with a_wg(j) ranging from 0.57 for stress (12 of 16 facilities with at least moderate agreement) to 0.78 for critical consciousness (all facilities with at least moderate agreement). Facilities with a_wg(j) < 0.50 demonstrated agreement too inconsistent for a summary statistic to be considered representative of the facility as a whole.

Limiting analysis to facilities with at least moderate agreement on a given measure, we found that the 6 measures of organizational context showed substantial correlation within facility: absolute correlation exceeded the predetermined threshold of 0.30 in all cases and achieved statistical significance at p < 0.05 for multiple assessments despite the small number of facilities (Table 4A). When compared with predicted inputs and outcomes of facility climate (Table 4B), correlation with staff continuity was moderate (rho > 0.30) for feedback and monitoring, resource mobilization, and stress, with only stress showing a statistically significant correlation (-0.68). Higher scores on feedback and monitoring were correlated with lower facility infrastructure (rho = -0.53), contrary to expectation. Cohesion was correlated with higher retention on ART (rho = 0.49, p = 0.09).

In our exploratory analysis to understand potential differences in context in facilities where staff were largely in agreement on their scoring, we found that respondents in the 9 facilities with moderate agreement on all scales reported more positive organizational context than those in the other 7 facilities, with statistically significant differences in leadership engagement, resource mobilization, and stress (Table 5). The largest difference was in reported stress: average scores on the stress scale were 0.41 points lower (less stress) in facilities with agreement on all scales.

Discussion

In this study, we developed and adapted measures for 7 domains of organizational context based on implementation science frameworks and expertise within primary care clinics in South Africa. The measures demonstrated reasonable individual-level consistency in our study population, except for the coordination scale created de novo; the remaining measures showed moderate to strong agreement and low to moderate reliability within facility. Variance between facilities was modest, possibly reflecting the shared context of a rural setting in a single district. Measures generally correlated with each other at the facility level, though we found limited evidence of relationships between the facility scores and hypothesized predictors and outcomes in validation analyses. Facilities with stronger agreement among respondents also tended to have a more positive context.

The Expert Panel concurred with existing literature and implementation science frameworks that facility leadership was critical to program implementation and sustainment. In this setting of relatively small clinics and distributed responsibilities across staff, they prioritized overall leadership above leadership specific to implementation of one program, which has been more commonly measured in US-based implementation science research [35]. We adapted or developed measures for four aspects of overall leadership hypothesized to improve program implementation: engagement, feedback and monitoring, resource mobilization, and coordination. The newly created items on coordination with external partners and clinic committees proved difficult for some respondents to answer and showed limited agreement even within complete responses. Further efforts to capture this important construct, potentially as an index rather than a scale, are warranted. The other leadership measures demonstrated good item agreement and moderate agreement within facilities in our sample; for measures developed for use at an organizational level, adequate agreement across raters is critical. The finding that ICCs for leadership measures were generally higher among professional nurses, typically the most trained professional cadre in primary care facilities, indicates that these providers responded more consistently within facilities, relative to across facilities, than the full respondent pool, potentially due to greater exposure to clinic leaders or to differing interpretations between professional nurses and other personnel of who qualifies as a 'leader'. Our cognitive interviews demonstrated consistent understanding of 'leader' among the professional nurses and OMs we interviewed; including additional cadres could be useful to extend this evidence. The findings to date support use of these leadership measures within higher cadres such as professional nurses in similar settings, particularly for clinical interventions.

Beyond leadership, we tested measures of stress, cohesion, and critical consciousness hypothesized to shape uptake of new programs. These scales similarly demonstrated good item agreement and moderate inter-rater agreement within facilities; ICCs (0.23, 0.14, and 0.22, respectively) well exceeded the minimum threshold of 0.05 among all participants, suggesting greater consistency of respondents across cadres within facility than for the leadership measures.

The six scales demonstrating sound measurement properties also tended to correlate with one another within facilities, potentially reflecting less random variation in these facilities and/or respondents providing similar answers across scales and between raters within these facilities. Three scales demonstrated some correlation with inputs and outputs in accordance with predictions: resource mobilization, stress, and cohesion. Better resource mobilization was correlated with higher staff continuity. The scale for stress, the only construct indicating a negative climate and where agreement indicated worse performance, had less homogeneity than other scales, surfacing potential to further refine the measure to better distinguish between clinic contexts. Higher stress also showed a correlation with lower staff continuity. The scale for cohesion – teamwork among providers – demonstrated moderate heterogeneity between individuals and between facilities, and was correlated with retention on ART. Given the importance of provider burnout before and especially during the COVID-19 pandemic [36, 37, 38], better assessment of stress and cohesion can help to identify the best performing clinics and to target the facilities most in need of management or individual interventions to foster teamwork and cope with stress.

The remaining scales—on leadership engagement, feedback and monitoring, and critical consciousness—demonstrated two drawbacks. The first was high levels of straightline responses, with approximately half of respondents indicating the same answer—typically "Agree"—to all items. These response patterns could be explained in several ways: 1) truly uniform conditions across and within these facilities within a single district, 2) insufficient distinction between items to capture indications of very low or very high levels of each construct, 3) social desirability within a hierarchical work setting, 4) lack of strong opinion, particularly given the strain providers face to deliver care amidst constrained resources, and/or 5) respondent inattention or fatigue. In the absence of a neutral response option (which we did not provide, to avoid respondents defaulting to the median option), one or more of these explanations could have led respondents to agree repeatedly. This degree of invariance can inflate apparent agreement within individuals and facilities, but it undermines the utility of the measures in distinguishing between facilities, should such distinctions exist.

The second drawback was inconsistent evidence of correlation with hypothesized predictors (staff continuity and infrastructure) and with patient outcomes (retention on ART) in the validation analysis. Careful consideration is required to understand these findings. It is possible that these initial efforts to adapt the constructs of organizational context did not fully capture the dynamics that most strongly shape performance in these facilities. This may be particularly salient given the time of the assessment following the upheaval of the COVID-19 pandemic, which shaped staff continuity, organizational context, and ART retention. An additional limitation is the relative insensitivity of the outcome measure: ART programs are longstanding, and our measure of patient retention based on (imperfect) aggregate data demonstrated little variability across the sampled facilities. Organizational context at the time of assessment may have had little influence on patient retention even had it been measured perfectly. Measures reflecting implementation or sustainment of more recently introduced programs would provide an indicator more sensitive to variation in organizational context.

Our study has multiple strengths, including use of implementation science frameworks and organizational readiness theory to propose measures of organizational context in primary care facilities in South Africa. This work expands on the qualitative work attesting to the importance of organizational context in implementing and maintaining interventions in this setting [17, 20, 21, 39]. We relied on a majority South African Expert Panel to prioritize constructs and items for measurement, and we conducted detailed cognitive interviews to revise and clarify items. Limitations include difficulty in reaching providers – particularly clinic managers – amidst regular turnover and COVID-19 challenges (including the rapid changes in clinical responsibilities and locations, provider illnesses and deaths, and restrictions on routine activities) and the reliance on aggregate patient outcome data that were both imperfect and potentially insensitive to organizational context. Soliciting perspectives on leadership and organizational context in hierarchical settings is inherently fraught; it is difficult to disentangle social desirability from true agreement. Thresholds for agreement measures are imposed on a continuous metric and may not distinguish truly different performance levels [33].

Conclusions

This study was an initial effort to adapt and test measures of organizational context to better understand program implementation in primary care within the South African health system. This work confirms the importance of organizational context from the perspective of those working within primary care clinics and supports standing calls for further efforts to develop and test theories, frameworks, and measures that capture the dynamics of health care delivery in resource-constrained settings. While this initial effort at adaptation of theory and measurement to the context of South African clinics produced scales with sound measurement properties and several scales – notably resource mobilization, stress, and cohesion – with promise for differentiating facilities, further work is needed to understand the most important domains of organizational context shaping patient outcomes. From there, further efforts to refine constructs and measures are warranted, including positive deviance assessments to ensure a sample of facilities with strongly divergent performance and inclusion of implementation outcomes more closely tied to organizational context.

Availability of data and materials

The datasets used during the current study are available from the corresponding author on reasonable request.

Abbreviations

ART: Antiretroviral therapy

a_wg: Agreement within group

CFIR: Consolidated Framework for Implementation Research

COVID-19: Coronavirus disease 2019

DOH: Department of Health

EPIS: Exploration, Preparation, Implementation, Sustainment

GEE: Generalized estimating equation

HIV: Human immunodeficiency virus

ICC: Intraclass correlation coefficient

OM: Operational manager

PEPFAR: President's Emergency Plan for AIDS Relief

REDCap: Research Electronic Data Capture

References

1. Nutbeam D. Achieving 'best practice' in health promotion: improving the fit between research and practice. Health Educ Res. 1996;11(3):317–26.

2. Dionne KY. Doomed interventions: the failure of global responses to AIDS in Africa. Cambridge: Cambridge University Press; 2017.

3. UNAIDS: Joint UN Programme on HIV/AIDS. South Africa [Internet]. 2018 [cited 2019 Dec 11]. Available from: https://www.unaids.org/en/regionscountries/countries/southafrica

4. Gouda HN, Charlson F, Sorsdahl K, Ahmadzada S, Ferrari AJ, Erskine H, et al. Burden of non-communicable diseases in sub-Saharan Africa, 1990–2017: results from the Global Burden of Disease Study 2017. Lancet Glob Health. 2019;7(10):e1375–87.

5. Sharman M, Bachmann M. Prevalence and health effects of communicable and non-communicable disease comorbidity in rural KwaZulu-Natal, South Africa. Trop Med Int Health. 2019;24:1198–207.

6. Croce D, Mueller D, Rizzardini G, Restelli U. Organising HIV ageing-patient care in South Africa: an implementation science approach. S Afr J Public Health. 2018;2(3):59–62.

7. Yamey G. What are the barriers to scaling up health interventions in low and middle income countries? A qualitative study of academic leaders in implementation science. Glob Health. 2012;8(1):11.

8. Geng EH, Peiris D, Kruk ME. Implementation science: relevance in the real world without sacrificing rigor. PLoS Med. 2017;14(4).

9. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.

10. Aarons GA, Green AE, Trott E, Willging CE, Torres EM, Ehrhart MG, et al. The roles of system and organizational leadership in system-wide evidence-based intervention sustainment: a mixed-method study. Adm Policy Ment Health. 2016;43(6):991–1008.

11. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

12. Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17(1):75.

13. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.

14. Allen JD, Towne SD, Maxwell AE, DiMartino L, Leyva B, Bowen DJ, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. 2017;17(1):591.

15. Daivadanam M, Ingram M, Annerstedt KS, Parker G, Bobrow K, Dolovich L, et al. The role of context in implementation research for non-communicable diseases: answering the 'how-to' dilemma. PLoS ONE. 2019;14(4).

16. Alonge O, Rodriguez DC, Brandes N, Geng E, Reveiz L, Peters DH. How is implementation research applied to advance health in low-income and middle-income countries? BMJ Glob Health. 2019;4.

17. Gilson L, Ellokor S, Lehmann U, Brady L. Organizational change and everyday health system resilience: lessons from Cape Town, South Africa. Soc Sci Med. 2020;266:113407.

18. Julien A, Anthierens S, Van Rie A, West R, Maritze M, Twine R, et al. Health care providers' challenges to high-quality HIV care and antiretroviral treatment retention in rural South Africa. Qual Health Res. 2021;31(4):722–35.

19. Leslie HH, West R, Twine R, Masilela N, Steward WT, Kahn K, et al. Measuring organizational readiness for implementing change in primary care facilities in rural Bushbuckridge, South Africa. Int J Health Policy Manag. 2020;11:912–8.

20. Odendaal W, Chetty T, Goga A, Tomlinson M, Singh Y, Marshall C, et al. From purists to pragmatists: a qualitative evaluation of how implementation processes and contexts shaped the uptake and methodological adaptations of a maternal and neonatal quality improvement programme in South Africa prior to, and during, COVID-19. BMC Health Serv Res. 2023;23(1):819.

21. van Heerden A, Ntinga X, Lippman SA, Leslie HH, Steward WT. Understanding the factors that impact effective uptake and maintenance of HIV care programs in South African primary health care clinics. Arch Public Health. 2022;80(1):221.

22. Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. 2014;70(9):1970–9.

23. Department of Health, Republic of South Africa. Province of KwaZulu-Natal Annual Performance Plan 2018/19–2020/21 [Internet]. KwaZulu-Natal, South Africa [cited 2023 Feb 6]. Available from: http://www.kznhealth.gov.za/app/APP-2018-19.pdf

24. Dwyer-Lindgren L, Cork MA, Sligar A, Steuben KM, Wilson KF, Provost NR, et al. Mapping HIV prevalence in sub-Saharan Africa between 2000 and 2017. Nature. 2019;570(7760):189–93.

25. Fernandez ME, Walker TJ, Weiner BJ, Calo WA, Liang S, Risendal B, et al. Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement Sci. 2018;13(1):52.

26. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational Readiness to Change Assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4(1):38.

27. Lippman SA, Neilands TB, Leslie HH, Maman S, MacPhail C, Twine R, et al. Development, validation, and performance of a scale to measure community mobilization. Soc Sci Med. 2016;157:127–37.

28. Lippman SA, Maman S, MacPhail C, Twine R, Peacock D, Kahn K, et al. Conceptualizing community mobilization for HIV prevention: implications for HIV prevention programming in the African context. PLoS ONE. 2013;8(10):e78208.

29. Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement Sci. 2014;9:7.

30. Zullig LL, Muiruri C, Abernethy A, Weiner BJ, Bartlett J, Oneko O, et al. Cancer registration needs assessment at a tertiary medical center in Kilimanjaro, Tanzania. World Health Popul. 2013;14(2):12–23.

31. South African Government. COVID-19 / Coronavirus [Internet]. [cited 2023 Mar 20]. Available from: https://www.gov.za/Coronavirus

32. World Health Organization. Service Availability and Readiness Assessment (SARA) reference manual. Geneva, Switzerland: World Health Organization; 2013.

33. LeBreton JM, Senter JL. Answers to 20 questions about interrater reliability and interrater agreement. Organ Res Methods. 2008;11(4):815–52.

34. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. New York: Routledge; 1988.

35. Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9(1):45.

36. Khamisa N, Oldenburg B, Peltzer K, Ilic D. Work related stress, burnout, job satisfaction and general health of nurses. Int J Environ Res Public Health. 2015;12(1):652–66.

37. van der Colff JJ, Rothmann S. Occupational stress, sense of coherence, coping, burnout and work engagement of registered nurses in South Africa. SA J Ind Psychol. 2009;35(1):1–10.

38. McKnight J, Nzinga J, Jepkosgei J, English M. Collective strategies to cope with work related stress among nurses in resource constrained settings: an ethnography of neonatal nursing in Kenya. Soc Sci Med. 2020;245.

39. Gilson L, Barasa E, Nxumalo N, Cleary S, Goudge J, Molyneux S, et al. Everyday resilience in district health systems: emerging insights from the front lines in Kenya and South Africa. BMJ Glob Health. 2017;2:e000224.


Acknowledgements

The authors are grateful to Anna Leddy for her contributions to survey design and data collection, to the data collection team, the interview and survey respondents who provided their time and insights, and to the Expert Panel for insights and guidance throughout the project. Expert Panelists were: Lungile Mshengu, Nomusa Mtshali, Paul Nijas, Fiona Scorgie, Jonathan Stadler, Michéle Torlutte, Joslyn Walker, Bryan Weiner, and Petra Zama.

Funding

This work was supported by the National Institute of Mental Health R21MH123389 (Lippman & Steward). The funder had no role in the preparation of this manuscript.

Author information

Authors and Affiliations

Division of Prevention Science, Department of Medicine, University of California, San Francisco, San Francisco, USA

Hannah H. Leslie, Sheri A. Lippman & Wayne T. Steward

MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa

Sheri A. Lippman

Division of Human and Social Capabilities, Human Sciences Research Council, Durban, South Africa

Alastair van Heerden, Mbali Nokulunga Manaka & Phillip Joseph

Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, SAMRC/WITS Developmental Pathways for Health Research Unit, University of the Witwatersrand, Johannesburg, South Africa

Alastair van Heerden

Departments of Global Health and Health Systems and Population Health, University of Washington, Seattle, USA

Bryan J. Weiner


Contributions

Conceptualization: HHL, WTS, BJW, AVH, SAL. Methodology: HHL, WTS, BJW, MNM, AVH, SAL. Formal analysis: HHL. Investigation: HHL, WTS, MNM, PJ, AVH, SAL. Writing, original draft: HHL. Writing, review and editing: All. Supervision: WTS, SAL. Project administration: WTS, MNM, PJ, AVH, SAL. Funding acquisition: WTS, SAL, AVH, HHL, BJW.

Corresponding author

Correspondence to Hannah H. Leslie.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Leslie, H.H., Lippman, S.A., van Heerden, A. et al. Adapting and testing measures of organizational context in primary care clinics in KwaZulu-Natal, South Africa. BMC Health Serv Res 24, 744 (2024). https://doi.org/10.1186/s12913-024-11184-9


Received: 04 August 2023

Accepted: 07 June 2024

Published: 18 June 2024

DOI: https://doi.org/10.1186/s12913-024-11184-9


Keywords

  • Organizational context
  • South Africa
  • Primary care
  • Critical consciousness
  • Instrument development
  • Reliability


