Experimental design is regarded as the most rigorous approach to show causal relationships and is labeled as the “gold-standard” in research designs with respect to internal validity [ 34 ]. Experimental design relies on the random assignment of subjects to the condition of interest; random assignment is intended to uphold the assumption that groups (usually experimental vs. control) are probabilistically equivalent, allowing the researcher to isolate the effect of the intervention on the outcome of interest. In implementation research, the experimental condition is often a specific implementation strategy, and the control condition is most often “implementation as usual.” Brown et al. [ 2 ] described three broad categories of designs providing within-site, between-site, and within- and between-site comparisons of implementation strategies. Within-site designs are discussed in the section on quasi-experimental designs as they generally lack the replicability standard given their focus on one site or unit. It is important to acknowledge that other authors, such as Miller et al. [ 35 ] and Mazzucca et al. [ 36 ], have categorized certain designs somewhat differently than we have here.
As research advances through the translational research pipeline (efficacy to effectiveness to dissemination and implementation), study design tends to shift from valuing internal validity (in efficacy trials) to achieving a greater balance between internal and external validity in effectiveness and implementation research. Much in the same way that inclusion criteria for patients are often relaxed in an effectiveness study of an EBI to better represent real-world populations, implementation research includes delivery systems and clinicians or stakeholders that are representative of typical practices or communities that will ultimately implement an EBI. The high degree of heterogeneity in implementation determinants, barriers, and facilitators associated with diverse settings makes isolating the influence of an implementation strategy challenging and is further complicated by nesting of clinicians within practices, hospitals within healthcare systems, regions within states, etc. Thus, the implementation researcher seeks to ensure that any observed effects are attributable to the implementation strategy/ies being investigated and attempts to balance internal and external validity in the design.
In between-site designs, the EBI is held constant across all units to ensure that observed differences are the result of the implementation strategy and not the EBI. Between-site designs allow investigators to compare processes and output among sites that have different exposures. Most commonly the comparison is between an implementation strategy and implementation as usual. Brown and colleagues emphasize that randomization should be at the “level of implementation” in the between-site designs to avoid cross-contamination [ 2 ]. Ayieko et al. [ 13 ] used a between-site design to examine the effect of enhanced audit and feedback (an implementation strategy) on uptake of pneumonia guidelines by clinical teams within Kenyan county hospitals. They performed restricted randomization, which involved retaining balance between treatment and control arms on key covariates including geographic location and monthly pneumonia admissions. The study used random intercept multilevel models to account for any residual imbalances in performance at baseline so that the findings could be attributed to the audit and feedback, the implementation strategy of interest [ 12 ].
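The restricted randomization described above can be sketched in a few lines: enumerate candidate allocations of sites to the two arms, retain only those balanced on key covariates, and draw one allocation at random from the restricted set. The site names, covariate values, and balance thresholds below are invented for illustration and are not taken from the Ayieko et al. trial.

```python
import itertools
import random

# Hypothetical site data: (site_id, region, monthly admissions).
sites = [
    ("A", "west", 40), ("B", "west", 55), ("C", "west", 42), ("D", "west", 58),
    ("E", "east", 45), ("F", "east", 50), ("G", "east", 48), ("H", "east", 52),
]

def is_balanced(arm1, arm2, max_diff=5):
    """Accept an allocation only if both arms have the same regional mix
    and their mean monthly admissions differ by no more than max_diff."""
    regions1 = sorted(s[1] for s in arm1)
    regions2 = sorted(s[1] for s in arm2)
    mean1 = sum(s[2] for s in arm1) / len(arm1)
    mean2 = sum(s[2] for s in arm2) / len(arm2)
    return regions1 == regions2 and abs(mean1 - mean2) <= max_diff

# Enumerate all equal-sized splits, retain the balanced ones, and then
# randomly select a single allocation from this restricted set.
acceptable = []
for arm1 in itertools.combinations(sites, len(sites) // 2):
    arm2 = tuple(s for s in sites if s not in arm1)
    if is_balanced(arm1, arm2):
        acceptable.append((arm1, arm2))

treatment, control = random.choice(acceptable)
```

Because only balanced splits enter the final random draw, any residual baseline differences are small by construction, which is why the analysis can then focus on the effect of the strategy itself.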
A variant between-site design is the “head-to-head” or “comparative implementation” trial in which the investigator controls two or more strategies, no strategy is implementation as usual, no site receives all strategies, and results are compared [ 2 ]. Finch et al. [ 14 ] examined the effectiveness of two implementation strategies, performance review and facilitated feedback, in increasing the implementation of healthy eating and physical activity-promoting policies and practices in childcare services in a parallel group randomized controlled trial design. At completion of the intervention period, childcare services that received implementation as usual were also offered resources to use the implementation strategies.
When achieving a large sample size is challenging, researchers may consider matched-pair randomized designs, with fewer units of randomization, or other adaptive designs for randomized trials [ 37 ] such as the factorial/fractional factorial [ 38 ] or sequential multiple assignment randomized trial (SMART) design. The SMART design allows for building time-varying adaptive implementation strategies (or stepped-care strategies) based on the order in which components are presented and the additive and combined effects of multiple strategies [ 15 ]. Kilbourne et al. assessed the effectiveness of an adaptive implementation intervention involving three implementation strategies (replicating effective programs [ 39 ], coaching, and facilitation) on cognitive behavioral therapy delivery among schools in a clustered SMART design [ 40 ]. In the first phase, eligible schools were randomized with equal probability to a single strategy vs. the same strategy combined with another implementation strategy. In subsequent phases, schools were re-randomized with different combinations of implementation strategies based on the assessment of whether potential benefit was derived from a combination of strategies. Similar to the SMART design is the full or fractional factorial design in which units are assigned a priori to different combinations of strategies, and main and lower order effects are tested to determine the additive impact of specific strategies and their interactions [ 41 ].
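The a priori assignment in a full factorial design can be made concrete with a small sketch: each unit receives one on/off combination (cell) of the strategies, and cells are filled evenly. The strategy names and unit labels below are hypothetical, not drawn from the cited trials.

```python
import itertools

# Three hypothetical implementation strategies; in a 2^3 full factorial
# design every unit is assigned a priori to one on/off combination.
strategies = ["audit_feedback", "coaching", "facilitation"]

# All 8 cells of the 2 x 2 x 2 design.
cells = list(itertools.product([0, 1], repeat=len(strategies)))

def assign(units):
    """Cycle units through the factorial cells so each combination
    receives (nearly) the same number of units."""
    return {unit: dict(zip(strategies, cells[i % len(cells)]))
            for i, unit in enumerate(units)}

allocation = assign([f"clinic_{i}" for i in range(16)])
```

With 16 units and 8 cells, each combination is delivered to two clinics, which is what allows main effects and interactions of the strategies to be estimated.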
Another between-site design variant, the incomplete block, is useful when two implementation strategies cannot or were not initially intended to be directly compared. The incomplete block design allows for an indirect comparison of the two strategies by drawing from two independent samples of units, one in which sites are randomized to either strategy A or implementation as usual, and the other in which sites are randomized to strategy B or implementation as usual [ 42 ]. The two samples are completely independent and can occur either in parallel or in sequence, and statistical tests are performed for indirect comparison of the impacts of the two strategies “as if” they were directly compared. This requires a single EBI to be implemented and some degree of homogeneity across both of the groups. The incomplete block design is useful when it is not possible to test both strategies in a single study, or when a prior or concurrent study can be leveraged to compare two strategies.
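A minimal sketch of the indirect comparison in an incomplete block design, using hypothetical site-level outcome means from two independent trials of the same EBI: each strategy's effect is estimated against its own concurrent control, and the two effects are then contrasted "as if" the strategies had been compared head-to-head.

```python
# Hypothetical site-level outcome means from two independent samples
# sharing one EBI: trial 1 randomized sites to strategy A vs. usual
# implementation; trial 2 randomized sites to strategy B vs. usual.
trial1 = {"A": [62, 70, 66, 68], "usual": [50, 54, 52, 48]}
trial2 = {"B": [58, 60, 64, 62], "usual": [51, 49, 53, 55]}

def mean(xs):
    return sum(xs) / len(xs)

# Each strategy's effect relative to its own concurrent control...
effect_a = mean(trial1["A"]) - mean(trial1["usual"])
effect_b = mean(trial2["B"]) - mean(trial2["usual"])

# ...and the indirect (anchored) comparison of A vs. B.
indirect_a_vs_b = effect_a - effect_b
```

Anchoring each effect to its own control is what makes the indirect contrast defensible, provided the two samples are reasonably homogeneous, as the design requires.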
Although the examples of between-site designs are randomized at the site- and organization-level, smaller units within each organization such as the team or clinician may also be randomized to an intervention [ 2 ]. Smith, Stormshak, & Kavanagh [ 18 ] present the results of a study in which clinicians were randomized to receive training or not, and their assigned families were randomized to receive the EBI or usual services. Effectiveness (family functioning and child behaviors) and implementation outcomes (adoption and fidelity) were evaluated after the 2-year period of intervention delivery.
This design involves crossovers where units begin in one condition and move to another (within-site element), which is repeated across units (or clusters of units) with staggered crossover points (between-site element). This broad class of designs has been referred to as “roll-out” designs [ 43 ] and dynamic wait-list designs [ 44 ]. We use the term “roll-out” to describe within- and between-site designs. The defining characteristic of roll-out designs is the assignment of all units in the study to the time when the implementation strategy will begin (i.e., the crossover). Assignments within roll-out designs can be random, quasi-random, or non-random. In the context of implementation research, the roll-out design offers three practical and scientific advantages. First, all units in the trial will eventually receive the implementation strategy. Ensuring that all participating units receive the strategy promotes equity and enables all participants to contribute data. Second, the roll-out design allows the research team and the partner organizations to distribute resources required to administer the implementation strategy over time, rather than having to implement in all sites simultaneously as might be done in another type of multisite design. Third, the design allows researchers to account for the effect of unanticipated confounders (e.g., a change in accreditation standards that requires use of the implementation strategy) that can occur during the trial period. For example, if some sites start implementation before an external event occurs, and other sites start afterwards, the impact of the event on the implementation process and resulting outcomes can be measured.
A common roll-out design is the stepped-wedge. The stepped-wedge is a specific design in which measurement of all units begins simultaneously at T0 and units cross over from one condition (e.g., implementation as usual or usual care) to the experimental implementation strategy condition following a series of “steps” at a predetermined interval (steps refer to the crossover). The result is a “wedge” below the steps of implementation as usual that can be compared to the wedge above the step representing the implementation strategy condition. The stepped-wedge is illustrated in Fig. 1 (panel a).
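A stepped-wedge schedule can be generated programmatically. The sketch below randomly orders clusters and assigns each group a successive crossover step, so every cluster begins in the control condition at T0 and has crossed over by the final period. The cluster names and number of steps are illustrative assumptions.

```python
import random

def stepped_wedge(clusters, n_steps, seed=0):
    """Build a stepped-wedge schedule: all clusters are measured from T0,
    and each (randomly ordered) group of clusters crosses over to the
    implementation strategy at a successive step."""
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)  # random assignment of clusters to crossover times
    per_step = len(order) // n_steps
    schedule = {}
    for i, cluster in enumerate(order):
        crossover = 1 + min(i // per_step, n_steps - 1)  # step at which this cluster switches
        # One row of the wedge over n_steps + 1 periods:
        # 0 = implementation as usual, 1 = implementation strategy.
        schedule[cluster] = [int(t >= crossover) for t in range(n_steps + 1)]
    return schedule

wedge = stepped_wedge([f"site_{i}" for i in range(8)], n_steps=4)
```

Printing the rows of `wedge` reproduces the familiar picture: all zeros in the first column, all ones in the last, with the "wedge" of strategy exposure growing one step at a time.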
Roll-out designs: the stepped wedge (panel a) and incomplete wedge (panel b).
A variant of this design is the incomplete (or modified) wedge roll-out design (Fig. 1, panel b). The difference from the stepped-wedge is that pre-implementation outcomes measurement begins immediately prior (e.g., 4–6 months) to the step rather than at T0 [ 16 ]. Incomplete wedge roll-out designs might be preferred to the traditional stepped-wedge design because there is less burden on participating sites to collect data for long periods and it allows researchers the option of staged enrollment in the trial if needed to achieve the full target sample in a way that does not threaten the study protocol. In this latter situation, randomization would occur in as few stages as possible to maintain balance and a variable for stage of enrollment would be included in all analyses to account for any differences in early vs. later enrollees. Last, the unit of randomization can be single units, clusters, or repeated, matched pairs [ 45 ]. Smith and Hasan [ 16 ] provide a case example of an incomplete wedge roll-out design in a trial testing the implementation of the Collaborative Care Model for depression management in primary care practices within a large university health system. In that trial, measurement of implementation began 6 months prior to the crossover to implementing the Collaborative Care Model in each primary care practice in a multi-year roll-out.
Quasi-experimental designs share the goal of experimental designs: assessing the effect of an intervention on outcomes of interest. Unlike experiments, however, quasi-experiments do not randomly assign participants to intervention and usual care groups. This key distinction limits the internal validity of quasi-experimental designs because differences between groups cannot be attributed exclusively to the intervention. However, when randomization is not possible or desirable for assessing the effectiveness of an implementation strategy or other intervention, quasi-experimental designs are appealing. Internal validity is strengthened when techniques of varying strength are used in lieu of randomization, including pre- and post-; interrupted time-series; non-equivalent group; propensity score matching; synthetic control; and regression-discontinuity designs [ 46 ].
In the context of implementation research, quasi-experimental designs fall under Brown and colleagues’ broad category of within-site designs. These single-site or single-unit (practitioner, clinical team, healthcare system, and community) designs are most commonly compared to their own prior performance. The simplest variant of a within-site study is the post design. This design is relevant when a site or unit has not delivered a service before, and thus, has no baseline or pre-implementation strategy data for comparison. The result of such a study is a single “post” implementation outcome that can only be compared to a criterion metric or the results of published studies. In contrast to a post design where data are only available after an implementation strategy or other intervention is introduced, a pre-post design compares outcomes following the introduction of an implementation strategy to the results from care as usual prior to introducing the implementation strategy.
To increase power and internal validity of within-site studies, interrupted time-series designs can be used [ 47 ]. Time-series designs involve multiple observations of the dependent variable (e.g., implementation) before and after the introduction of the implementation strategy, which “disrupts” the time-series data stream. Time-series designs are highly flexible and can involve multiple sites in the multiple baseline and replicated single-case series variants, which increase internal validity through replication of the effect. Examples of interrupted time-series studies exist in implementation research that exemplify their practicality for studying implementation (see Table 1). Limitations of this design in implementation research include the challenge of defining the interruption (i.e., when the implementation began) and that the effects of new implementations are unlikely to be immediate. Therefore, analysis of interrupted time-series in implementation research might favor examining changes in slope between pre-implementation and implementation phases, rather than testing immediate changes in level of the outcome after the interruption.
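The slope-change analysis suggested above can be illustrated with a toy segmented comparison: fit a least-squares slope to the pre-implementation and implementation phases separately and contrast them. The monthly fidelity scores below are synthetic data invented for illustration.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic monthly fidelity scores: essentially flat before the
# strategy is introduced at month 12, then steadily improving.
pre_t = list(range(12))
post_t = list(range(12, 24))
pre_y = [50 + 0.1 * t for t in pre_t]
post_y = [51 + 2.0 * (t - 12) for t in post_t]

# The quantity of interest is the change in slope across the
# interruption, not an immediate jump in level.
slope_change = slope(post_t, post_y) - slope(pre_t, pre_y)
```

Here the pre-phase slope is about 0.1 points per month and the implementation-phase slope about 2.0, so the estimated slope change of roughly 1.9 is what would be attributed to the strategy (in a real analysis, with autocorrelation handled appropriately).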
In observational studies, the investigator does not intervene with study participants but instead describes outcomes of interest and their antecedents in their natural context [ 48 ]. As such, observational studies may be particularly useful for evaluating the real-world applicability of evidence. Observational designs may use approaches to data collection and analysis that are quantitative [ 16 ] (e.g., survey), qualitative [ 49 ] (e.g., semi-structured in-depth interviews), or mixed methods [ 50 ] (e.g., sequential, convergent analysis of quantitative and qualitative results). Quantitative, qualitative, and mixed methods can be especially helpful in observational studies for systematically assessing implementation contexts and processes.
With the goal of more rapidly translating evidence into routine practice, Curran et al. [ 51 , 52 ] proposed methods for blending: 1) design components of experiments intended to test the effectiveness of clinical interventions and 2) approaches to assessing their implementation. Such hybrid designs provide benefits over pursuing these lines of research independently or sequentially, both of which slow the progress of translation. Curran and colleagues state that effectiveness–implementation hybrid designs have a dual, a priori focus on assessing clinical effectiveness and implementation [ 51 , 52 ]. Hybrids focus on both effectiveness and implementation but do not specify a particular trial design. That is, the aforementioned experimental and observational designs can be used for any of the hybrid types. References to hybrid studies in implementation science are provided in Table 1.
Curran et al. describe the conditions under which three different types of hybrid designs should be used, which helps researchers determine the most appropriate type based on whether evidence of effectiveness and implementation exists. Linking clinical effectiveness and implementation research designs may be challenging, as the ideal approaches for each often do not share many design features. Clinical trials typically rely on controlling/ensuring delivery of the clinical intervention (often by using experimental designs) with little attention to implementation processes likely to be relevant to translating the intervention to general practice settings. In contrast, implementation research often focuses on the adoption and uptake of clinical interventions by providers and/or systems of care [ 53 ] often with the assumption of clinical effectiveness demonstrated in previous studies. The three hybrid designs are described below.
Hybrid Type 1 tests a clinical intervention while gathering information on its delivery and/or potential for implementation in a real-world context, with primary emphasis on clinical effectiveness. This type of design advocates process evaluations of delivery/implementation during clinical effectiveness trials to collect information that may be valuable in subsequent implementation research studies, answering questions such as: What potential modifications to the clinical intervention could be made to maximize implementation? What are potential barriers and facilitators to implementing this intervention in the “real world”? Hybrid Type 1 designs provide the opportunity to explore implementation and plan for future implementation.
Hybrid Type 2 simultaneously tests a clinical intervention and an implementation intervention/strategy. In contrast to the Hybrid Type 1 design, the Hybrid Type 2 design puts equal emphasis on assessing both intervention effectiveness and the feasibility and/or potential impact of an implementation strategy. In a Hybrid Type 2 study, an implementation intervention/strategy is simultaneously tested to promote uptake of the clinical intervention under study. Type 2 hybrid designs appear less frequently than the other two types due to the resources required.
Hybrid Type 3 primarily tests an implementation strategy while secondarily collecting data on the clinical intervention and related outcomes. This design can be used when researchers aim to proceed with implementation studies without an existing portfolio of effectiveness studies. Examples of these conditions are when: health systems attempt implementation of a clinical intervention without comprehensive clinical effectiveness data; there is strong indirect efficacy or effectiveness data; and potential risks of the intervention are limited. National priorities (e.g., the opioid epidemic) may also drive implementation before effectiveness data are robust.
Implementation research is, by definition, a systems science in that it simultaneously studies the influence of individuals, organizations, and the environment on implementation [ 54 ]. The field of systems science is devoted to understanding complex behaviors that are both highly variant and strongly dependent on the behaviors of other parts of the system. Systems science is a challenging field to study using traditional clinical trial methods for various reasons, most notably the complexity involved in the many interactions and dynamics of multiple levels, constant change, and interdependencies. Simulation studies offer a solution for understanding the drivers of implementation and the potential effects of implementation strategies [ 55 ]. Modeling typically involves simulating the addition or configuration of one or more specific implementation strategies to determine which path should be taken in the real world, but it can also be used to test the likely effect of implementing one or more EBIs to determine impact for specific populations.
Agent-based modeling (ABM) [ 56 ] and participatory systems dynamics modeling (PSDM) [ 57 ] have both been used in implementation research to model the behavior of systems and determine the impact of moving certain implementation “levers” in the system. ABM is a method for simulating the behavior of complex systems by describing the entities (called “agents”) of a system and the behavioral rules that guide their interactions [ 56 ]. These agents, which can be any element of a system (e.g., clinicians, patients, and stakeholders), interact with each other and the environment to produce emergent, system-level outcomes [ 58 ], many of which are formal implementation outcomes. As ABM produces a mechanistic model, researchers are able to identify the implementation drivers that should be leveraged to most effectively achieve the predicted impacts in practice. Whereas ABM has wide ranging applications for implementation science, PSDM is an example of a method for a specific implementation challenge. Zimmerman et al. [ 26 ] used PSDM to triangulate stakeholder expertise, healthcare data, and modeling simulations to refine an implementation strategy prior to being used in practice. In PSDM, clinic leadership and staff define and evaluate the determinants (e.g., clinician knowledge, implementation leadership, and resources) and mechanisms (e.g., self-efficacy, feasible workflow) that determine local capacity for implementation of an EBI using a visual model. Given local capacity and other factors, simulations predict overall system behavior when the EBI is implemented. The process is iterative and has been used to prepare for large initiatives where testing implementation using standard trial methods was infeasible or undesirable due to the cost and time involved.
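To make the ABM idea concrete, here is a deliberately simple, deterministic Granovetter-style threshold model, an illustrative sketch rather than the method of any cited study: each simulated clinician adopts the EBI once the observed adoption fraction reaches a personal threshold, and system-level adoption emerges from these individual rules. All names and parameters are assumptions.

```python
def simulate_adoption(n_clinicians=50, seed_adopters=5, steps=20):
    """Toy agent-based model of EBI adoption: clinician i adopts once the
    fraction of peers already using the EBI reaches a personal threshold.
    Updates are synchronous: all agents compare their threshold against
    the adoption fraction observed at the start of each step."""
    # Evenly spaced thresholds keep the cascade deterministic here; a
    # richer ABM would draw heterogeneous thresholds and interactions.
    thresholds = [i / n_clinicians for i in range(n_clinicians)]
    adopted = [i < seed_adopters for i in range(n_clinicians)]  # initial adopters
    history = [sum(adopted)]
    for _ in range(steps):
        frac = sum(adopted) / n_clinicians  # adoption level everyone observes
        for i in range(n_clinicians):
            if not adopted[i] and frac >= thresholds[i]:
                adopted[i] = True
        history.append(sum(adopted))
    return history

curve = simulate_adoption()
```

With these parameters exactly one additional clinician adopts per step, producing a slow linear diffusion curve; changing the threshold distribution or the number of initial adopters (the "levers" of the system) changes the emergent adoption trajectory, which is the kind of question such models are used to explore.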
Configurational comparative methods, an umbrella term for methods that include but are not limited to qualitative comparative analysis [ 59 ], combine within-case analysis and logic-based cross-case analysis to identify determinants of outcomes such as implementation. Configurational comparative methods define causal relationships by identifying INUS conditions: those that are an Insufficient but Necessary part of a condition that is itself Unnecessary but Sufficient for the outcome. Configurational comparative methods may be preferable to standard regression analyses often used in quasi-experiments when the influence of an intervention on an outcome is not easily disentangled from how it is implemented or the context in which it is implemented – i.e., complex interventions. Complex interventions often have interdependent components whose unique contributions to a given outcome can be challenging to isolate. Furthermore, complex interventions are characterized by blurry boundaries among the intervention, its implementation, and the context in which it is implemented [ 60 ]. For example, the effectiveness of care plans for cancer survivors in improving care coordination and communication among providers likely depends upon a care plan's content, its delivery, and the functioning of the cancer program in which it is delivered [ 61 ]. Configurational comparative methods facilitate identifying multiple possible combinations of intervention components and implementation and context characteristics that interact to produce outcomes. To date, qualitative comparative analysis is the type of configurational comparative method that has been most frequently applied in implementation research [ 62 ]. To identify determinants of medication adherence, Kahwati et al. [ 24 ] used qualitative comparative analysis to analyze data from 60 studies included in a systematic review. Breuer et al. [ 25 ] used qualitative comparative analysis to identify determinants of mental health services utilization.
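The core move in configurational comparative methods, grouping cases into configurations of conditions and checking which configurations are consistently linked to the outcome, can be sketched with a toy truth table. The conditions (leadership support L, training T) and the case codings below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical cases: each clinic is coded on two binary conditions
# (leadership support L, training T) and a binary implementation outcome.
cases = [
    {"L": 1, "T": 1, "outcome": 1},
    {"L": 1, "T": 0, "outcome": 1},
    {"L": 1, "T": 1, "outcome": 1},
    {"L": 0, "T": 1, "outcome": 0},
    {"L": 0, "T": 0, "outcome": 0},
    {"L": 1, "T": 0, "outcome": 1},
]

def truth_table(cases, conditions=("L", "T")):
    """Group cases by configuration and compute each configuration's
    consistency: the share of its cases showing the outcome."""
    groups = defaultdict(list)
    for case in cases:
        key = tuple(case[c] for c in conditions)
        groups[key].append(case["outcome"])
    return {key: sum(v) / len(v) for key, v in groups.items()}

table = truth_table(cases)
# Configurations with consistency 1.0 are candidate sufficient paths.
sufficient = {k for k, v in table.items() if v == 1.0}
```

In this toy data both configurations with L present are fully consistent with the outcome, so Boolean minimization would reduce them to a single solution, "leadership support alone is sufficient," illustrating how the method identifies combinations of conditions rather than isolated average effects.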
In the early days of the CTSA program, resources allocated to implementation science were most frequently embedded in clinical or effectiveness research studies, and few had robust, standalone implementation science programs [ 63 , 64 ]. As the National Center for Advancing Translational Sciences (NCATS) and other federal and non-federal sources have increased their investment in implementation science capacity, the field has grown dramatically. More CTSAs are developing implementation research programs and incorporating stakeholders more fully in this process, as reflected in the results of the Dolor et al. [ 65 ] environmental scan. Washington University and the University of California at Los Angeles have documented their efforts to engage practice and community partners, offer professional development opportunities, and provide consultations to investigators both in and outside the field of implementation science [ 66 , 67 ]. The CTSA program could take advantage of this momentum in three ways: integrate state-of-the-science implementation methods into its existing body of research; position itself at the forefront of advancing implementation science by collaborating with other NIH institutes that share this goal, such as NCI and NHLBI; and provide training in implementation science.
Many CTSAs have the expertise to consult with their institution's investigators on the potential role of implementation science in their research. Implementation research consultations involve creating awareness and appropriate use of specific study designs and methods that match investigators’ needs and result in meaningful findings for real-world clinical and policy environments. As described by Glasgow and Chambers, these include rapid, adaptive, and convergent methods that consider contextual and systems perspectives and are pragmatic in their approach [ 68 ]. They state that “CTSA grantees, among others, are in a position to lead such a change in perspective and methods, and to evaluate if such changes do in fact result in more rapid, relevant solutions” to pressing public health problems. Through consultation services, CTSAs can encourage the use of implementation science early (e.g., designing for dissemination and implementation [ 69 ]) and often, positioning CTSAs – the hub for translation – to fulfill their mission by reducing the lag from discovery to patient and population benefit.
The centers funded by the CTSA program are able to conduct large-scale implementation research using the multisite U01 mechanism, which requires the involvement of three centers. With the challenges of recruitment, generalizability, and power that are inherent in many implementation trials, the inclusion of three or more CTSAs, ideally representing diversity in region, populations, and healthcare systems, can provide the infrastructure for cutting-edge implementation science. Thus far, there are few examples of this mechanism being used for implementation research. In addition, with the charge of speeding translation of bench and clinical science discoveries to population impact, CTSAs have both the incentive and perspective to conduct implementation research early and consistently in the translational pipeline. As the hybrid design illustrates, there has been a paradigmatic shift away from the sequential translational research pipeline to more innovative methods that reduce the lag between translational steps.
NIH has funded several formal training programs in implementation science, including the Training Institute in Dissemination and Implementation in Health [ 70 ], Implementation Research Institute [ 71 ], and Mentored Training in Dissemination and Implementation Research in Cancer [ 72 ]. These training programs address the need to gain greater clarity around the implementation research designs described in this article, but the demand for training outpaces available resources. CTSAs could provide an avenue for meeting the needs of the field for training in dissemination and implementation science methods. CTSA faculty with expertise in implementation research could offer implementation research training programs for scholars on many levels using the T32, KL2, K12, TL1, R25, and other mechanisms. Chambers and colleagues have recently noted these capacity-building and training opportunities funded by the NIH [ 73 ]. Indeed, given the mission of the CTSA program, they are the ideal setting for implementation research training programs.
The field of implementation science has established methodologies for understanding the context, strategies, and processes needed to translate EBIs into practice. As they mature alongside one another, implementation science and the CTSA program would both benefit greatly from cross-fertilizing their expertise, infrastructure, and aims to advance healthcare in the USA and around the world.
The authors wish to thank Hendricks Brown and Geoffrey Curran who provided input at different stages of developing the ideas presented in this manuscript.
Research reported in this publication was supported, in part, by the National Center for Advancing Translational Sciences, grant UL1TR001422 (Northwestern University), grant UL1TR002489 (UNC Chapel Hill), and grant UL1TR001450 (Medical University of South Carolina); by National Institute on Drug Abuse grant DA027828; and by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis through grant MH080916 from the National Institute of Mental Health and the Department of Veterans Affairs, Health Services Research and Development Service, Quality Enhancement Research Initiative (QUERI) to Enola Proctor. Dr. Birken's effort was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through grant KL2TR002490. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the National Institute for Advancing Translational Science, the National Institute on Drug Abuse, the National Institute of Mental Health, the Department of Veterans Affairs, or any other part of the US Department of Health and Human Services.
The authors have no conflicts of interest to declare.
Equity must be integrated into implementation research and practice. Here are 10 recommendations for putting equitable implementation into action.
By Allison Metz, Beadsie Woo & Audrey Loper Summer 2021
The field of implementation science needs to prioritize evidence-informed interventions that fit the daily lives of the communities in which they will be delivered. Early prevention and intervention efforts have the potential to achieve goals related to service access and outcomes, but without an explicit focus on equity, most fail to do so. Equitable implementation occurs when strong equity components—including explicit attention to the culture, history, values, assets, and needs of the community—are integrated into the principles, strategies, frameworks, and tools of implementation science. While implementation science includes many frameworks, theories, and models, a blueprint for equitable implementation does not yet exist.
Implementation science—the study of the uptake, scale, and sustainability of social programs—has failed to advance strategies to address equity. This collection of articles reviews case studies and articulates lessons for incorporating the knowledge and leadership of marginalized communities into the policies and practices intended to serve them. Sponsored by the Annie E. Casey Foundation.
Articles in this supplement: Trust the People; Youth Leadership in Action; Community Takes the Wheel; Equity in Implementation Science Is Long Overdue; Listening to Black Parents; Faith-Based Organizations as Leaders of Implementation; Community-Defined Evidence as a Framework for Equitable Implementation; and Community-Driven Health Solutions on Chicago's South Side.
This supplement addresses critical aspects of equitable implementation and attempts to define concrete strategies for advancing equity in implementation and in efforts to scale it. The core elements for equitable implementation include building trusting relationships, dismantling power structures, making investments and decisions that advance equity, developing community-defined evidence, making cultural adaptations, and reflecting critically about how current implementation science theories, models, and frameworks do (or do not) advance equity. Case examples described in this supplement demonstrate how specific activities across these core implementation elements can address cultural, systemic, and structural norms that have embedded specific barriers against Black, Indigenous, and other communities of color.
We wanted two types of articles for this supplement: case examples from the field of implementation science that explicitly focus on equity, and case examples from community-driven implementation efforts to inform implementation science in the future. We required that community members serve as co-authors with implementation scientists and funders. The range of perspectives and experiences shared in these articles provides us with an important vantage point for exploring equitable implementation. In response to questions about the process of writing for this supplement, several authors stressed the necessary challenge of balancing the different stakeholder perspectives and voices to write concise and compelling articles.
We attempt to summarize what we’ve learned about equitable implementation over the course of working on this supplement and in our own research. Here are 10 recommendations we have for putting equitable implementation into action.
Implementation relies on collaborative learning, risk-taking, and openness to failure. At the center of this dynamic are vulnerability and trust. Trust engenders faith that partners can rely on each other to deliver on agreements and to understand—and even anticipate—each other's interests and needs. 1 A recommendation for building trusting relationships is:
1. Take the time to build trust through small, frequent interactions. Trust is not built through sweeping gestures, but through everyday interactions where people feel seen and heard. Trust requires long-term commitment, clear and comprehensive communication, and time. As described in the article about the partnership between ArchCity Defenders and Amplify Fund, implementation moves at the speed of trust, and that can take longer than we think. Funders need to provide the time and resources to build trust between themselves, other leaders, and community members and to support trust-building among stakeholders in the community.
Power differentials exist in implementation efforts where specific individuals or groups have greater authority, agency, or influence over others. Implementation strategies should be chosen to address power differentials and position community members at the center of decision-making and implementation activities. Recommendations for dismantling power structures include:
2. Shed the solo leader model of implementation. Implementation science should promote collaborative leadership rather than rely on the charisma and energy of a single individual or organization. When leaders engage with community members and diverse stakeholder groups in meaningful activities that are ongoing, they develop a shared understanding of problems and potential solutions, develop strategies that address community needs and assets, and create a sense of mutual accountability for building the system of supports needed to sustain change and advance equitable outcomes. 2
3. Distribute information and decision-making authority to those whose lives are most affected by the implementation. Empowering community members to make decisions about what is implemented and what strategies are used to carry out the work is critical for implementation to be relevant, successful, and sustainable. By recognizing the knowledge and experience that community stakeholders have and using that expertise to make decisions, public officials, funders, and practitioners create an environment of mutual comfort and respect. The central role that young people play in the development of Youth Thrive illustrates how an organization deliberately changed its work in order to ensure that nothing about young people was done without them having a collaborative role in shaping and delivering the curriculum.
Successful implementation is the product of dozens of shared decisions. In all implementation efforts, opportunities exist for critical decision-making that can either increase or decrease the likelihood that implementation will result in equitable outcomes. Recommendations include:
4. Engage in deliberate and transparent decision-making. Implementation decisions should be conscious, reflective, well thought through, and paced so that unintended consequences can be assessed. By taking the time to reflect, we can make course corrections when decisions yield unexpected results. Decision-making should also be communicated transparently to stakeholders at all levels of implementation.
5. Engage community members in interpreting and using data to support implementation. As described in this supplement, the success and sustainability of implementation are related to the alignment with and deep understanding of the needs of a community as defined by the community members themselves. The Children and Youth Cabinet in Rhode Island developed a resident advisory board and offered community members regular data review sessions. At these sessions, community members shared relevant context for findings and applied their experience to quality improvement.
Equitable implementation starts with how the evidence we seek to implement is developed. Research evidence often demonstrates different levels of effectiveness for different groups of people when replicated or scaled widely, leading to inequitable outcomes. As interventions are developed, it is critical to consider diversity in all its forms—including geographical, racial and ethnic, socioeconomic, cultural, and access—and to do this through the involvement of local communities. A recommendation for developing community-defined evidence is:
6. Co-design interventions with community members. This ensures interventions are relevant, desired by communities, and feasible to implement. Village of Wisdom created workshops by and for Black parents to share their parenting insights. These workshops became the foundation for developing culturally affirming instruction and for formulating tools and strategies that could create environments to encourage the intellectual curiosity and racial identity of Black children. By using the experiences and knowledge of Black parents to develop learning environments that nurture well-being, Village of Wisdom asserts the value of growing up Black and parenting Black children. To develop the Bienvenido Program, staff recruited leaders across the community as cocreators of a mental health needs assessment and the knowledge developed from it. The program was designed in response to Latinx residents’ experiences and the challenges they face in accessing mental health services. In both of these examples, community members’ experiences and perspectives were used to develop interventions that were aligned with community needs as they described them.
In order to reduce disparities in outcomes and advance equitable implementation, interventions and services must reach specific groups of people and demonstrate effectiveness in improving outcomes for them. 3 Adaptations, especially cultural adaptations, must be made for both interventions and for implementation strategies to ensure the reach and relevance needed for equitable implementation. Recommendations for making adaptations include:
7. Seek locally based service delivery platforms. Implementation often relies on traditional institutions (e.g., hospitals) and systems of care (e.g., public health departments) that may limit or even impede access for specific groups of people. Two articles in this supplement discuss the importance of local, faith-based groups for supporting implementation—the parenting program in Travis County, Texas, and the cardiovascular health initiative in Chicago. Both case examples elevate the importance of adapting service delivery mechanisms to trusted community organizations to increase access for and uptake by local residents.
8. Address issues of social justice. Specific groups of people face significant stressors and barriers to care that are rooted in systemic and structural racism. Authors in this supplement emphasize the importance of adaptations that address issues related to these stressors. As noted in the article on culturally adapting a parenting intervention, parents may not be able to access and benefit from a parenting program if they are dealing with immigration policies and fear of deportation. In this case, adaptations to the program would need to include immigration counseling to support equitable implementation.
While implementation science is undergirded by theories, models, and frameworks, notably missing in the field are critical perspectives. The article on critical perspectives seeks to address this gap by discussing the methods used in implementation science and how they might perpetuate or exacerbate inequities. The authors also raise the importance of context and how it is addressed in implementation research and practice.
In the field of implementation science, context includes three levels: macro, organizational, and local. 4 Macro context refers to socio-political and economic forces that either facilitate or hinder implementation efforts. Organizational context refers to organizational culture and climate that influence the behavior of staff. Local context refers to the community activities and relationships that influence implementation and behavior. Implementation strategies at the local or organizational level are limited in their impact on systemic and structural issues. In several articles of the supplement, authors advocate for doing more than describing the macro context. Implementation science needs to develop strategies that can address macro issues that foster or perpetuate disparities in outcomes. Recommendations include:
9. Develop implementation strategies that address the contextual factors that contribute to disparities in outcomes. Advocacy and policy implementation strategies focused on the macro context are more likely to advance equity than implementation strategies at organizational or local levels. Articles in this supplement describe the importance of building the capacity of community leaders to create advocacy networks for policies and funding that will help to sustain local programming. The example from ArchCity Defenders and Amplify Fund describes the critical role of funders in supporting changes to the social, political, and economic environments that grantees operate within to advance equity and promote sustainability. To cite another example, training community members to facilitate local programs and deliver interventions (as demonstrated in the Bienvenido Program and the cardiovascular health project in Chicago) ensures that implementation is tailored to the culture, history, and values of the local community; that interventions are delivered by trusted individuals; and that communities will be able to sustain the interventions.
10. Seek long-term outcomes that advance equity. The selection of interventions should include an assessment of the interventions’ likely influence on outcomes beyond near-term changes. Selecting programs that have the potential of a spillover effect in outcomes is a mechanism for equitable implementation. As described in a case example in this supplement, participants in the Bienvenido Program developed confidence and knowledge about participating in community meetings and engaging with locally elected officials and pursued careers in the mental health field. In the critical perspectives article, authors explained that some parenting programs demonstrate evidence for outcomes beyond strengthening parenting practices, such as reduction in substance abuse or increases in employment and stable housing.
The purpose of implementation science is to integrate research and practice in ways that will improve outcomes for people and communities. However, implementation frameworks, theories, and models have not explicitly focused on how implementation can and should advance equity. The recommendations that emerged across the diverse case examples in this supplement provide a starting point for changing and improving the methods and strategies used in implementation to ensure that equity is at the center of the work. As Ana A. Baumann and Pamela Denise Long argue in “Equity in Implementation Science Is Long Overdue,” implementation scientists must engage in critical reflection on the gaps between the intentions and the results of their work. We hope this supplement sparks reflection in funders, researchers, and practitioners involved in supporting implementation efforts with the hope of making people’s lives better and inspires their resolve and courage to shift toward learning from those who have the greatest stake in successful and equitable outcomes.
Read more stories by Allison Metz , Beadsie Woo & Audrey Loper .
The Society for Implementation Research Collaboration is dedicated to bringing together researchers and multi-level stakeholders to improve the implementation of evidence-based psychosocial interventions. To achieve this mission, SIRC, in partnership with SAGE Publications, launched Implementation Research and Practice in May of 2020. Click here to access the journal website and submission portal .
Please email our co-founding editors-in-chief — Cara Lewis and Sonja Schoenwald — at [email protected] with any questions about the journal or submission interest.
Implementation Research and Practice is an international, peer-reviewed, open access, online-only journal providing rapid publication of interdisciplinary research that advances the implementation, in diverse contexts, of effective approaches to assess, prevent, and treat mental health, substance use, or other addictive behaviors, or their co-occurrence, in the general population or among those at risk for or suffering from these disorders.
The IRP’s Diversity, Equity, and Inclusion (DEI) Advisory Group helps guide IRP’s efforts to ensure the content and editorial practices and policies of the journal embody diversity, equity, and inclusion.
Current DEI Advisory Group Members
William Martinez, PhD, Associate Professor of Psychiatry and Behavioral Sciences, UCSF – Dr. Martinez's overall clinical, administrative, and research aims are concentrated on eliminating behavioral health inequities among racially and ethnically minoritized youths, with a specific focus on Latinx and immigrant populations. Dr. Martinez takes a socio-ecological approach to understanding these concerns across three areas of inquiry: 1) the impact of social determinants on behavioral health disparities; 2) implementation and dissemination of evidence-based prevention and intervention programming; and 3) policy and advocacy focused on improving conditions for immigrant youths. He is excited to be a member of the IRP's DEI Advisory Group, which is much aligned with his own goals of increasing the visibility of implementation scientists and practitioners of traditionally underrepresented backgrounds.
Stephanie Yu, M.A. (she/her), Clinical Psychology PhD candidate, UCLA – Stephanie is passionate about mental health equity and community-engaged research aiming to reduce mental health disparities for racially and ethnically marginalized groups. Her research focuses on culturally responsive adaptation and implementation of evidence-based practices in public systems of care serving marginalized communities through community partnership. She is also interested in how individual and systemic conditions, such as those stemming from racism and discrimination, can be addressed to improve well-being outcomes for marginalized communities.
By Sue Coyle, MSW, Social Work Today
Our DSW program focuses on using implementation science to advance social work practice.
Implementation science focuses on the thoughtful and strategic process of implementing an intervention. The emphasis is on evaluating both the intervention itself and the implementation of the intervention. This includes assessing the strategies used to increase the uptake and use of the intervention.
This is an exciting time to think about using implementation science in social work practice! There are a multitude of interventions that are effective in detecting, preventing, and treating conditions that affect the health and well-being of those served by social workers. However, we also have gaps between what we know works from the science and what social workers actually do in practice. These gaps have contributed to racial, ethnic, socioeconomic, and other disparities in health and health-related outcomes. By using implementation science, we can help reduce racial and ethnic disparities in mental health care in the United States and improve the lives of those we serve.
Example one: assessing strategies to implementing a virtual reality program in respite care.
A great example is the Expanding Worlds: Assessing Strategies to Implementing a Virtual Reality Program in Respite Care project.
Another example involves the use of the Components for Enhancing Clinical Engagement and Reducing Trauma (CE-CERT) model to help reduce compassion fatigue and burnout among trauma therapists and supervisors in a child welfare setting.
Bridging the gap between research and practice is a critical frontier for the future of social work. Integrating implementation science into social work can advance our profession's effort to bring research and practice closer together. Implementation science examines the factors, processes, and strategies that influence the uptake, use, and sustainability of empirically supported interventions, practice innovations, and social policies in routine practice settings. The aims of this paper are to describe the key characteristics of implementation science, illustrate how implementation science matters to social work by describing several contributions this field can make to reducing racial and ethnic disparities in mental health care, and outline a training agenda to help integrate implementation science in graduate-level social work programs.
Keywords: Implementation science; racial and ethnic disparities in mental health care; social work education; social work research.
(3 reviews)
Matt DeCarlo, La Salle University
Cory Cummings, Nazareth University
Kate Agnelli, Virginia Commonwealth University
Copyright Year: 2021
ISBN 13: 9781949373219
Publisher: Open Social Work Education
Language: English
Reviewed by Erin Boyce, Full Time Faculty, Metropolitan State University of Denver on 6/3/24
Comprehensiveness rating: 5
This book provides a strong comprehensive overview of each step in the research & evaluation process for students, clearly outlining each step with clarity and direction.
Content Accuracy rating: 5
Content in this text is accurate, needing no clarification or added information, and is presented in an unbiased manner.
Relevance/Longevity rating: 5
The relevance of this text is its greatest strength. It is one of the strongest research texts I've encountered, and while change always comes, this text will survive new iterations of research, needing only minimal and straightforward updates.
Clarity rating: 5
As a research text, this is extremely user friendly. It is easy to read, direct, and does not interfere with student understanding. Students come away with a good understanding of the concepts from this text, and many continue to use it beyond the classroom.
Consistency rating: 5
This text is consistent with research methods and frameworks and stands alone among social work research texts as the most accessible due to its status as an OER and as a social work textbook.
Modularity rating: 5
This text is easily divisible into smaller readings; it works great for courses in which assignments are scaffolded to move students through the research process.
Organization/Structure/Flow rating: 5
This text is organized to walk the student through the research process from start to finish, and is easily adjusted for different teaching styles.
Interface rating: 5
This text has no significant interface issues; the readings, links, and images are easily accessible and are presented in a way that does not interfere with student learning.
Grammatical Errors rating: 5
This text is well edited and formatted.
Cultural Relevance rating: 5
This text is culturally relevant, addresses issues of cultural relevance to social work, and highlights the role of social work values within the realm of social work research.
This is one of the best research texts I've encountered in over a decade of teaching. It is easily digested, presents information in a direct and understandable way, and is one of the best texts for those teaching graduate-level research for social workers. It is an inclusive text that honors the multiple levels of knowledge that our students come to us with, which helps set it apart. And the commitment throughout the text to social work values and ethics is critical for today's social worker.
Reviewed by Laura Montero, Full-time Lecturer and Course Lead, Metropolitan State University of Denver on 12/23/23
Comprehensiveness rating: 4
Graduate Research Methods in Social Work by DeCarlo, et al., is a comprehensive and well-structured guide that serves as an invaluable resource for graduate students delving into the intricate world of social work research. The book is divided into five distinct parts, each carefully curated to provide a step-by-step approach to mastering research methods in the field. Topics covered include an intro to basic research concepts, conceptualization, quantitative & qualitative approaches, as well as research in practice. At 800+ pages, however, the text could be received by students as a bit overwhelming.
Content appears consistent and reliable when compared to similar textbooks on this topic.
The book's well-structured content begins with fundamental concepts, such as the scientific method and evidence-based practice, guiding readers through the initiation of research projects with attention to ethical considerations. It seamlessly transitions to detailed explorations of both quantitative and qualitative methods, covering topics like sampling, measurement, survey design, and various qualitative data collection approaches. Throughout, the authors emphasize ethical responsibilities, cultural respectfulness, and critical thinking. These are crucial concepts we cover in social work and I was pleased to see these being integrated throughout.
The level of the language used is appropriate for graduate-level study.
Book appears to be consistent in the tone and terminology used.
Modularity rating: 4
The images and videos included help to break up large text blocks.
Topics covered are well-organized and comprehensive. I appreciate the thorough preamble the authors include to situate the role of the social worker within a research context.
Interface rating: 4
When downloaded as a PDF, the book's content does not begin until after page 30, so students may find it difficult to scroll that far to reach the content they are searching for. Also, making the table of contents clickable would help in navigating this very long textbook.
I did not find any grammatical errors or typos in the pages reviewed.
I appreciate the efforts made to integrate diverse perspectives, voices, and images into the text. The discussion around ethics and cultural considerations in research was nuanced and comprehensive as well.
Overall, the content of the book aligns with established principles of social work research, providing accurate and up-to-date information in a format that is accessible to graduate students and educators in the field.
Reviewed by Elisa Maroney, Professor, Western Oregon University on 1/2/22
With well over 800 pages, this text is beyond comprehensive!
I perused the entire text, but my focus was on "Part 4: Using qualitative methods." This section seems accurate.
As mentioned above, my primary focus was on the qualitative methods section. This section is relevant to the students I teach in interpreting studies (not a social sciences discipline).
This book is well-written and clear.
Navigating this text is easy because the formatting is consistent.
My favorite part of this text is that it can be easily customized, so that I can use the sections on qualitative methods.
The text is well-organized, and it is easy to find and link to related sections in the book.
There are no distracting or confusing features. The book is long; being able to customize makes it easier to navigate.
I did not notice grammatical errors.
The authors offer resources for Afrocentricity for social work practice (among others, including those related to Feminist and Queer methodologies). These are relevant to the field of interpreting studies.
I look forward to adopting this text in my qualitative methods course for graduate students in interpreting studies.
About the book.
We designed our book to help graduate social work students through every step of the research process, from conceptualization to dissemination. Our textbook centers cultural humility, information literacy, pragmatism, and an equal emphasis on quantitative and qualitative methods. It includes extensive content on literature reviews, cultural bias and respectfulness, and qualitative methods, in contrast to traditionally used commercial textbooks in social work research.
Our author team spans academic, public, and nonprofit social work research. We love research, and we endeavored through our book to make research more engaging, less painful, and easier to understand. Exercises throughout the textbook direct students to apply the content as they read to an original research project. By breaking the process down step by step, writing in approachable language, and using stories from our lives, practice, and research experience, our textbook helps professors overcome students' research methods anxiety and antipathy.
If you decide to adopt our resource, we ask that you complete this short Adopter’s Survey that helps us keep track of our community impact. You can also contact [email protected] for a student workbook, homework assignments, slideshows, a draft bank of quiz questions, and a course calendar.
Matt DeCarlo , PhD, MSW is an assistant professor in the Department of Social Work at La Salle University. He is the co-founder of Open Social Work (formerly Open Social Work Education), a collaborative project focusing on open education, open science, and open access in social work and higher education. His first open textbook, Scientific Inquiry in Social Work, was the first developed for social work education, and is now in use in over 60 campuses, mostly in the United States. He is a former OER Research Fellow with the OpenEd Group. Prior to his work in OER, Dr. DeCarlo received his PhD from Virginia Commonwealth University and has published on disability policy.
Cory Cummings , Ph.D., LCSW is an assistant professor in the Department of Social Work at Nazareth University. He has practice experience in community mental health, including clinical practice and administration. In addition, Dr. Cummings has volunteered at safety net mental health services agencies and provided support services for individuals and families affected by HIV. In his current position, Dr. Cummings teaches in the BSW program and MSW programs; specifically in the Clinical Practice with Children and Families concentration. Courses that he teaches include research, social work practice, and clinical field seminar. His scholarship focuses on promoting health equity for individuals experiencing symptoms of severe mental illness and improving opportunities to increase quality of life. Dr. Cummings received his PhD from Virginia Commonwealth University.
Kate Agnelli , MSW, is an adjunct professor at VCU’s School of Social Work, teaching masters-level classes on research methods, public policy, and social justice. She also works as a senior legislative analyst with the Joint Legislative Audit and Review Commission (JLARC), a policy research organization reporting to the Virginia General Assembly. Before working for JLARC, Ms. Agnelli worked for several years in government and nonprofit research and program evaluation. In addition, she has several publications in peer-reviewed journals, has presented at national social work conferences, and has served as a reviewer for Social Work Education. She received her MSW from Virginia Commonwealth University.
The School’s research endeavors aim to improve the public’s health in the U.S. and throughout the world.
Systematic and rigorous inquiry allows us to discover the fundamental mechanisms and causes of disease and disparities. At our Office of Research (research@BSPH), we translate that knowledge to develop, evaluate, and disseminate treatment and prevention strategies and inform public health practice. Research along this entire spectrum represents a fundamental mission of the Johns Hopkins Bloomberg School of Public Health.
From laboratories at Baltimore's Wolfe Street building, to Bangladesh maternity wards in densely packed neighborhoods, to field studies in rural Botswana, Bloomberg School faculty lead research that directly addresses the most critical public health issues worldwide. Research spans from molecules to societies and relies on methodologies as diverse as bench science and epidemiology. That research is translated into impact: discovering ways to eliminate malaria, increase healthy behavior, reduce the toll of chronic disease, improve the health of mothers and infants, and change the biology of aging.
Our 10 departments offer faculty and students the flexibility to focus on a variety of public health disciplines
Our 80+ Centers and Institutes provide a unique combination of breadth and depth, and rich opportunities for collaboration
The Institutional Review Board (IRB) oversees two IRBs registered with the U.S. Office of Human Research Protections, IRB X and IRB FC, which meet weekly to review human subjects research applications for Bloomberg School faculty and students
Generosity helps our community think outside the traditional boundaries of public health, working across disciplines and industries, to translate research into innovative health interventions and practices
The research@BSPH ecosystem aims to foster an interdependent sense of community among faculty researchers, their research teams, administration, and staff that leverages knowledge and develops shared responses to challenges. The ultimate goal is to work collectively to reduce administrative and bureaucratic barriers related to conducting experiments, recruiting participants, analyzing data, hiring staff, and more, so that faculty can focus on their core academic pursuits.
In order to provide extensive guidance, infrastructure, and support in pursuit of its research mission, research@BSPH employs three core areas: strategy and development, implementation and impact, and integrity and oversight. Our exceptional research teams comprised of faculty, postdoctoral fellows, students, and committed staff are united in our collaborative, collegial, and entrepreneurial approach to problem solving. T he Bloomberg School ensures that our research is accomplished according to the highest ethical standards and complies with all regulatory requirements. In addition to our institutional review board (IRB) which provides oversight for human subjects research, basic science studies employee techniques to ensure the reproducibility of research.
Four bloomberg school faculty elected to national academy of medicine.
Considered one of the highest honors in the fields of health and medicine, NAM membership recognizes outstanding professional achievements and commitment to service.
Lerner center for public health advocacy announces inaugural sommer klag advocacy impact award winners.
Bloomberg School faculty Nadia Akseer and Cass Crifasi selected winners at Advocacy Impact Awards Pitch Competition
At the Garland School, we believe social work is about service and justice, healing and restoration, and the dignity of each individual. Through innovative academics and experiential learning opportunities, both in-person and online , we strive to train and equip social work professionals to support the needs of clients through the ethical integration of faith and practice. Students at the GSSW are challenged by expert faculty members, rigorous curriculum and outstanding peer cohorts in an environment that allows every student to select a community that will allow them to thrive as they complete their coursework, whether that be in-person or online, clinical or community practice.
Gabby White, MSW alumna— “I chose social work because helping people from a social justice standpoint drew me in, and I chose Baylor Social Work because the integration of faith and practice showed me that I could bring all parts of me to the profession.”
Social work—particularly social work education at Baylor—recognizes diverse expressions of faith and seeks to honor the role of spirituality as part of what wholistically shapes a person, their family and community. Perhaps you are interested in social work because of the way your faith has motivated you or others? As part of our program, students learn about the influence of these beliefs and values in the profession. We call this our 10th competency. The Council on Social Work Education requires nine competencies as part of our accreditation, but the GSSW has a 10th.
GSSW faculty are renowned, expert leaders in the social work profession, have a passion for sharing knowledge with the students of the GSSW, and lead by example in and out of the classroom. They model servant leadership and offer students a unique opportunity through mentorship. Faculty mentors are resources for students as they navigate the MSW program. Mentors often continue beyond graduation to develop professional relationships as colleagues with their mentees. Students also have the opportunity to partner with faculty on research through specialization projects, throughout their MSW experience.
The GSSW student-to-professor ratio is 10:1, and the average class size is 15 students, which translates into an environment of active, meaningful learning. The size and style of classes offered at the GSSW, both in-person and online, give students the opportunity to connect with classmates and professors on a deeper level. Just like in the profession and across the world, professors and students come from all walks of life and bring their own unique perspectives into the learning environment. Our classes allow for deep engagement, lively discussion and true connection with each other.
The GSSW partners with departments at the university to provide dual degree options that allow students to connect their passions. Dual degrees are offered at our Waco campus in partnership with Baylor’s Hankamer School of Business and George W. Truett Theological Seminary. Degrees offered include: MSW/MBA, MSW/MDiv, MSW/MTS . Learn more here.
811 Washington Ave. Waco, TX 76701
BMC Health Services Research, volume 24, Article number: 744 (2024)
Implementation science frameworks situate intervention implementation and sustainment within the context of the implementing organization and system. Aspects of organizational context such as leadership have been defined and measured largely within US health care settings characterized by decentralization and individual autonomy. The relevance of these constructs in other settings may be limited by differences like collectivist orientation, resource constraints, and hierarchical power structures. We aimed to adapt measures of organizational context in South African primary care clinics.
We convened a panel of South African experts in social science and HIV care delivery and presented implementation domains informed by existing frameworks and prior work in South Africa. Based on panel input, we selected contextual domains and adapted candidate items. We conducted cognitive interviews with 25 providers in KwaZulu-Natal Province to refine measures. We then conducted a cross-sectional survey of 16 clinics with 5–20 providers per clinic (N = 186). We assessed reliability using Cronbach’s alpha and calculated interrater agreement (a_wg) and intraclass correlation coefficient (ICC) at the clinic level. Within clinics with moderate agreement, we calculated correlation of clinic-level measures with each other and with hypothesized predictors – staff continuity and infrastructure – and a clinical outcome, patient retention on antiretroviral therapy.
Panelists emphasized contextual factors; we therefore focused on elements of clinic leadership, stress, cohesion, and collective problem solving (critical consciousness). Cognitive interviews confirmed salience of the domains and improved item clarity. After excluding items related to leaders’ coordination abilities due to missingness and low agreement, all other scales demonstrated individual-level reliability and at least moderate interrater agreement in most facilities. ICC was low for most leadership measures and moderate for others. Measures tended to correlate within facility, and higher stress was significantly correlated with lower staff continuity. Organizational context was generally more positively rated in facilities that showed consistent agreement.
As theorized, organizational context is important in understanding program implementation within the South African health system. Most adapted measures show good reliability at individual and clinic levels. Additional revision of existing frameworks to suit this context and further testing in high and low performing clinics is warranted.
Despite the large investment in research to identify clinical and behavioral interventions to improve HIV prevention and care, many efficacious programs never get incorporated into policy or scaled into clinical settings; others fail when put into practice [ 1 , 2 ]. In contexts such as South Africa, with 7.7 million people living with HIV (PLHIV), 4.8 million on antiretroviral therapy (ART) [ 3 ], and an aging population of PLHIV who have increasingly complex care needs [ 4 , 5 ], scaling interventions that ensure effective, evidence-based care is a priority [ 6 ]. To this end, the field of implementation science has begun to shed light on why some efficacious interventions have not translated into programmatic successes, noting factors that must be addressed within the clinical environment to improve implementation and sustainment [ 1 , 7 , 8 ].
Implementation science frameworks situate interventions within the organizational context of a health care setting. The Exploration, Preparation, Implementation, Sustainment (EPIS) conceptual framework includes absorptive capacity, culture, climate, and leadership as elements of the context that shape exploration of interventions [ 9 , 10 ], while the Consolidated Framework for Implementation Research (CFIR) identifies domains such as culture, implementation climate, and readiness for implementation as key factors at the organizational or team level [ 11 , 12 ]. Recent updates to CFIR have focused on clarifying these domains as antecedents on the pathway to implementation outcomes [ 12 ]. The theory of Organizational Readiness for Change similarly identifies contextual factors such as the culture and climate of the organization that help to shape readiness for a specific change, which in turn affects implementation effectiveness [ 13 ]. Researchers have drawn on these definitions in efforts to better measure organizational characteristics: a 2017 systematic review found 76 articles attempting to measure organizational context, a majority of which were based in the United States; the authors recommended greater efforts to use mixed-methods research to develop and test measures in a range of settings [ 14 ].
Measures developed within the US health care system reflect the decentralized nature of the system, the national culture of individualism, and high levels of clinical autonomy that distinguish the US health care system from that of many other nations with more hierarchical, top-down power structures. The lack of validated measures of organizational context in centralized health systems, particularly in low-resource countries where primary care clinics are overextended, contributes to a clear gap in understanding which contextual factors impact successful program implementation and how these factors can be addressed [ 15 , 16 ] . Research in South Africa from our team and others has found that program implementation can be heavily influenced by clinic leadership, particularly leaders’ problem-solving skills, in addition to provider teamwork and clinic environment such as material and human resources [ 17 , 18 , 19 , 20 ]. Qualitative assessment across multiple levels of the health system in KwaZulu-Natal Province identified perceived benefits of a particular program as well as broader resource availability and clear communication as factors shaping integration of HIV programming into general care [ 21 ]. Recent research on implementation of maternal health quality improvement underscored the importance of leadership, teamwork, and provider motivation in maintaining consistent implementation of interventions, particularly in the face of external factors such as the COVID-19 pandemic, budget cuts, and labor actions [ 20 ].
In this study, we aimed to adapt implementation science frameworks to the context of primary care in South Africa, to develop and test measures of organizational context based on the adapted framework, and to assess if the resulting measures demonstrated associations with hypothesized determinants and outcomes of organizational context.
We report this study, which included formative qualitative work and a cross-sectional survey, based on recommendations for scale development and testing studies [ 22 ] and following STROBE guidelines for observational research (Additional file 1).
This study took place in uMgungundlovu District in KwaZulu-Natal Province, South Africa. The district includes the capital city of Pietermaritzburg but is otherwise largely rural. Adult HIV prevalence is estimated at 30%, and the 57 Department of Health (DOH) facilities provide ART for approximately 140,000 individuals [ 23 , 24 ]. The U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) supports the national HIV response in this district by funding implementing partner organizations that second staff to DOH facilities in support of HIV care and that provide specific services like clinical mentoring or client tracing following disengagement from care.
We followed an iterative approach to define priority domains and identify potential items. We first synthesized implementation science literature and existing research in South Africa into a conceptual model delineating organizational context as an overall determinant of program-specific domains and ultimately organizational readiness for change (Fig. 1 ). We defined key elements of organizational context as service readiness (the resources available for service provision), stress and workload, leadership (including leadership practice, communication, and direction on roles and responsibilities), learning climate as a space for shared trial and evaluation, and team cohesion (trust and shared values). Primary care clinics in this area typically operate with an Operational Manager (OM) who is also a professional nurse, a deputy manager, and a small number of nurses who rotate through clinical services; we initially conceptualized the full clinic as the organizational unit, the OM and deputy as leaders, and the nursing and support staff as the relevant team. We used literature searches, investigator knowledge and networks, and the Community Advisory Board of the Human Sciences Research Council to identify participants for an Expert Panel composed of health care providers, program managers, social scientists, and DOH representatives all familiar with the provision of HIV care in the province. We convened the panel in a hybrid format and presented the initial conceptual framework for their review. 
The Expert Panel affirmed the primacy of organizational context in shaping program implementation in this setting, and noted major aspects to consider: that program implementation and sustainment is driven by top-down directives, that the context of overburdened facilities and resource constraints shapes program uptake, that leadership communication and management of complex relationships within and beyond facilities is critical, and that providers face high demands and shifting roles that can make teamwork particularly important. Panelists noted that the concept of a learning climate informed by a quality improvement feedback loop was not present within primary facilities; they highlighted leadership’s role in monitoring performance as more salient, complemented by providers’ active interest in solving implementation problems. Panelists concurred with conceptualizing clinics as an organizational unit with distinct leaders and a single team of providers. Following the Panel discussion, we returned to the literature to clarify candidate constructs and identify potential items. We also mapped service readiness to specific human and material resources that inform organizational context for the full clinic.
Fig. 1 Original conceptual model of factors shaping program implementation
We identified 7 latent constructs for measurement, including 4 related to leadership; Table 1 defines each construct and the accompanying measure. Scales on leadership engagement, feedback and monitoring, and stress addressed our revised conceptual framework, and each was based on an existing measure validated in other contexts [ 25 , 26 ]. To measure teamwork, we adapted a measure of cohesion we had previously validated in community settings in South Africa [ 27 ]. In place of directly assessing “learning climate”, we adapted a measure on critical consciousness, which we had originally developed to capture community empowerment and learning culture [ 27 , 28 ], to address problem solving within the facility as described by the Expert Panel. Two scales were not direct adaptations. Based on other research in South African clinics [ 17 ] and Expert Panel input on the importance of leaders in optimizing implementation in light of resource constraints, we drew from and expanded on existing items on change efficacy [ 29 , 30 ] to create a leadership-focused scale on resource mobilization and problem solving. To capture coordination in this setting, we developed new items addressing the specifics of coordinating facility staff, implementing partners, and community leaders. The Expert Panel reviewed and revised items for clarity; we reconvened a subset of the panel to finalize items collectively. Items were translated into isiZulu and back translated to English by co-author MM.
We conducted cognitive interviews with 25 participants in 2 stages from June to September 2021 for content validation of the proposed items, allowing time to revise between stages. We sampled 19 providers (primarily nurses) and 6 operational managers (OMs; clinic leaders) from 5 facilities and tested 3 to 4 scales per interview. We selected nurses and OMs for cognitive interviews based on their central responsibility for delivering care and hence capacity to answer all proposed items, including items on clinical care delivery not included in this analysis. We used the think-aloud method and probed respondents on clarity of the item and response choices, thought processes leading to their response, and ease of answering. We asked respondents to identify overlapping or redundant items. Interviews were conducted in English or isiZulu. We recorded interviews and transcribed and translated them for analysis. We conducted rapid analysis to assess key terms such as “clinic leaders” and “clinic staff,” and we iteratively revised items and scales for clarity and efficiency. For instance, we revised an item on whether “leaders make use of all available staff to implement clinic programs” to whether “leaders make the most of the staff available” based on responses that not all staff are needed or appropriate for a given program. Interviewees provided consistent responses in conceptualizing their clinics as a single unit, identifying the OM and deputy as the relevant leaders, and defining nursing and support staff personnel as a team. The final instrument included 4 or 5 items per scale with 4 response options for each statement (see Additional file 2, Table S1 for all items).
Greater agreement indicated that respondents perceived more of that construct within the clinic. All constructs except stress represented positive aspects of organizational culture, and most of their items had positive stems; items with negative stems were kept as written for inter-item assessment and reverse-scored for subsequent analyses.
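As a minimal sketch (not the study's code; item names are hypothetical), reverse-scoring negative-stem items on the 4-point agreement scale so that higher scores always indicate more of the construct can be written as:

```python
# Reverse-scoring sketch for a 1-4 Likert-type agreement scale.
# Item names below are illustrative, not the study's actual items.
SCALE_MIN, SCALE_MAX = 1, 4

def reverse_score(response: int) -> int:
    """Map 1<->4 and 2<->3 on the 1-4 scale."""
    return SCALE_MIN + SCALE_MAX - response

def score_scale(responses: dict, negative_items: set) -> float:
    """Average a respondent's item scores, reversing negative-stem items."""
    scored = [
        reverse_score(v) if item in negative_items else v
        for item, v in responses.items()
    ]
    return sum(scored) / len(scored)
```

The same transformation applies to any bounded Likert-type scale by adjusting the endpoints.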
To test the proposed measures, we calculated a minimum sample size of 12 providers each within 14 facilities (168 respondents) to provide > 80% power to detect a correlation of at least 0.57 with alpha of 0.05, one-sided. To ensure this sample size while including facilities with fewer than 12 providers total, we sampled 16 facilities using random selection among facilities with at least 100 patients on ART based on provincial TIER.net data, stratified by ART patient population size (< 2000, > 2000) to account for possible differences between smaller clinics and larger clinics with potentially more complex structures. Facilities participating in cognitive interviews were ineligible. Within facility, we selected all OMs and used stratified random sampling to select up to 8 higher-level nurses (Professional Nurses, Certified Nursing Professionals, Registered Nurses) and up to 8 auxiliary nurses and other patient-facing providers engaged in HIV care (Enrolled Nurses or Enrolled Nursing Assistants, Nutritionists, Pharmacists, Nutrition or Pharmacy Assistants, Lay Counselors), as the Expert Panel and cognitive interviews confirmed that these personnel were considered part of the provider team. Selection was conducted by ordering providers at random within strata to provide replacement respondents when possible in case selected providers were not available. We administered surveys via Research Electronic Data Capture (REDCap) to capture basic demographics on providers and their roles, the organizational context measures, and additional measures on conduct of specific programs analyzed elsewhere. Data collection took place from October 2021 to March 2022.
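The within-stratum selection scheme described above, where providers are randomly ordered so that those beyond the cap serve as ordered replacements, could be sketched as follows (a hypothetical helper of our own, not the study's actual procedure):

```python
import random

def sample_stratum(providers: list, cap: int, seed=None):
    """Randomly order one stratum's providers; the first `cap` form the
    primary sample and the remainder serve as ordered replacements."""
    rng = random.Random(seed)
    ordered = list(providers)
    rng.shuffle(ordered)
    return ordered[:cap], ordered[cap:]
```

Drawing replacements in the pre-shuffled order keeps substitution random rather than at the surveyor's discretion.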
National COVID-19 restrictions, including limitations on clinic scheduling and staff meetings, were reduced to alert level 1 (the lowest level) on October 1, 2021 and remained at that level throughout data collection [ 31 ]; routine clinical practices continued to be affected during the study period by considerations such as diverting staff for vaccination campaigns.
We conducted a concurrent facility audit of human and material resources using direct observation and a survey with the OM or a proxy if there was no OM available. From the audit, we calculated a summary score based on the presence of key infrastructure; indicators were adapted from the World Health Organization Service Availability and Readiness Assessment and are listed in Additional file 2, Table S2 [ 32 ]. We calculated staff continuity based on the number of clinical staff and number of clinical positions with turnover in the past year. We also extracted routine program data on HIV patient outcomes from the district and national reporting systems. We calculated aggregate retention on ART per facility as the number of patients remaining on ART as of March 2022 out of the number reported on ART in March 2021 or newly starting ART between March 2021 and February 2022.
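With the numerator and denominator defined as above, the facility-level retention calculation reduces to a simple ratio; this illustrative function (names are ours, not from the study's codebase) makes the arithmetic explicit:

```python
def art_retention(on_art_end: int, on_art_baseline: int, new_starts: int) -> float:
    """Aggregate retention on ART for one facility: patients on ART at the
    end of the period, out of baseline patients plus new starts during it."""
    denominator = on_art_baseline + new_starts
    if denominator == 0:
        raise ValueError("no patients in denominator")
    # e.g., 1900 retained out of 1800 baseline + 200 new starts -> 0.95
    return on_art_end / denominator
```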
Analysis proceeded in four stages. First, we conducted descriptive analysis of facilities, providers, and items. We used proportions and medians to summarize measures from the facility audit and provider surveys. We assessed incomplete responses and straightline (invariant) responses by provider and measure. Second, we conducted agreement and reliability checks at individual and facility levels. We quantified inter-item agreement with Cronbach’s alpha among complete responses. We calculated the a_wg(j) statistic as a measure of agreement within facility; this calculation assumes a set of parallel items answered by multiple raters and can be interpreted similarly to Cohen’s kappa. We report mean a_wg(j) across facilities and the number of facilities achieving at least moderate agreement (a_wg(j) ≥ 0.50) per measure [ 33 ]. We calculated mean respondent score per measure and used the intraclass correlation coefficient (ICC) to quantify facility-level agreement and reliability among all participants and again limited to professional nurses. Third, we conducted convergent validation by computing the Spearman rank correlation coefficient among all measures within facility and testing correlation with human and material resources hypothesized to shape organizational context – infrastructure and staff continuity – as well as with retention of patients on ART as a clinical outcome. All validity analyses were limited to facilities demonstrating moderate agreement on the relevant measure. We assessed correlations against a pre-specified threshold of rho = 0.30 (moderate effects [ 34 ]) and reported statistical significance. Fourth, as an exploratory analysis, we compared respondents’ average scale scores between facilities with a_wg(j) ≥ 0.50 (moderate agreement) on all measures versus facilities where moderate agreement was obtained on fewer measures; we used linear generalized estimating equation (GEE) models accounting for clustering within facility.
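The study ran these analyses in Stata; as a standalone illustration of the reliability statistics named above, a Python sketch might look like the following. The a_wg(1) form follows Brown and Hauenstein [ 33 ] (averaging item-level values yields a_wg(j)), the ICC uses the one-way model, and all function names are our own.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Inter-item agreement for an (n_respondents x k_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def awg_item(ratings, low=1, high=4) -> float:
    """a_wg(1) for one item: 1 minus twice the observed variance over the
    maximum variance attainable given the observed mean."""
    x = np.asarray(ratings, dtype=float)
    n, m, s2 = x.size, x.mean(), x.var(ddof=1)
    if s2 == 0:
        return 1.0  # perfect agreement (also avoids 0/0 at scale endpoints)
    max_var = ((high + low) * m - m ** 2 - high * low) * n / (n - 1)
    return 1.0 - 2.0 * s2 / max_var

def icc1(scores_by_group) -> float:
    """One-way ICC from per-facility lists of respondent scores
    (balanced-design approximation for the group-size term k)."""
    all_scores = np.concatenate(scores_by_group)
    grand = all_scores.mean()
    n_groups, n_total = len(scores_by_group), all_scores.size
    ms_between = sum(g.size * (g.mean() - grand) ** 2
                     for g in scores_by_group) / (n_groups - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum()
                    for g in scores_by_group) / (n_total - n_groups)
    k = n_total / n_groups  # average group size
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Each statistic answers a different question: alpha concerns consistency across items within a respondent, a_wg concerns agreement across raters within a facility, and ICC concerns how much of the total variance lies between facilities.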
All analyses were conducted in Stata version 17.
This study was approved by the Institutional Review Board at the University of California, San Francisco (20–31802), the Research Ethics Committee at the Human Sciences Research Council (REC 1/19/08/20), and the uMgungundlovu Health District; all methods were carried out in accordance with relevant guidelines and regulations. All facilities provided consent for inclusion and each participant provided written informed consent to participate.
A representative from each of the 16 facilities consented to participation and assisted in completion of the facility audit; data from all facilities were extracted in full from district reporting. The median facility had 14 full-time clinical staff; only 3 of 16 facilities had all positions filled with permanent personnel (no vacancies, no interim posts) at the time of assessment (Table 2 A). Most facilities demonstrated some gaps in core infrastructure: median score was 61%. Routine district data suggested retention on ART was high in all facilities, with a median of 95% of patients retained between March 2021 and March 2022.
Of 194 providers approached, 186 consented to participate (95.9%); those declining cited insufficient time. One respondent who worked as a data capturer rather than in a patient-facing role was excluded from analysis. Consistent with health care providers in South Africa generally, most respondents were female (87.5%, Table 2 B). Due to extensive turnover in leadership of some clinics, surveys were completed by OMs at 14 of 16 facilities, including one interim OM. Just over one third of respondents were non-OM nurses. Respondents reported a median of 8 years of professional experience and 6 years at their current facility.
At the individual level, respondents tended to agree with most items: average scores for the proposed measures clustered near 3 = “Agree” out of the possible range 1 – 4 (Table 3 ). Straightline responses were common, ranging from 34% of respondents on the measure of stress to 54% for critical consciousness; nearly all such responses were uniformly “Agree”, except on the measure of stress, where straightline responses were split between “Agree” and “Disagree” (data not shown). Leadership coordination had the highest missingness, with 59 participants responding “Don’t know” or skipping at least one item, primarily the two items related to the external clinic committee (whether the committee met regularly and whether leaders acted on its input). Cronbach’s alpha indicated moderate to strong inter-item agreement for all measures except coordination. We excluded coordination from subsequent analysis given its high degree of missingness and inadequate inter-item agreement.
Facilities accounted for as much as 22% (critical consciousness) and 23% (stress) of total variance in mean scores. ICC exceeded a minimum threshold of 0.05 for feedback and monitoring, stress, cohesion, and critical consciousness (Table 3 B); near-zero ICC for leadership engagement and resource mobilization suggested these measures could not reliably distinguish between facilities. ICC was higher when limited to professional nurses for most measures, suggesting that for measures other than leadership engagement, professional nurses responded more consistently within facilities than other providers. The distribution of facility means underscores the homogeneity of scales like leadership engagement across all facilities (Additional file 2, Figure S1).
Item responses demonstrated moderate to strong agreement within facilities, with a_wg(j) ranging from 0.57 for stress (12 of 16 facilities with at least moderate agreement) to 0.78 for critical consciousness (all facilities with at least moderate agreement). Facilities with a_wg(j) < 0.50 showed agreement too inconsistent to treat a summary statistic as representative of the facility as a whole.
Limiting analysis to facilities with at least moderate agreement on a given measure, we found that the 6 measures of organizational context showed substantial correlation within facility: absolute correlation exceeded the predetermined threshold of 0.30 in all cases and achieved statistical significance at p < 0.05 for multiple assessments despite the small number of facilities (Table 4 A). When compared with predicted inputs and outcomes of facility climate (Table 4 B), correlation with staff continuity was moderate (rho > 0.30) for feedback and monitoring, resource mobilization, and stress, with only stress showing a statistically significant correlation (-0.68). Higher scores on feedback and monitoring were correlated with lower facility infrastructure (rho = -0.53), contrary to expectation. Cohesion was correlated with higher retention on ART (rho = 0.49, p = 0.09).
In our exploratory analysis to understand potential differences in context in facilities where staff were largely in agreement on their scoring, we found that respondents in the 9 facilities with moderate agreement on all scales reported more positive organizational context than the other 7 facilities, with statistically significant differences in leadership engagement, resource mobilization, and stress (Table 5 ). The largest difference was in reported stress: average scores on the stress scale were 0.41 points lower (less stress) in facilities with a_wg(j) ≥ 0.50 on all scales.
In this study, we developed and adapted measures for 7 domains of organizational context based on implementation science frameworks and expertise within primary care clinics in South Africa. The measures demonstrated reasonable individual-level consistency in our study population, except for the coordination scale created de novo; the remaining measures showed moderate to strong agreement and low to moderate reliability within facility. Variance between facilities was modest, possibly reflecting the shared context of a rural setting in a single district. Measures generally correlated with each other at the facility level, though we found limited evidence of relationships between facility scores and hypothesized predictors and outcomes in validation analyses. Facilities with stronger agreement among respondents also tended to have a more positive context.
The Expert Panel concurred with existing literature and implementation science frameworks that facility leadership was critical to program implementation and sustainment. In this setting of relatively small clinics and distribution of responsibilities across staff, they prioritized overall leadership above leadership specific to implementation of one program, which has been more commonly measured in US-based implementation science research [ 35 ]. We adapted or developed measures for four aspects of overall leadership hypothesized to improve program implementation: engagement, feedback and monitoring, resource mobilization, and coordination. The newly created items on coordination with external partners and clinic committees proved difficult for some respondents to answer and showed limited agreement even within complete responses. Further efforts to capture this important construct, potentially as an index rather than a scale, are warranted. The other leadership measures demonstrated good item agreement and moderate agreement within facilities in our sample; for measures developed for use at an organizational level, adequate agreement across raters is critical. The finding that ICCs for leadership measures were generally higher among professional nurses, typically the most trained professional cadre in primary care facilities, indicates that, relative to all respondents, these providers were more consistent within facilities than between them, potentially due to greater exposure to clinic leaders or to differing interpretations among professional nurses and other personnel of who qualifies as a ‘leader’. Our cognitive interviews demonstrated consistent understanding of ‘leader’ among the professional nurses and OMs we interviewed; including additional cadres could be useful to extend this evidence.
The findings to date support use of these leadership measures within higher cadres such as professional nurses in similar settings, particularly for clinical interventions.
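The within-group agreement logic underlying these comparisons can be illustrated with a short sketch. The study reports the a_wg index; shown here is the closely related r_wg(1) index for a single item, which compares observed response variance to the variance expected if respondents answered uniformly at random (the no-agreement null). The function name and ratings below are hypothetical, for illustration only:

```python
from statistics import variance

def rwg(ratings, n_options):
    """r_wg(1) within-group agreement for one item rated 1..n_options.
    1 means perfect agreement; values at or below 0 mean no more
    agreement than chance (conventionally truncated to 0)."""
    expected = (n_options ** 2 - 1) / 12  # variance of a uniform null
    obs = variance(ratings)               # sample variance (n-1 denominator)
    return max(0.0, 1 - obs / expected)

print(rwg([4, 4, 4, 4], 5))    # 1.0 -- every rater agrees
print(rwg([1, 2, 3, 4, 5], 5)) # 0.0 -- responses as dispersed as chance
```

Variants of r_wg and a_wg, and thresholds for interpreting them, are discussed in LeBreton and Senter [33].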
Beyond leadership, we tested measures of stress, cohesion, and critical consciousness hypothesized to shape uptake of new programs. These scales similarly demonstrated good item agreement and moderate inter-rater agreement within facilities; ICCs (0.23, 0.14, and 0.22, respectively) well exceeded the minimum threshold of 0.05 among all participants, suggesting that responses were more consistent across cadres within facilities than they were for the leadership measures.
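The ICC values above index how much of the total response variance lies between facilities rather than within them. A minimal one-way random-effects ICC(1) sketch, assuming balanced groups (the function name and data are hypothetical, not the study's code):

```python
from statistics import mean

def icc1(groups):
    """One-way random-effects ICC(1): between-facility variance share,
    for J groups (facilities) each with k raters."""
    J, k = len(groups), len(groups[0])
    grand = mean(x for g in groups for x in g)
    group_means = [mean(g) for g in groups]
    # Mean squares from a one-way ANOVA decomposition
    msb = k * sum((m - grand) ** 2 for m in group_means) / (J - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, group_means) for x in g) / (J * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect within-facility consistency -> ICC of 1
print(icc1([[1, 1, 1], [5, 5, 5], [9, 9, 9]]))  # 1.0
# No between-facility differences -> ICC at or below 0
print(icc1([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # -0.5
```

An ICC above a threshold such as 0.05 indicates that facility membership explains a non-trivial share of response variance, justifying aggregation of individual responses to a facility-level score.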
The six scales demonstrating sound measurement properties also tended to correlate with one another within facilities, potentially reflecting less random variation in these facilities and/or respondents providing similar answers across scales and between raters within these facilities. Three scales correlated with inputs and outputs in accordance with predictions: resource mobilization, stress, and cohesion. Better resource mobilization was correlated with higher staff continuity. The scale for stress, the only construct indicating a negative climate and for which agreement indicated worse performance, had less homogeneity than the other scales, suggesting potential to refine the measure further to better distinguish between clinic contexts. Higher stress was also correlated with lower staff continuity. The scale for cohesion, capturing teamwork among providers, demonstrated moderate heterogeneity between individuals and between facilities and was correlated with retention on ART. Given the importance of provider burnout before and especially during the COVID-19 pandemic [36, 37, 38], better assessment of stress and cohesion can help identify the best-performing clinics and target the facilities most in need of management or individual-level interventions to foster teamwork and coping with stress.
The remaining scales—on leadership engagement, feedback and monitoring, and critical consciousness—demonstrated two drawbacks. The first was high levels of straightline responses, with approximately half of respondents indicating the same answer—typically “Agree”—to all items. These response patterns could be explained in several ways: 1) truly uniform conditions across and within these facilities within a single district, 2) insufficient distinction between items to capture indications of very low or very high levels of each construct, 3) social desirability within a hierarchical work setting, 4) lack of strong opinion, particularly given the strain providers face to deliver care amidst constrained resources, and/or 5) respondent inattention or fatigue. In the absence of a neutral response option (which we did not provide to avoid respondents defaulting to the median option), one or more of these explanations could have resulted in repeatedly agreeing. This degree of invariance can inflate apparent agreement within individuals and facilities, but it undermines the utility of the measures in distinguishing between facilities, should such distinctions exist.
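A straightline screen of the kind described, flagging the share of respondents who give an identical answer to every item in a scale, is simple to compute. A minimal sketch with hypothetical responses (1 = Strongly disagree to 4 = Strongly agree; the function name is illustrative, not from the study):

```python
def straightline_rate(responses):
    """Fraction of respondents giving the identical answer to every
    item in a scale -- a simple screen for invariant response patterns."""
    invariant = [r for r in responses if len(set(r)) == 1]
    return len(invariant) / len(responses)

# Four respondents answering a hypothetical 4-item scale
resp = [[3, 3, 3, 3],   # straightline ("Agree" throughout)
        [3, 4, 3, 3],
        [2, 2, 2, 2],   # straightline
        [1, 2, 3, 4]]
print(straightline_rate(resp))  # 0.5
```

A high rate on its own cannot distinguish the five explanations listed above; it flags scales whose items may need more varied wording (e.g., reverse-coded items) to elicit differentiated responses.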
The second drawback was inconsistent evidence of correlation with hypothesized predictors (staff continuity and infrastructure) and with patient outcomes (retention on ART) in the validation analysis. Careful consideration is required to understand these findings. It is possible that these initial efforts to adapt the constructs of organizational context did not fully capture the dynamics that most strongly shape performance in these facilities. This may be particularly salient given the time of the assessment following the upheaval of the COVID-19 pandemic, which shaped staff continuity, organizational context, and ART retention. An additional limitation is the relative insensitivity of the outcome measure: ART programs are longstanding, and our measure of patient retention based on (imperfect) aggregate data demonstrated little variability across the sampled facilities. Organizational context at the time of assessment may have had little influence on patient retention even had it been measured perfectly. Measures reflecting implementation or sustainment of more recently introduced programs would provide an indicator more sensitive to variation in organizational context.
Our study has multiple strengths, including use of implementation science frameworks and organizational readiness theory to propose measures of organizational context in primary care facilities in South Africa. This work expands on the qualitative work attesting to the importance of organizational context in implementing and maintaining interventions in this setting [17, 20, 21, 39]. We relied on a majority South African Expert Panel to prioritize constructs and items for measurement, and we conducted detailed cognitive interviews to revise and clarify items. Limitations include difficulty in reaching providers, particularly clinic managers, amidst regular turnover and COVID-19 challenges (including rapid changes in clinical responsibilities and locations, provider illnesses and deaths, and restrictions on routine activities) and the reliance on aggregate patient outcome data that were both imperfect and potentially insensitive to organizational context. Soliciting perspectives on leadership and organizational context in hierarchical settings is inherently fraught; it is difficult to disentangle social desirability from true agreement. Thresholds for agreement measures are imposed on a continuous metric and may not distinguish truly different performance levels [33].
This study was an initial effort to adapt and test measures of organizational context to better understand program implementation in primary care within the South African health system. The work confirms the importance of organizational context from the perspective of those working within primary care clinics and supports ongoing calls for further efforts to develop and test theories, frameworks, and measures that capture the dynamics of health care organizations in resource-constrained settings. While this initial adaptation of theory and measurement to the context of South African clinics produced scales with sound measurement properties, several of which show promise for differentiating facilities, notably resource mobilization, stress, and cohesion, further work is needed to identify the domains of organizational context that most strongly shape patient outcomes. From there, further efforts to refine constructs and measures are warranted, including positive deviance assessments to ensure a sample of facilities with strongly divergent performance and inclusion of implementation outcomes more closely tied to organizational context.
The datasets used during the current study are available from the corresponding author on reasonable request.
Abbreviations
ART: Antiretroviral
awg: Agreement within group
CFIR: Consolidated Framework for Implementation Research
COVID-19: Coronavirus disease 2019
DoH: Department of Health
EPIS: Exploration, Preparation, Implementation, Sustainment
GEE: Generalized estimating equation
HIV: Human immunodeficiency virus
ICC: Intraclass correlation coefficient
OM: Operational manager
PEPFAR: President's Emergency Plan for AIDS Relief
REDCap: Research Electronic Data Capture
References
1. Nutbeam D. Achieving 'best practice' in health promotion: improving the fit between research and practice. Health Educ Res. 1996;11(3):317–26.
2. Dionne KY. Doomed Interventions: The Failure of Global Responses to AIDS in Africa. Cambridge: Cambridge University Press; 2017.
3. UNAIDS: Joint UN Programme on HIV/AIDS. UNAIDS: South Africa. 2018 [cited 2019 Dec 11]. Available from: https://www.unaids.org/en/regionscountries/countries/southafrica
4. Gouda HN, Charlson F, Sorsdahl K, Ahmadzada S, Ferrari AJ, Erskine H, et al. Burden of non-communicable diseases in sub-Saharan Africa, 1990–2017: results from the Global Burden of Disease Study 2017. Lancet Glob Health. 2019;7(10):e1375–87.
5. Sharman M, Bachmann M. Prevalence and health effects of communicable and non-communicable disease comorbidity in rural KwaZulu-Natal, South Africa. Trop Med Int Health. 2019;24:1198–207.
6. Croce D, Mueller D, Rizzardini G, Restelli U. Organising HIV ageing-patient care in South Africa: an implementation science approach. S Afr J Public Health. 2018;2(3):59–62.
7. Yamey G. What are the barriers to scaling up health interventions in low and middle income countries? A qualitative study of academic leaders in implementation science. Glob Health. 2012;8(1):11.
8. Geng EH, Peiris D, Kruk ME. Implementation science: relevance in the real world without sacrificing rigor. PLoS Med. 2017;14(4).
9. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.
10. Aarons GA, Green AE, Trott E, Willging CE, Torres EM, Ehrhart MG, et al. The roles of system and organizational leadership in system-wide evidence-based intervention sustainment: a mixed-method study. Adm Policy Ment Health. 2016;43(6):991–1008.
11. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
12. Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17(1):75.
13. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.
14. Allen JD, Towne SD, Maxwell AE, DiMartino L, Leyva B, Bowen DJ, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. 2017;17(1):591.
15. Daivadanam M, Ingram M, Annerstedt KS, Parker G, Bobrow K, Dolovich L, et al. The role of context in implementation research for non-communicable diseases: answering the 'how-to' dilemma. PLoS ONE. 2019;14(4).
16. Alonge O, Rodriguez DC, Brandes N, Geng E, Reveiz L, Peters DH. How is implementation research applied to advance health in low-income and middle-income countries? BMJ Glob Health. 2019;4.
17. Gilson L, Ellokor S, Lehmann U, Brady L. Organizational change and everyday health system resilience: lessons from Cape Town, South Africa. Soc Sci Med. 2020;266:113407.
18. Julien A, Anthierens S, Van Rie A, West R, Maritze M, Twine R, et al. Health care providers' challenges to high-quality HIV care and antiretroviral treatment retention in rural South Africa. Qual Health Res. 2021;31(4):722–35.
19. Leslie HH, West R, Twine R, Masilela N, Steward WT, Kahn K, et al. Measuring organizational readiness for implementing change in primary care facilities in rural Bushbuckridge, South Africa. Int J Health Policy Manag. 2020;11:912–8.
20. Odendaal W, Chetty T, Goga A, Tomlinson M, Singh Y, Marshall C, et al. From purists to pragmatists: a qualitative evaluation of how implementation processes and contexts shaped the uptake and methodological adaptations of a maternal and neonatal quality improvement programme in South Africa prior to, and during COVID-19. BMC Health Serv Res. 2023;23(1):819.
21. van Heerden A, Ntinga X, Lippman SA, Leslie HH, Steward WT. Understanding the factors that impact effective uptake and maintenance of HIV care programs in South African primary health care clinics. Arch Public Health. 2022;80(1):221.
22. Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. 2014;70(9):1970–9.
23. Department of Health, Republic of South Africa. Province of KwaZulu-Natal Annual Performance Plan 2018/19–2020/21 [Internet]. KwaZulu-Natal, South Africa [cited 2023 Feb 6]. Available from: http://www.kznhealth.gov.za/app/APP-2018-19.pdf
24. Dwyer-Lindgren L, Cork MA, Sligar A, Steuben KM, Wilson KF, Provost NR, et al. Mapping HIV prevalence in sub-Saharan Africa between 2000 and 2017. Nature. 2019;570(7760):189–93.
25. Fernandez ME, Walker TJ, Weiner BJ, Calo WA, Liang S, Risendal B, et al. Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement Sci. 2018;13(1):52.
26. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational Readiness to Change Assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4(1):38.
27. Lippman SA, Neilands TB, Leslie HH, Maman S, MacPhail C, Twine R, et al. Development, validation, and performance of a scale to measure community mobilization. Soc Sci Med. 2016;157:127–37.
28. Lippman SA, Maman S, MacPhail C, Twine R, Peacock D, Kahn K, et al. Conceptualizing community mobilization for HIV prevention: implications for HIV prevention programming in the African context. PLoS ONE. 2013;8(10):e78208.
29. Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement Sci. 2014;9:7.
30. Zullig LL, Muiruri C, Abernethy A, Weiner BJ, Bartlett J, Oneko O, et al. Cancer registration needs assessment at a tertiary medical center in Kilimanjaro, Tanzania. World Health Popul. 2013;14(2):12–23.
31. COVID-19 / Coronavirus | South African Government [Internet]. [cited 2023 Mar 20]. Available from: https://www.gov.za/Coronavirus
32. World Health Organization. Service Availability and Readiness Assessment (SARA) reference manual. Geneva: World Health Organization; 2013.
33. LeBreton JM, Senter JL. Answers to 20 questions about interrater reliability and interrater agreement. Organ Res Methods. 2008;11(4):815–52.
34. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. New York: Routledge; 1988.
35. Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9(1):45.
36. Khamisa N, Oldenburg B, Peltzer K, Ilic D. Work related stress, burnout, job satisfaction and general health of nurses. Int J Environ Res Public Health. 2015;12(1):652–66.
37. van der Colff JJ, Rothmann S. Occupational stress, sense of coherence, coping, burnout and work engagement of registered nurses in South Africa. SA J Ind Psychol. 2009;35(1):1–10.
38. McKnight J, Nzinga J, Jepkosgei J, English M. Collective strategies to cope with work related stress among nurses in resource constrained settings: an ethnography of neonatal nursing in Kenya. Soc Sci Med. 2020;245.
39. Gilson L, Barasa E, Nxumalo N, Cleary S, Goudge J, Molyneux S, et al. Everyday resilience in district health systems: emerging insights from the front lines in Kenya and South Africa. BMJ Glob Health. 2017;2:e000224.
The authors are grateful to Anna Leddy for her contributions to survey design and data collection, to the data collection team, the interview and survey respondents who provided their time and insights, and to the Expert Panel for insights and guidance throughout the project. Expert Panelists were: Lungile Mshengu, Nomusa Mtshali, Paul Nijas, Fiona Scorgie, Jonathan Stadler, Michéle Torlutte, Joslyn Walker, Bryan Weiner, and Petra Zama.
This work was supported by the National Institute of Mental Health R21MH123389 (Lippman & Steward). The funder had no role in the preparation of this manuscript.
Authors and Affiliations
Division of Prevention Science, Department of Medicine, University of California, San Francisco, San Francisco, USA
Hannah H. Leslie, Sheri A. Lippman & Wayne T. Steward
MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
Sheri A. Lippman
Division of Human and Social Capabilities, Human Sciences Research Council, Durban, South Africa
Alastair van Heerden, Mbali Nokulunga Manaka & Phillip Joseph
Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, SAMRC/WITS Developmental Pathways for Health Research Unit, University of the Witwatersrand, Johannesburg, South Africa
Alastair van Heerden
Departments of Global Health and Health Systems and Population Health, University of Washington, Seattle, USA
Bryan J. Weiner
Conceptualization: HHL, WTS, BJW, AVH, SAL. Methodology: HHL, WTS, BJW, MNM, AVH, SAL. Formal analysis: HHL. Investigation: HHL, WTS, MNM, PJ, AVH, SAL. Writing, original draft: HHL. Writing, review and editing: All. Supervision: WTS, SAL. Project administration: WTS, MNM, PJ, AVH, SAL. Funding acquisition: WTS, SAL, AVH, HHL, BJW.
Correspondence to Hannah H. Leslie.
Ethics approval and consent to participate, consent for publication.
Not applicable.
The authors declare no competing interests.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Material 1. Supplementary Material 2.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Leslie, H.H., Lippman, S.A., van Heerden, A. et al. Adapting and testing measures of organizational context in primary care clinics in KwaZulu-Natal, South Africa. BMC Health Serv Res 24 , 744 (2024). https://doi.org/10.1186/s12913-024-11184-9
Received: 04 August 2023
Accepted: 07 June 2024
Published: 18 June 2024
DOI: https://doi.org/10.1186/s12913-024-11184-9
ISSN: 1472-6963