Overview
Principles
Intentional analysis and inclusive interpretation
Culturally competent evaluators “carefully examine the data to understand how contextual conditions and structural inequities affect the outcomes” (Lee, 2007). Mixed methods allow for the detection of patterns of statistical significance through quantitative analysis and contextual understanding of the patterns through qualitative analysis (Lee, 2007). “Culture serves an intrinsic role in influencing how we define categories for organization purposes, classify and interpret data, and determine which comparisons have meaning. Therefore, engaging stakeholders during the analysis is essential” (CDC, 2014). Culturally competent evaluators also consider the intersection of multiple cultural and social identities (Thomas & Campbell, 2020).
Culturally responsive analysis requires “sensitivity to, and understanding of, the cultural context in which data are gathered” (Frierson et al., 2002) and culture is assumed to frame validity, rather than validity framing culture (Kirkhart, 2013). Deriving meaning from data in program evaluations that are culturally responsive requires people who understand the context in which the data were gathered (Frierson et al., 2002; Hood et al., 2015). Stakeholder review panels are one way to examine evaluative findings gathered by the principal evaluator and/or an evaluation team (Frierson et al., 2002). Special attention should be paid to differences by groups (disaggregation) and outliers in the analysis process (Frierson et al., 2002; Hood et al., 2015; Public Policy Associates, 2015).
Culturally responsive and equitable evaluation (CREE) is a compilation of culturally responsive and equity-focused evaluation approaches. Analysis and dissemination would follow those of culturally responsive evaluation, culturally responsive Indigenous evaluation, and other participatory approaches that lead to both accessible and actionable findings. It includes engagement of the priority population in the analysis and interpretation of the data in order to develop contextualized and sustainable recommendations (Mendez & Taniuchi, 2020; Anderson et al., 2020).
Bowman et al. (2015) note that the evaluation work is designed to allow for continuous program evaluation and quality improvement, and is built upon human and infrastructure capacities for future evaluations. Analysis and interpretation are guided by cultural perspective and context. Member checking, a method that improves the credibility of qualitative research by matching the evaluator/researcher’s data and analysis with participants’ and community interpretations, is often used in decolonizing research to improve the credibility of qualitative data (Smith, 2012, as cited in Bowman et al., 2015). Tribal and community elders and other representatives are key participants in the analysis and interpretation of the data.
Through an empowerment process, people learn to think evaluatively (Patton, 2002, as cited in Fetterman et al., 2015). This process makes them more likely to make decisions and take actions based on their evaluation data. Empowerment evaluation is designed to help participants “improve their programs using a form of self-evaluation and reflection. Participants continually assess their progress toward self-determined goals and shape plans and strategies in accordance with the community assessment” (Thomas & Campbell, 2020).
Equitable evaluation and the practice of the Equitable Evaluation Framework™ (EEF) examine what works, for whom, and under what conditions (Bamberger & Segone, 2011; Center for Evaluation Innovation et al., 2017). Both Dean-Coffey (2018) and Bamberger and Segone (2011) provide guidance on how data should be interpreted with an eye toward equity. Dean-Coffey (2018) states, “Data about the problem being addressed — as well as about the outcomes of the change initiative — should at minimum be disaggregated, so that differential effects by race, ethnicity, gender, language, or a myriad of other dimensions can be spotted and accounted for.” Furthermore, stakeholders play a prominent role in the interpretation of findings. Bamberger and Segone (2011) believe that discussing the results with them provides “an opportunity to explain how their contributions were used, and to provide them with the chance to correct any inaccuracies and to clarify any doubts.” In international settings, implementation partners and program managers are more likely than community members receiving services to have time to contribute to interpreting findings. Nevertheless, involving community members in interpreting data is important, as “community members or implementers of the intervention studied can also provide important context for and interpretation of the results of quantitative or qualitative data analysis” (Stern et al., 2019). Additionally, the Equitable Evaluation Framework™ identifies cultural responsiveness and context as key components of rigor (EEI & GEO, 2021). Further, one of the principles of the EEF states that evaluative work can and should answer critical questions about the effect of a strategy on different populations and the underlying systemic drivers of inequity (Center for Evaluation Innovation et al., 2017; EEI, 2020).
Most germane to analysis is the transformative ontological assumption which deals with the nature of reality. This assumption “holds that there are diversities of viewpoints with regard to many social realities, but we need to place those viewpoints within a political, cultural, and economic value system to understand the basis for the differences. And then, we need to struggle with revealing those multiple constructions, as well as with the decisions about privileging one perspective over another” (Mertens, 1999). “We are led to ask questions such as, ‘Whose reality is privileged in this context?’ ‘What is the mechanism for challenging perceived realities that sustain an oppressive system?’ ‘What are the consequences in terms of who is hurt if we accept multiple versions of reality or if we accept the ‘wrong/privileged’ version?’” (Mertens, 2010).
© 2022 SLP4i and The Colorado Trust, authored by Katrina Bledsoe, Felisa Gonzales, and Blanca Guillen-Woods. This work is protected by copyright laws. No permission is required for its non-commercial use, provided that the authors are credited and cited.
For full citation use: Bledsoe, K., Gonzales, F., & Guillen-Woods, B.* (2022). The Eval Matrix™. Strategic Learning Partners for Innovation. https://slp4i.com/the-eval-matrix
*These authors contributed equally to this work with support from the Annie E. Casey Foundation and The Colorado Trust.