
Philosophies and Approaches

Description

In Hopson’s (2009) words, “culturally responsive evaluation (CRE) is a theoretical, conceptual, and inherently political position that includes the centrality of and attunedness to culture in the theory and practice of evaluation.” There are three key assumptions underlying this approach:

– The ontological assumption holds that there are multiple realities, linked to lived experience.

– The epistemological assumption honors diverse ways of knowing.

– The methodological assumption historically favored qualitative methods but has since shifted to mixed methods.

This approach rejects the notion of evaluation as culture-free and calls attention to the values, beliefs, and context that shape participant experiences and evaluation design. CRE centers the needs of participants through interactive, dynamic, and participatory evaluation processes that attend to issues of power, race, decolonization, and equity, and contribute to social transformation (Hopson, 2009).

Unique Contributions

Chouinard and Cram (2019) reference the work of many others applying this approach to note that “the most recent shift in terminology has been from cultural competence to culturally responsive evaluation, from a focus on the cultural competency of evaluators to culturally responsive practice, denoting practical strategies and frameworks for evaluators. Culturally responsive approaches are most often rooted in a political concern for self-determination and societal transformation to enhance social inclusion, with attention given to the specific needs and cultural dimensions of a program’s participants and their wider community.”

History

Culturally responsive evaluation has its roots in the fields of education and assessment, beginning as early as the 1930s. Present-day conceptualizations of CRE have been largely driven by Hood, whose thinking shifted from culturally responsive pedagogy, to assessment, and finally to evaluation. He first used the term “culturally responsive evaluation” in 1998, honoring Stake’s scholarship on responsive evaluation, which calls for the evaluator to become well-acquainted with the program being evaluated, the participants’ needs and experiences, and the social context of the evaluation. Hood’s contribution was to call attention to culture and cultural differences in responsive evaluation practices and to highlight the need for evaluators who shared lived experience with the participants of the programs being evaluated. Critical race theory, epistemology, and Indigenous frameworks have informed CRE. Hood, along with several scholars, including Hopson at the University of Illinois and Frierson at the University of Florida, advanced the development of CRE, and Hood hosts the Center for Culturally Responsive Evaluation and Assessment (CREA). CREA has become a central organization for CRE and equity-focused approaches. More recently, CRE has expanded to more prominently feature issues of intersectionality, recognizing that culture is not a monolith (Neubauer et al., 2020).

PRINCIPLES

Promotion of equity and social justice; attention to issues of power

A key characteristic of culturally responsive evaluation is “attention to issues of power differentials among people and systems” (Hood et al., 2015). CRE requires the use of methods and tools that best serve the community (Hood et al., 2015) and the use of evaluation to promote social agendas that challenge current power structures and dynamics rooted in white supremacy and oppression (Caldwell & Bledsoe, 2019; Chouinard & Cram, 2019; Hopson, 2009; Public Policy Associates, 2015).

Engagement of partners and community members, particularly those with less social power, during all phases of the evaluation

“Stakeholders play a critical role in all evaluations, especially culturally responsive ones, providing sound advice from the beginning (framing questions) to the end (disseminating the evaluation results)” (Frierson et al., 2002). It is important to acknowledge that “people are at the center” of culturally responsive evaluation and they know what they value and need (McBride, 2018). This approach recommends that evaluators “develop a stakeholder group representative of the populations the project serves, assuring that individuals from all sectors [and varying levels of power, status, and resources (Hood et al., 2015)] have the chance for input” (Frierson et al., 2002). One important way of doing this is to listen to and genuinely hear stakeholders’ questions so that those questions are incorporated into the evaluation design (Frierson et al., 2002). As in culturally competent evaluation, it is critical to establish a climate of trust and respect among all parties involved (Hood et al., 2015).

Composition of evaluation team and reflection on assumptions and biases

Social investments often target populations of color and yet are evaluated by white evaluators (Public Policy Associates, 2015). Culturally responsive evaluation argues that data collection, quality, and interpretation can be improved when evaluators share a lived experience with program participants (Frierson et al., 2002; Hopson, 2009; Hood et al., 2015). However, this does not mean that racial congruence automatically leads to cultural congruence (Frierson et al., 2002). A diverse evaluation team allows for multiple perspectives in the evaluation (Public Policy Associates, 2015) and genuine connection with the local context (Hood et al., 2015). All evaluators should critically reflect on their own personal cultural preferences and biases (Hood et al., 2015; Hopson, 2009; Public Policy Associates, 2015) and “make a conscious effort to restrict any undue influence they might have on the work” (Frierson et al., 2002).

Consideration of cultural and historical contexts and different worldviews

Culturally responsive evaluation requires particular attention to the historical, sociopolitical, community, and organizational contexts in which an evaluation will be conducted (McBride, 2018). This includes the history of the location, the program, and the people (Hood et al., 2015). “As much as possible, evaluators should understand the realities and challenges to the daily lives of the priority population based on political positioning, leadership, power dynamics, peer agency coordination, etc. These conditions and more could influence the success of an investment and the outcomes continuing, discontinuing, or expanding support based on evaluation results and findings” (Public Policy Associates, 2015).

Intentional methods and thoughtful data collection

Social science methodologies are socially constructed and centered on the Western white male perspective (Chouinard & Cram, 2019). Culturally responsive evaluation originally favored qualitative over quantitative methodologies to allow for the emergence of cultural nuances but now recommends mixed methods (Hood et al., 2015). For the purposes of CRE, the evaluation design does not need to be elaborate; it just needs to be appropriate for the context. Instruments should be culturally valid; that is, they should resonate with the culture of the participants in the program being evaluated (Public Policy Associates, 2015). This may require pilot testing and adaptation to ensure cultural sensitivity and responsiveness. The process of data collection should also be responsive to cultural context. Failure to ensure that those collecting qualitative data are attuned to cultural context may lead to invalid data (Frierson et al., 2002).

Intentional analysis and inclusive interpretation

Culturally responsive analysis requires “sensitivity to, and understanding of, the cultural context in which data are gathered” (Frierson et al., 2002) and culture is assumed to frame validity, rather than validity framing culture (Kirkhart, 2013). Deriving meaning from data in program evaluations that are culturally responsive requires people who understand the context in which the data were gathered (Frierson et al., 2002; Hood et al., 2015). Stakeholder review panels are one way to examine evaluative findings gathered by the principal evaluator and/or an evaluation team (Frierson et al., 2002). Special attention should be paid to differences by groups (disaggregation) and outliers in the analysis process (Frierson et al., 2002; Hood et al., 2015; Public Policy Associates, 2015).

Accessible and actionable evaluation findings

Results from culturally responsive evaluations should be widely shared and presented clearly for all of the intended audiences (Frierson et al., 2002). This may require multiple, audience-specific communication formats (Hood et al., 2015).

© 2022 SLP4i and The Colorado Trust, authored by Katrina Bledsoe, Felisa Gonzales, and Blanca Guillen-Woods. This work is protected by copyright laws. No permission is required for its non-commercial use, provided that the authors are credited and cited.

For full citation use: Bledsoe, K., Gonzales, F., & Guillen-Woods, B.* (2022). The Eval Matrix©. Strategy Learning Partners for Innovation. https://slp4i.com/the-eval-matrix.
*These authors contributed equally to this work with support from the Annie E. Casey Foundation and The Colorado Trust.

The Eval Matrix site designed by KarBel Multimedia