Our work is shaped by Systems Thinking, Developmental Evaluation, Results-Based Accountability, and Data Visualization. We are highly motivated to make these theories and methods work in the real world – in ways that make data useful. In everything we do, we strive for transparency, inclusiveness, and cultural responsiveness. We think evaluation is best when we are all on the same side of the table, looking at data together objectively, asking and answering questions, and figuring out what is working and what to do next.
The following is a brief summary of the four main evaluation drivers, which we mix and blend based on our experience and perspective and, more importantly, on the needs of each particular client and program.
Systems Evaluation is rooted in Systems Thinking (Cabrera), which accounts for the complex factors inherent in the larger system in which a program is embedded. It is a foundational framework that provides a language and process for explicit thinking, built on four universal patterns: making distinctions, organizing part-whole system structures, recognizing relationships, and taking perspectives.
Developmental Evaluation (Patton) is appropriate for innovative initiatives being implemented in dynamic and complex environments where participants, conditions, interventions, and contexts are in flux, and pathways for achieving desired outcomes are subject to change. This method supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Even when we are implementing evidence-based programs and proven strategies, we have learned that reality is messy and nuanced, and a developmental mindset keeps us agile.
Results-Based Accountability (Friedman) is a set of principles and processes that operationalize systems evaluation and developmental evaluation, making evaluation more practical and accessible. RBA makes important distinctions between means and ends and between attribution and contribution, particularly when it comes to program-level and population-level results. RBA avoids jargon and uses clear, plain, and consistent language to engage a variety of stakeholders. RBA focuses the evaluation on three broad and simple questions (which are the basis for more specific questions):
- How much is being done? (process)
- How well is it being done? (quality)
- Is anyone better off? (impact)
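The three RBA questions above amount to a simple categorization of performance measures. As an illustrative sketch only (the measures and values below are hypothetical, not from any actual program), the framework might be modeled like this:

```python
# A minimal sketch of RBA's three performance-measure questions,
# applied to a hypothetical tutoring program.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    category: str  # "how_much" (process), "how_well" (quality), or "better_off" (impact)
    value: float

# Hypothetical measures, one per RBA question.
measures = [
    Measure("students tutored", "how_much", 120),
    Measure("sessions rated helpful (%)", "how_well", 87),
    Measure("students improving grades (%)", "better_off", 64),
]

def by_question(measures, category):
    """Group measures under one of the three RBA questions."""
    return [m for m in measures if m.category == category]

# The "Is anyone better off?" measures are where impact lives.
print([m.name for m in by_question(measures, "better_off")])
```

Sorting every measure under one of the three questions is what keeps the conversation focused on ends rather than means.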
RBA considers context and trends when examining data. Often, the story behind the data is as important as, if not more important than, the quantitative data itself. In fact, qualitative stories can be elevated and analyzed in rigorous, quantitative ways. One way we do this is through digital video-survey interviews: a web-based survey app, similar to traditional tools like Survey Monkey, with the added capability of collecting and analyzing audio and video responses to questions. Open-ended responses can be analyzed using voice-to-text transcription and keyword analysis, including sentiment-analysis algorithms that look at word patterns to gauge opinions, attitudes, and emotions. The app allows for efficient collection of interviews from a large sample of respondents, building knowledge in more authentic ways.
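To make the keyword and sentiment steps concrete, here is a minimal, self-contained sketch of what such an analysis pipeline does with a transcribed response. The word lists are tiny hypothetical lexicons for illustration; production tools use much larger vocabularies or trained classifiers:

```python
import re
from collections import Counter

# Tiny illustrative lexicons -- real sentiment analysis would use a far
# larger vocabulary or a trained model (these word lists are hypothetical).
POSITIVE = {"helpful", "great", "confident", "improved", "support"}
NEGATIVE = {"confusing", "frustrated", "difficult", "unclear", "worse"}

def tokenize(text):
    """Lowercase a transcript and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def keyword_counts(text, top_n=5):
    """Most frequent tokens -- a first pass at keyword analysis."""
    return Counter(tokenize(text)).most_common(top_n)

def sentiment_score(text):
    """Naive lexicon score: (#positive - #negative) / #tokens.
    Positive values suggest favorable sentiment; negative, the opposite."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

transcript = ("The coaching was great and I feel more confident, "
              "though the paperwork was confusing.")
print(keyword_counts(transcript))
print(sentiment_score(transcript))
```

Even this naive word-pattern scoring shows how open-ended stories can be turned into comparable quantities without losing the underlying narrative, which stays available for closer reading.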
Data Visualization (Evergreen) is much more than making data look good. It is about understanding how we cognitively process information, moving from pre-attentive perception to long-term memory and, ultimately, to building knowledge. These techniques are used to produce user-friendly reports and other products that stimulate thinking and understanding.
We blend these methods with a relentless focus on facilitating thinking, exploring the story and context behind the data, and considering multiple perspectives. The result is more meaningful and robust analysis of quantitative and qualitative data to determine, understand, and document impact and to support strategic decision-making.