Many angles. One unique approach to evaluation.

Our work is shaped by Systems Thinking, Developmental Evaluation, Results-Based Accountability, and Data Visualization. We are highly motivated to make these theories and methods work in the real world, in ways that make data useful. In everything we do, we strive for transparency, inclusiveness, and cultural responsiveness. We think evaluation works best when we are all on the same side of the table, looking at data together objectively, asking and answering questions, and figuring out what is working and what to do next. What follows is a brief summary of the four main evaluation drivers that we mix and blend based on our experience and perspective and, more importantly, on the needs of each client and program.

Systems Evaluation is rooted in Systems Thinking (Cabrera), which considers the complex factors inherent in the larger system in which a program is embedded. It is a foundational framework that provides a language and process for thinking explicitly, built on four universal patterns of thinking: distinction-making, part-whole system structures, relationships, and perspectives.

Developmental Evaluation (Patton) is appropriate for innovative initiatives implemented in dynamic and complex environments, where participants, conditions, interventions, and contexts are in flux and pathways to desired outcomes are subject to change. This method supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Even when we implement evidence-based programs and proven strategies, we have learned that reality is messy and nuanced, and a developmental mindset keeps us agile.

Results-Based Accountability (Friedman) is a set of principles and processes that operationalizes Systems Evaluation and Developmental Evaluation, making evaluation more practical and accessible. RBA makes important distinctions between means and ends and between attribution and contribution, particularly when it comes to program-level versus population-level results. RBA avoids jargon, using clear, plain, and consistent language to engage a variety of stakeholders. It focuses the evaluation on three broad, simple questions, which become the basis for more specific ones:

  • How much is being done? (process)
  • How well is it being done? (quality)
  • Is anyone better off? (impact)

Data Visualization (Evergreen) is much more than making data look good. It is grounded in the cognitive science of how we process information, moving from pre-attentive processing to long-term memory to actually building knowledge. We use these techniques to produce user-friendly reports and other products that stimulate thinking and understanding.

We blend these methods with a relentless focus on facilitating thinking, exploring the story and context behind the data, and considering multiple perspectives. All of this serves one goal: more meaningful and robust analysis of quantitative and qualitative data that helps determine, understand, and document impact and supports strategic decision-making.