Reframing Research and Evaluation: A Personal Journey

By Jill Iman, Co-Managing Director and Director of Evaluation, Research and Implementation Science

My early training was in an academic, research-based setting where a hypothesis was either supported or not through rigorous research methodologies and analysis. Although this culture is changing, it has fostered an environment where success is tied to being able to support or demonstrate a concept, while hypotheses that are not borne out through research can be lost. These "failings," or the inability to "prove" something, are deemed less publishable and are not readily shared with colleagues and partners.

In creating this environment—where supported findings are promoted and those that are not are discarded—we are losing the ability to learn. This realization spurred my own journey to reframe how I thought about research and data.

When people hear the terms “research,” “evaluation,” “data,” “analysis,” etc., frankly, I think it can be disconcerting. “Evaluation” can bring up concerns about being, well, evaluated—told you are succeeding or missing the mark. 

I get it. I know it. I feel it. But that’s not how I see it.

Evaluation, research, data and all that jazz are a means, not an end: they give us all an opportunity to reflect, learn, adapt, evolve, align and improve.

I see the process and the strategy as similar to Carol Dweck’s portrayal of a growth mindset rather than a fixed mindset. According to Dweck, a fixed mindset is one in which an individual believes that certain traits or characteristics are innate (you are either smart or you aren’t), whereas those with a growth mindset tend to believe that abilities and intelligence can be developed, changed and enhanced over time.

And just as Dweck argues it is possible to shift mindsets, it is quite possible to shift our thinking (and that of our colleagues, board members and broader organization) about research and evaluation.

Let’s reframe!

How? Here are some initial ideas:

  • Identify goals and set research questions that are focused on learning.
    • For example, instead of “Does my program work?” consider questions like “Is my program more or less effective when coupled with other supportive services? For certain groups of people? Delivered in one way compared with another?”
  • Engage others in the development and interpretation of evaluation tools and data.
    • This allows for greater buy-in and participation in the process and an opportunity for feedback and learning, allowing many to reflect on the meaning of results.
  • Set realistic expectations.
    • Consider setting standards for demonstrating change that are aligned with the people you are serving and informed by previous research. (For example, if 60 percent of participants in your program report improved outcomes, should the program be considered a failure because that number is not 100 percent?)
  • Incorporate context into the evaluation.
    • Take a step back and reflect on the factors in the environment, in the lives of those you serve and in the broader community that may influence findings.
  • Create a culture of learning within the organization.
    • Establish a safe environment for learning among staff and board members by reviewing findings on an ongoing basis and celebrating both meeting goals and identifying opportunities for change.
  • Evaluate the evaluation, and embed flexibility.
    • Take time to ensure that the evaluation itself is aligned with questions and context. (Consider things like clarity, potential bias, and cultural sensitivity.) And try to be responsive to evaluation lessons (for example, develop a feedback loop so that as information comes in, it is shared).

We’ve all been a part of creating a culture that values “success” (for example, programs that demonstrate “impact” are more likely to be funded), but I’m seeing this change through your work and JVA’s. In recent years, the focus has shifted toward learning, creating feedback loops and understanding that true “impact” takes time and is incredibly nuanced. For example, consider the concepts of developmental evaluation, empowerment evaluation and appreciative inquiry, which incorporate language and ideas that foster a learning approach.

We all have a role to play in shifting the dialogue and informing the conversation.

How do you view evaluation? How do you talk about it? What keeps you up at night when you think about evaluation?

March 13th, 2018 | 2018, Blog, Evaluation, Implementation Science, Unconsultants