Strengths-Based Monitoring, Learning and Evaluation

--

Image: many coloured measuring tapes entangled together on a white surface. Photo by Patricia Serna on Unsplash.

As development practitioners and researchers, we must think carefully about the types of Monitoring, Learning and Evaluation (learning comes before evaluation in my process) we use with the people and communities we support. Our measures can cause conflict and distress, especially for under-represented community members. They can undermine trust and engagement and produce unreliable data.

A few years ago, I was talking with some researchers about measuring outcomes of inclusion of under-represented communities. We discussed different types of standardised methodologies and typical Monitoring, Evaluation and Learning (MEL) indicators for targeted groups of under-represented people (women, people with disabilities and people with diverse sexual orientations, gender identities, expressions and sex characteristics).

I had been increasingly uneasy about promoting a strengths-based practice while constantly focusing on what was 'wrong' in the programs. The people I talked to thought I was being idealistic, and that the research and MEL were much more important than adopting a strengths-based approach.

Around this time, I asked for stakeholder feedback on a project I led. To help measure the impact, we decided to use a survey exploring how inclusive stakeholders in the two communities we were working in were, focusing on the inclusion of women and people with disabilities in their local communities. Did the local communities have greater inclusion after two years of working in the community? (It was a five-year project.)

The survey had 20 statements (rated on a scale from strongly disagree to strongly agree). Some were positive, e.g., "If we want people with disabilities to respect us, we must treat them with respect." Others were quite negative, e.g., "We don't employ people with disabilities because they don't have the capacity to participate at this level."

Fortunately, before asking them to complete the survey, we explained that we wanted their feedback about it and had taken steps to ensure confidentiality. After they had placed their surveys in an envelope (so we didn’t see them), we asked what they thought about the survey.

The group reacted really badly to the negative focus of many of the questions. All the participants faced major challenges within the development system and the specific thematic area we were working in (WASH), and in their lives generally, and they felt that by using the survey, we were judging them. They were also worried about how the information would be used (e.g., would it be passed on to our donors?) and said it reminded some of them of previous negative experiences with aid workers.

When it became clear how negatively they felt, and after we had discussed it with them for a while, we invited them to take their surveys back and destroy them. If we had not returned the surveys, and not set them up as carefully as we had, our trust would have been undermined, and it would have been harder to engage them in the inclusion of marginalised and norm-defying groups.

The experience helped build trust and engagement because, by returning the surveys without seeing their responses, we clearly demonstrated that we had listened to them, trusted their judgement, and valued their insights. That process was a huge step forward in practising a strengths-based approach and working together to ensure under-represented groups were included in community discussions.

Such surveys can also produce quite unreliable data. Some years back a community practitioner told me about a researcher who had given a group of women she was working with an anonymous survey to complete. The practitioner thought some of the questions were quite personal and intrusive, so after the researcher had left, she asked the women whether they were worried about answering the questions. They replied, “No, we just lied.”

The measures we use need to be consistent with our approach (if we are strengths-based, we need to find or develop strengths-based measures), respectful, and appropriate to our audience. It is not OK for us to think that research and evaluation are more important than the people we work with.

For more blog posts on strengths-based approaches:

Seven principles that underpin my strengths-based approach to group work

Strengths-based practice: more than being positive

Sign up for my monthly(ish) newsletter to receive news, views, events, and freebies related to the inclusion of under-represented communities in the international development and humanitarian systems.

--

Lana Woolf: Including the Excluded

Founder of Community Powered Responses; Co-founder of Edge Effect, GEDSI specialist in the area of Women; People with Disabilities; People with Diverse SOGIESC