How to REALLY Know That Your Volunteers Deliver on Your Mission

Written by Guest Contributor | March 12, 2019

It’s trickier than you think to describe impact – and that’s good.

Every time I offer a volunteer impact measures training, I’m struck by some new takeaway that helps me better explain how to master this complex-but-valuable process.

Today, my takeaway is this: when you create impact measures on a matrix, one column stands out as the trickiest – and probably the most important – to complete.

That column has to do with (drum roll, please)...indicators.

Let me back up for a sec. Impact measures are created using a logic model. A logic model is a matrix that maps out all of the components needed to evaluate the effectiveness of a program.

The various columns within the matrix – activities, inputs, outputs, indicators, and so on – all lead toward the outcomes you actually want. It’s the outcomes that demonstrate exactly how your volunteers’ work served your clients or delivered on your organization’s mission.
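To make the matrix concrete, here’s a minimal sketch of a logic model as a plain data structure in Python, filled in for a hypothetical after-school tutoring program. The column names follow the model described above; every entry is invented for illustration.

    # A logic model as a simple data structure. The columns follow the
    # model described above; the entries for this hypothetical tutoring
    # program are invented for illustration.
    logic_model = {
        "inputs":     ["volunteer tutors", "reading materials", "meeting space"],
        "activities": ["weekly one-on-one tutoring sessions"],
        "outputs":    ["number of sessions held", "number of students tutored"],
        "indicators": ["change in reading-comprehension scores, pre vs. post"],
        "outcomes":   ["students reading at grade level by year's end"],
    }

    for column, entries in logic_model.items():
        print(f"{column}: {'; '.join(entries)}")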

You can’t create relevant outcomes without putting a lot of consideration into your indicators.

The indicator describes what improvement looks like so that you can set a quantifiable goal to work toward.

Here are some general examples of indicators for volunteer program impact (a sketch after the list shows how one of them becomes a quantifiable goal):

  • An increase in reading comprehension for an after-school program with volunteer tutors
  • A decrease in park litter for a conservation organization with volunteer cleanup groups
  • An increased number of job interviews per client for a homeless shelter with volunteer resume counselors
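To see how one of these becomes a quantifiable goal, here’s a minimal sketch for the tutoring example. The test scores and the 10-percent target are hypothetical; the point is that “an increase in reading comprehension” turns into a number you can track.

    # "An increase in reading comprehension" turned into a measurable
    # change in pre- vs. post-program test scores. All numbers here are
    # hypothetical.
    pre_scores  = [62, 70, 55, 68, 74]   # scores before tutoring began
    post_scores = [71, 78, 60, 75, 80]   # same students, after the program

    avg_pre  = sum(pre_scores) / len(pre_scores)
    avg_post = sum(post_scores) / len(post_scores)
    percent_change = (avg_post - avg_pre) / avg_pre * 100

    TARGET = 10.0  # hypothetical goal: a 10% average increase
    print(f"Average change: {percent_change:.1f}% (goal: {TARGET:.0f}%)")
    print("Goal met" if percent_change >= TARGET else "Goal not yet met")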

As simple as these examples may look, indicators are tricky to develop. Generally, we know intuitively that a volunteer role has a positive impact, but describing that impact in quantifiable terms requires some brain power.

Creating indicators forces us to answer this basic question: “How will we know that this volunteer role supports our goal?”

Take this real-life example, which cropped up during an impact measures training for a museum.

Like many museums, this one has a goal of deepening visitor engagement with the collections. Also, like many museums, this one works with volunteer docents. We might conclude that docents naturally deepen visitor engagement due to their knowledge of the collections and the tours they offer.

But how do we prove that with data? What’s the indication that docent tours are actually deepening visitor engagement?

Developing indicators led to an extended conversation. We had to break apart the workings of a typical tour and figure out exactly which results demonstrate engagement in a quantifiable way.

It turned out that there were multiple possible indicators for visitor engagement. The group had to choose the one with the fewest complications.

Here were some of the options; a sketch after the list shows how each might be computed from tour records:

Indicator 1: an increase in average tour time.

  • You might argue that a longer tour means visitors are more engaged because they’re asking questions. Then again, one long-winded visitor could extend the Q&A but alienate other tour members. Plus, it might get complicated to measure the number of people asking questions and the number of questions asked.

Indicator 2: an increase in tours that stick with a 30-minute time limit.

  • On the other hand, a tour that sticks to a 30-minute format might deepen engagement by piquing visitor interest without monopolizing their time. For this choice, though, you might need some advance data on the connection between tour length and the number of tours that fill to capacity.

Indicator 3: a decrease in the average number of tour drop-outs.

  • Perhaps the best signal of engagement is that visitors remain with the tour until the very end, the presumption being that un-engaged visitors would wander away. Then again, visitors may drop off a tour because the volunteer was an uninspiring speaker or not well-prepared to answer questions.
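To make the trade-offs concrete, here’s a minimal sketch of how all three candidates might be computed from tour records. The record format and the sample numbers are hypothetical; assume each tour logs its length in minutes, its starting headcount, and its finishing headcount.

    # Computing the three candidate indicators from tour records.
    # The record format and sample data are hypothetical.
    tours = [
        {"minutes": 34, "started": 12, "finished": 11},
        {"minutes": 28, "started": 15, "finished": 15},
        {"minutes": 45, "started": 10, "finished": 6},
        {"minutes": 30, "started": 14, "finished": 13},
    ]

    # Indicator 1: average tour time.
    avg_time = sum(t["minutes"] for t in tours) / len(tours)

    # Indicator 2: share of tours that stay within the 30-minute limit.
    within_limit = sum(t["minutes"] <= 30 for t in tours) / len(tours)

    # Indicator 3: average number of drop-outs per tour.
    avg_dropouts = sum(t["started"] - t["finished"] for t in tours) / len(tours)

    print(f"Average tour time:       {avg_time:.1f} minutes")
    print(f"Tours within 30 minutes: {within_limit:.0%}")
    print(f"Average drop-outs:       {avg_dropouts:.1f} per tour")

Whichever indicator the group picks, the appeal of framing it this way is that it can be computed the same way, month after month, from data the program already collects.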

You can see how each indicator has its pros and cons. It’s our job to choose the indicator that comes closest to illustrating our goal, and then to reassess that choice over time.

Creating indicators, it turns out, is part art and part science.

But here’s what I love most about indicators – as we create them, we create a different kind of conversation.

Normally, we’re in action mode, attending to the recruiting, training and supervising of our volunteers. If we confer with colleagues – say about the way we train volunteers – it’s generally to solve an immediate problem.

When we work together on an indicator, we’re still solving problems. This time, though, our solutions are tied to the big picture. As we hash out how best to illustrate our objective, we end up discussing everything: training, scheduling, data collection, volunteer satisfaction, etc. We must revisit our way of doing business and document how it contributes to our larger goals.

Measuring strategic impact is not standard practice – at least not yet. But consider the potential: if we all committed to this process and these conversations, imagine the increase in credibility for volunteer management.

Perhaps our measure of progress would be this: an increase in Leaders of Volunteers who master the art of creating indicators.

Guest post by Elisa Kosarin. This post originally appeared on Twenty Hats.