The Tech for Good Programme doesn’t fund solutions. It funds solution-finding projects. Understanding what this means in practice will help you apply for your first Tech for Good grant.
Samaritans had an idea. It helped them win funding from this programme. But then data analysis and user research changed their understanding of the problem. This in turn challenged their ideas and assumptions about the solution. We talked to Simon Stewart, Senior Product Manager, about how this happened.
Volunteers need help to handle demand spikes
Samaritans rely on volunteers to deliver support across their phone, SMS and email channels. Demand on each channel surges and dips. Some of these spikes and troughs are predictable, but others aren’t. How can Samaritans get better at meeting the need at a given time? How can they help volunteers respond when the need is greatest, using the channel that is most needed?
Principles guide the way forward
Samaritans are working with experienced Tech for Good cooperative Dev Society. Their approach is based on:
- A cross-functional team and end users - so that assumptions and hypotheses are generated in multiple places
- Good user research - to dig more deeply into the problem and assumptions
- Good data analysis - to understand the nature of demand surges and troughs
- Building then testing the smallest prototype possible - to demonstrate whether it’s the right thing (reducing the risk of building the wrong features!)
If you read this blog often then you’ll probably recognise this approach as Agile.
“We are trying to combine what people tell us they think they want with how people behave in relation to that thing. What they tell us helps us work out what to test. Testing it tells us how they actually behave when given that feature or solution”
Simon Stewart, Senior Product Manager, Samaritans
Data analysis and user research blow the project wide open
“Our assumption that ‘this is the solution to the problem’ has changed to ‘this is the problem we are trying to address and there are multiple ways of approaching it’.”
Simon Stewart
Samaritans thought they’d need to create a real-time dashboard showing demand levels to volunteers on shift. But when they analysed the data and ran user interviews, they realised what was needed was a more predictive model of need, with better ways of notifying volunteers about when to offer their time.
User interviews made them consider whether a virtual assistant might deliver a better UX for volunteers. Then they realised they’d need to learn whether it was best to present the information before, at the beginning of, or during a shift.
Would volunteers want notifications at home? Some said they would rather respond to demand on the day of need. Others wanted to be able to plan ahead more.
This opened up multiple options for how volunteers could receive the information: email notifications, red-amber-green displays on the rota, text messaging and so on.
Twin challenges become clear
When the research ended, Simon and project manager Tyler could describe the challenge ahead more clearly. They needed to:
- Predict demand surges
- Present that information to volunteers in a way that helps them make an informed decision about when to offer their time and how to use it when they are there.
This creates two technical challenges: to predict demand accurately and to deliver a solid user experience.
Next steps are to test their hypotheses about what will work best.
Testing loops ahead!
These principles mean the team will run a series of sprints or testing loops. They will follow these steps:
- Start with a hypothesis e.g. volunteers will engage well with SMS notifications of predicted demand every 48 hours
- Identify a test for it e.g. a manual demand surge SMS notification to volunteers
- Build a prototype to test e.g. the dev team analyse data every 48 hours and predict the level of demand over the next 48 hours, then inform the project team by email (a minimal sketch of this step follows the list)
- Test it with users - get it in front of them and observe their experience and reaction e.g. the project team manually sends an SMS with the predicted demand data to 12 volunteers, then follows up with short phone interviews to gauge reaction and experience 10 minutes, 1 hour and 3 hours afterwards
- Analyse, learn and decide - ask ‘do we need to learn more about this feature, or are we confident that we should now build and release it?’
- Begin again at Step 1
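To make that prototype step concrete, here is a minimal, hypothetical sketch in Python of what a 48-hour demand prediction could look like. Nothing here comes from Samaritans’ actual analysis: the function name, the hour-of-week averaging and the made-up contact volumes are all assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def predict_next_48h(history, now):
    """Predict hourly contact volumes for the next 48 hours.

    history: list of (datetime, contact_count) tuples of past hourly volumes.
    now: the datetime to forecast from.
    Returns a list of (datetime, predicted_count) pairs.
    """
    totals = defaultdict(int)
    samples = defaultdict(int)
    for timestamp, volume in history:
        slot = (timestamp.weekday(), timestamp.hour)  # hour-of-week bucket
        totals[slot] += volume
        samples[slot] += 1

    forecast = []
    for offset in range(48):
        future = now + timedelta(hours=offset)
        slot = (future.weekday(), future.hour)
        average = totals[slot] / samples[slot] if samples[slot] else 0
        forecast.append((future, average))
    return forecast

# Illustrative usage: pick out the busiest predicted hours so the project team
# can draft the manual SMS to volunteers (the notification itself stays manual
# in this testing loop).
if __name__ == "__main__":
    made_up_history = [(datetime(2019, 11, day, hour), 20 + (hour % 6) * 5)
                       for day in range(1, 29) for hour in range(24)]
    forecast = predict_next_48h(made_up_history, datetime(2019, 11, 29, 9))
    busiest = sorted(forecast, key=lambda pair: pair[1], reverse=True)[:5]
    for when, level in busiest:
        print(f"{when:%a %H:00} - predicted ~{level:.0f} contacts")
```

The point of keeping the prediction this simple is that it only needs to be good enough to test the hypothesis about notifications; if volunteers engage with the SMS, a more sophisticated model can come later.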
“Testing helps break through theoretical discussions by generating data. If you can present people with the data from testing then you can make more informed decisions.”
Simon Stewart
Rewards are waiting (don’t worry!)
Between now and Christmas, Samaritans are planning three testing loops. By the end of that period they will have a set of features that have been tested, some of which will already have been built and released.
But Simon thinks charities can get worried about taking a test-based approach to problems. They worry about how much it costs to run testing loops rather than jumping more directly to the solution.
But it’s more expensive to build the wrong thing. User research and testing generate much higher user involvement, and this helps you find out more quickly what is actually valuable to users. When you’ve generated enough value, the rest of your budget is freed up. That’s a financial reward.
“It’s a safe way to work because it means we aren’t investing lots of money upfront in something that hasn’t been tested. It helps us get through the theoretical debates. It’s a good way for organisations to reassure stakeholders that they are making evidence-based decisions.”
Simon Stewart
Summary
- Don’t focus on the solution. Instead focus on delivering a good solution-finding project.
- Do good user research and data analysis
- Set up some kind of research-build-test process that helps you test your hypotheses and increases the chances of building the right thing