Understand what early stage success looks like and you’ll be well placed to set the right expectations for your team and funders.
If you’re a funder, understand that success for these projects can’t be measured by number of users or volume of impact, or even by the size of the product delivered.
We asked nine of our current project cohort to define what success might look like when their grant ends.
Did they say “a finished product”? Did they say “a fully launched digital service”?
No. The majority described success as creating a product that generated user value, that was getting good feedback and demonstrated potential for future impact. Working software was also mentioned, but there was no expectation of a fully launched product.
Nor was there any mention of user numbers or volume of outcomes (these guys are on the ball!).
Indicators of future potential matter more than numbers
For anyone who’s ever worked for a charity this might sound odd. Output and outcome numbers are the mainstay of grant-funded projects.
But in the world of early stage digital products and services you can’t measure success like a traditional project. Numbers matter less than indicators of potential. Indicators that this piece of tech actually works and people are ready to use it.
Yes, the right metrics are important. But at this stage, while you’re still developing a product’s value, you’ll also be developing its key metrics. There’ll be plenty of time for traditional metrics once you have a statistically significant number of users.
For now, just chill and measure potential.
Why charities and funders need to understand this
Tech for Good projects measure progress and success differently. You need to understand this and communicate it to your management and your funders. You may even have to educate them in tech’s principles of user value and social value. If you don’t, their agenda will drive your project and you’ll end up with unhelpful delivery targets or building the wrong thing.
Instead your targets should be based around testing concepts and features, and learning what works. Learning really is the key unit of progress. Each test should help you validate if a feature, interface or user journey has potential, or if it should be ditched. This approach to learning and validation needs to drive your early stage measurements of progress and success.
Indicators of potential you should pay attention to
As you move through the cycle of testing, learning and iterating, look out for three main indicators.
1. Well tested, working software
At early stage this could be a beta version or a minimum viable product. That’s the smallest value-generating version of your service possible at that time. If you built it, tested it and found it worked, then it’s an indicator of progress. This even includes pre-beta versions like a paper prototype, a clickable wireframe, or even a concierge service. These versions are all important milestones.
If, on the other hand, the first time you expect to test is at the end of your project then software becomes a poor measure of progress and future potential. You can’t build a successful digital service without first learning what works. And you can’t learn what works if you haven’t tested smaller and simpler versions of the software.
2. Measured user value
This is the most powerful indicator of future potential: that you’re on the right track and unlikely to fail.
Gauge user value by watching what people do when they test your product, seeing how they engage, behave and react to the experience. Carry out usability testing and ask them to describe their thoughts as they go. This will give you real-time insight into what it’s like to use your product.
You can also ask people to do remote usability testing, or use Google Analytics to track people’s journey through your product. This can give you indications of where users find it useful and where they get confused or leave. Sometimes this generates more questions you can go back and ask them.
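The journey-tracking idea above can be sketched very simply. Here’s a minimal, hypothetical example in Python: given event logs exported from an analytics tool (the step names and data are made up for illustration), it counts how many distinct users reached each step of a funnel, so you can see where people drop out.

```python
def funnel_counts(events, steps):
    """Count distinct users reaching each funnel step.

    events: list of (user_id, step_name) tuples, e.g. exported analytics events
    steps:  ordered list of funnel step names
    Returns a list of (step_name, user_count) in funnel order.
    """
    users_per_step = {step: set() for step in steps}
    for user, step in events:
        if step in users_per_step:
            users_per_step[step].add(user)
    return [(step, len(users_per_step[step])) for step in steps]


# Hypothetical event log: three users land, two sign up, one completes.
events = [
    ("alice", "landing"), ("alice", "signup"),
    ("bob", "landing"),
    ("carol", "landing"), ("carol", "signup"), ("carol", "complete"),
]
print(funnel_counts(events, ["landing", "signup", "complete"]))
```

A sharp drop between two adjacent steps is exactly the kind of “where do users get confused or leave?” signal the paragraph above describes, and a prompt for follow-up questions in your next round of user interviews.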
3. User feedback and sentiment
It can be useful to ask people what they think of your idea or product. Or ask them to try your prototype, then complete a questionnaire or a user interview. That way they can reflect on their experience and offer insights that might otherwise have been missed. Their level of enthusiasm can be a good measure, and if you ask them why they feel the way they do, the feedback becomes richer and more useful.
However, you should also beware of the ‘Granny Factor’. If you asked your granny for feedback, would they be honest? Could you be sure their feedback was reliable? Most people prefer to give positive feedback rather than be critical. What we say isn’t always what we mean. Liking is not the same as signing up and using. So exercise caution with what people say about your product, and never make a decision based solely on what people say they want.
Be successful. Measure potential first.
That’s it. Set targets that reflect these measures of success. Educate your team. Educate your funders. Then go for it.