We’ve claimed that our managed beta tests achieve an average active participation rate of over 90%, a figure almost unheard of in beta testing, where participation rates average around 30%. While we’ve covered many of the tactics and tools we use to reach this level, we’ve never published a definition of exactly what this benchmark means.
When we say active participation, we mean that the user has met the expectations we’ve set throughout the beta test. While these expectations differ based on the product and the objectives of the test, they usually include completing all assigned surveys (commonly one per week), regularly logging in to our feedback portal, and engaging in other assigned activities. For many projects we incorporate daily journals and user forums as ongoing feedback channels, with other situational activities assigned throughout the project. These can include completing tasks, contributing to wiki content, or attending one-on-one or group calls.
While reporting bugs and submitting feature requests are common activities in beta, they are typically not requirements, as not all testers will encounter bugs or have ideas to contribute. That said, the other activities mentioned above are designed to increase engagement and, in turn, uncover more of this type of feedback.
It’s important to note that if a tester makes meaningful attempts to participate but is blocked by a show-stopping bug, this still counts as participation. This is why we stress that products should be “beta-ready” prior to starting your test. For more information on beta readiness, download our whitepaper.
Understanding what participation means in your test is important both for setting tester expectations and for incentivizing your testers. Your testers need to know what is expected of them so they can deliver, and you need a clear understanding of the participation requirements so you can plan for and provide the proper incentives to every tester who meets them.
As with all aspects of beta, however, you need to strike a balance with participation expectations. Requiring too much of your participants will pressure them and burn them out, resulting in less relevant feedback and less insight into the true user experience. At the same time, expecting too little could mean you don’t obtain the feedback you need.
Hopefully this insight into how we define active participation will help you prepare for your next test. Please let us know how you define and quantify participation in the comments below. For more best practices that increase participation, check out our participation eBook.