August 16, 2016
Our Process for Mobile App Testing
As reliance on mobile devices increases, a growing number of agencies are developing mobile apps to provide citizens with easier access to their services. If those apps are unintuitive or hard to use, the app's goals aren't met. To make sure a new app provides the intended value, we turn to testing with real users. Mobile app testing creates a shared understanding of the gaps present in an application while offering concrete recommendations for closing them.
Mobile app user testing requires laying a solid foundation. This foundation includes identifying roles, establishing device testing benchmarks, writing scenario-based questions, and creating a testing environment.
To lay that foundation, keep these steps in mind:
- Create a strategy that follows the PDCA, or Plan–Do–Check–Act, approach.
- List specific roles and responsibilities.
- Create a checklist of documents to produce before and after the testing event.
- Produce scenario-based, interview-type questions.
- Identify pass and fail criteria for each scenario question (a minimal sketch of one way to record these follows this list).
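As an illustration of that last item, here is a minimal sketch of one way to record a scenario question together with its expected behavior and pass criteria. The field names and the sample scenario are hypothetical, not part of any standard template:

```python
# Hypothetical sketch: capturing a scenario question with explicit
# pass/fail criteria before the testing event. Field names and the
# sample scenario are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    question: str           # scenario-based, interview-type prompt
    expected_behavior: str  # what the app should do
    pass_criteria: str      # observable condition that counts as a pass

scenarios = [
    Scenario(
        question="You need to renew your permit. Show me how you would start.",
        expected_behavior="The user reaches the renewal form from the home screen.",
        pass_criteria="Form reached in three taps or fewer, without moderator help.",
    ),
]
```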
Make sure that each team member knows their role and what their contribution will be before, during, and after the mobile user testing event.
Step 1 – Plan
A master carpenter measures twice and cuts once. Planning is the most important step: it matters more than the testing sessions themselves, and more than the final results, because it drives the quality of those results. In fact, planning accounts for about 60% of the total project time.
How do we plan? Our team uses checklists that cover the room, the questions, the expected results, and the camera.
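A minimal sketch of what such a checklist might look like in code, assuming a simple all-items-checked readiness rule; the items and the `ready` helper are illustrative, not a real artifact from our process:

```python
# Hypothetical test-day checklist; the items mirror what our planning
# covers, and the names are illustrative.
checklist = {
    "room reserved and quiet": True,
    "scenario questions printed": True,
    "expected results documented": True,
    "camera charged and mounted": False,
}

def ready(items: dict) -> bool:
    """The session starts only when every item is checked off."""
    return all(items.values())

print(ready(checklist))  # False: the camera still needs attention
```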
Step 2 – Do
First things first: show up. On the day of testing, we're onstage. The spotlight is on us. Except that it isn't: the focus is on the subject and how they're using the device. Once we ask a question, we limit our verbal feedback so we don't influence the results. This doesn't mean we can't provide additional instruction if the subject is truly stumped, but it's best practice to let the subject do most of the talking and solve the problem themselves.
Step 3 – Check
Checking is where we compare our test results against the expected results. Testing is numbers-driven and pass/fail-driven. For example, we want to know whether the mobile device responded as expected. If it did, the scenario is marked as a pass; if it didn't, that scenario is marked as a fail. We then calculate pass and fail rates as percentages across all scenarios.
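As a rough illustration of that arithmetic, here is a minimal sketch that computes the pass rate as a percentage, assuming each scenario result is simply recorded as a pass (True) or fail (False):

```python
# A minimal sketch: pass rate as a percentage across all scenarios.
# Assumes each scenario result is recorded as True (pass) or False (fail).
def pass_rate(results):
    """Return the percentage of scenarios that passed."""
    if not results:
        return 0.0
    return 100.0 * sum(results) / len(results)

results = [True] * 7 + [False] * 3    # e.g., 7 of 10 scenarios passed
print(f"Pass rate: {pass_rate(results):.0f}%")        # Pass rate: 70%
print(f"Fail rate: {100 - pass_rate(results):.0f}%")  # Fail rate: 30%
```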
Step 4 – Act
The final stage is implementation. This is where hard data is gathered and final reports are prepared. At this point, we compare the expected behavior with the actual behavior, specifically reviewing the device's functional, operational, technical, and transitional behavior. Were there glitches with the device's operating system or the application? Did the general operation of the device affect the user positively or negatively? Did access to wireless internet affect the user's experience? These are all questions that must be evaluated and answered in our final reports, and we make sure to score this behavior with precision.
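To make that scoring concrete, here is a hypothetical sketch of rolling per-scenario results up into per-category scores for a final report. The four categories come from the step above, but the sample observations and the tally logic are illustrative assumptions:

```python
# Hypothetical roll-up of per-scenario results into per-category scores.
# Categories are from the process; observations and tallying are illustrative.
from collections import defaultdict

observations = [
    ("functional", True),    # e.g., renewal form submitted successfully
    ("functional", False),   # e.g., button unresponsive on one device
    ("operational", True),   # e.g., device performance was acceptable
    ("technical", True),     # e.g., Wi-Fi drop was handled gracefully
    ("transitional", True),  # e.g., screen-to-screen flow made sense
]

totals = defaultdict(lambda: [0, 0])  # category -> [passes, total]
for category, passed in observations:
    totals[category][1] += 1
    if passed:
        totals[category][0] += 1

for category, (passes, total) in totals.items():
    print(f"{category}: {passes}/{total} ({100.0 * passes / total:.0f}%)")
```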
Mobile user testing starts with a plan, and the plan will determine success or failure. If there isn't a solid plan in place, the results and the corrections are for naught. We've employed this process to provide mobile application testing to agencies that don't have this expertise in house. In the end, it's a process that helps our agencies identify and solve their mobile device and mobile application issues.