IS480 Team wiki: 2012T1 6-bit Project Management UT2
- 1 Schedule
- 2 Testing
- 2.1 Test Cases
- 2.2 Test Plans
- 2.3 User Testing
- 2.3.1 User Testing 2
- 3 Milestones
- 4 Schedule Metric
- 5 Bug Metric
- 6 Risk & Mitigation
User Testing 2
The objective of User Test 2 is to test the functionality and usability of the system. The coverage of the test focuses on the forum and basic marketplace functions.
The forum functions are re-tested to verify an improvement in user experience. The basic marketplace functions cover the complete flow from product display to payment. We want to find out whether the changes made after User Test 1 offer users an improved experience, and whether the improvement is consistent between the forum and marketplace functions.
A total of 46 testers attended the User Test, of which 61% (28) are male and 39% (18) are female, representing various schools in SMU, with SIS students being the majority. It is also observed that most testers are users of the Chrome and Firefox web browsers.
There is no significant difference between the testers' backgrounds in User Tests 1 and 2.
Two test groups were employed in this test.
Group A (Control Group) consists of 17 testers who also took part in User Test 1. The purpose is to test whether their returning experience with the system shows any improvement.
Group B (Test Group) consists of 29 new testers who have not participated in any of our previous user tests. They serve as a comparison group, to find out whether their experience differs from that of the returning testers in Group A.
Testers were invited to attend the User Test session and required to bring their own laptops. They were informed of the purpose of the test and given a brief description of the system objective of Chapalang!.
Subsequently, they were provided with an instruction sheet for a guided test experience. Testers were required to perform a series of system tasks based on a test case covering all the system features and use cases. Thereafter, testers filled in an open-ended textbox appended after each question to report any bugs encountered or suggestions for improvement.
Based on the survey responses, the results are positive, with most testers reporting a good level of comfort using our web application.
A total of 21 bugs and 69 recommendations for improvement were received. The following are the top 10 bugs reported.
The top 10 most frequently mentioned or most important recommendations are appended below.
Click Data Analysis (User Test 1 vs. User Test 2 – Forum Functions Only)
Additionally, click data from each test session has been collected and analysed, and compared against the results of User Test 1.
The box-plot above represents 3 sets of data comparing the number of clicks per task, for discussion forum functions only. UT1 represents the results from User Test 1, UT2A represents the results from Group A testers of User Test 2, and UT2B represents the results from Group B testers of User Test 2. For a fair comparison, the results from User Test 2 have been drilled down to data on forum functions only.
In User Test 2, the number of clicks a tester takes to accomplish a forum-related task ranges from 1 to 3, with a median of 2 clicks, down from a median of 3 clicks in User Test 1, and with a smaller variance. Additionally, there is no significant difference in the results between Group A and Group B testers.
Preliminarily, we can observe an improvement in the user experience of Group A testers between the 2 tests. The improvement can be broadly attributed to the changes made as well as the high learnability of the system interface design. However, this observation is not conclusive, and more data is required.
Additional statistics were computed: the median time spent to accomplish a forum-related task is 4 seconds for Group A testers and 5 seconds for Group B testers. Again, this is a significant reduction from User Test 1, where testers spent a median of 10 seconds on each task. Based on these findings, we can reasonably conclude that user experience improved between User Test 1 and User Test 2, which we attribute to the improvements made and the high learnability of the system. In addition, the improved user experience is shared by both Group A and Group B testers, suggesting that the improved system does not require much training or impose a steep learning curve.
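As an illustration of how these figures are derived, the sketch below computes the median clicks and median time per task for each group. It is a minimal sketch in Python, assuming the raw click logs are plain lists of per-task counts and timings; the numbers shown are placeholders, not the actual test data.

```python
from statistics import median

# Placeholder per-task click counts and timings (seconds) for each group;
# the real values come from the click data recorded in each test session.
clicks = {
    "UT1":  [3, 4, 2, 3, 5, 3],   # User Test 1, forum tasks
    "UT2A": [2, 1, 2, 3, 2, 2],   # User Test 2, Group A (returning testers)
    "UT2B": [2, 2, 1, 3, 2, 2],   # User Test 2, Group B (new testers)
}
seconds = {
    "UT1":  [10, 12, 9, 11, 10],
    "UT2A": [4, 5, 4, 3, 4],
    "UT2B": [5, 4, 6, 5, 5],
}

for group in clicks:
    print(f"{group}: median clicks = {median(clicks[group])}, "
          f"median seconds = {median(seconds[group])}")
```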
Click Data Analysis (Group A vs. Group B – Marketplace Functions)
Next, we study the difference in user experience between Group A and Group B testers on the marketplace functions, based on click data measuring the number of clicks per task and the time taken in seconds between tasks.
In the box-plot diagram above, UT2A refers to Group A testers of User Test 2, while UT2B refers to Group B testers. Each box-plot represents the data of a specific group of testers, with results computed from the number of clicks or the time taken, for the forum or marketplace functions.
Comparing marketplace functions, both Group A and Group B testers made a median of 2 clicks to accomplish each task. While Group B testers show a wider variance of up to 4 clicks, this can be broadly attributed to outliers, user experimentation, or some learning curve in getting used to the functions and the placement of interface objects.
The result when comparing the time taken is consistent with the preliminary conclusion drawn from the number of clicks per task. The median time taken by Group A and Group B testers for both forum and marketplace functions falls within the range of 4 to 5 seconds; the difference between the medians is insignificant.
Overall, the results are consistent across forum and marketplace functions, between testers from both User Tests and both test groups. They are also consistent with our earlier preliminary conclusion that the improvements made between the two User Tests have resulted in an improved user experience, and that there is a good level of learnability in the interface design.
While this test has its limitations, such as externalities like network performance, testers' computing habits, and each tester's response time, the macro results provide a reasonable sample for the objective of the test.
Schedule Metric
Every iteration, schedule metric values are calculated to track the project's progress. The values are broadly categorized into 5 groups, each with its own action plan. The acceptable range of values is 90% to 110%, which offers some buffer for the natural inaccuracies between forecasting and execution.
Total Schedule Metric Value = Planned no. of days taken (P) / Actual no. of days assigned (A) × 100%
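The sketch below shows this calculation in Python. The 90% to 110% acceptable range comes from the text above; the other category boundaries are illustrative assumptions, since the 5 action-plan groups are not reproduced here.

```python
def schedule_metric(planned_days: float, actual_days: float) -> float:
    """Total Schedule Metric Value = P / A x 100%."""
    return planned_days / actual_days * 100

def category(value: float) -> str:
    # Only the 90%-110% acceptable range is stated in the text;
    # the remaining boundaries are assumed for illustration.
    if value < 70:
        return "severely behind schedule"
    elif value < 90:
        return "behind schedule"
    elif value <= 110:
        return "acceptable"
    elif value <= 130:
        return "ahead of schedule"
    return "severely ahead of schedule"

value = schedule_metric(planned_days=18, actual_days=20)  # 90.0%
print(f"{value:.1f}% -> {category(value)}")               # acceptable
```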
Bug Metric
Bug logging for Chapalang! is designed to be practical and easily monitored from both macro and micro perspectives. Whenever a bug is found, a new row is entered in the bug log with the following data (a sketch of one such row follows the list):
- Index number
- Bug description
- Found by
- Found date
- Expected solve-by date
- Bug severity
- Owner of the function
- Fixed date
- Closed by (Tester)
- Close date
- Additional comments
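The following is a minimal sketch of how one such row might be modelled in Python, mirroring the fields listed above; the field names and types are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BugLogEntry:
    index: int                       # Index number
    description: str                 # Bug description
    found_by: str                    # Found by
    found_date: date                 # Found date
    solve_by: date                   # Expected solve-by date
    severity: str                    # Bug severity: "easy", "moderate" or "hard"
    owner: str                       # Owner of the function
    fixed_date: Optional[date] = None
    closed_by: Optional[str] = None  # Closed by (Tester)
    close_date: Optional[date] = None
    comments: str = ""               # Additional comments
```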
Bugs are classified into 3 categories of complexity: easy, moderate, and hard. The categories are assigned 1, 5, and 10 points respectively; a lower total score is better.
Total Points for Each Iteration = Σ Points of the Bugs in each iteration
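As a worked example of this formula, the sketch below (in Python, with severity labels assumed to match the three categories above) sums the points of the bugs logged in one iteration:

```python
POINTS = {"easy": 1, "moderate": 5, "hard": 10}  # points per complexity category

def iteration_score(bug_severities: list[str]) -> int:
    """Total Points for Each Iteration = sum of the points of its bugs."""
    return sum(POINTS[s] for s in bug_severities)

# e.g. two easy bugs, one moderate and one hard: 1 + 1 + 5 + 10 = 17 points
print(iteration_score(["easy", "easy", "moderate", "hard"]))  # 17
```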
After assigning each bug points according to its complexity, we track the total bug score at the end of each week before deciding whether any action needs to be taken. The following is the action plan for our bug metric: