IS480 Team wiki: 2012T1 6-bit Project Management UT1
- 1 Schedule
- 2 Testing
- 3 Milestones
- 4 Schedule Metric
- 5 Bug Metric
- 6 Risk & Mitigation
User Testing 1
User Test 1 focuses on functionality and usability testing of the system, with coverage centred on the forum functions.
The purpose of the test is to have neutral testers verify that the system functions according to design and to surface any bugs or anomalies. Click data, which records each user's number of clicks and the time taken to complete a task, is also collected to understand the user experience and for comparison with future tests.
A total of 20 testers attended the User Test: 60% (12) male and 40% (8) female, representing various schools in SMU, with SIS students in the majority. No stratification was applied, and testers were selected without any intended bias. Most testers were observed to be users of the Chrome and Firefox web browsers.
No test grouping was employed in this test.
Testers were invited to attend the User Test session and required to bring their own laptops. They were briefed on the purpose of the test and given a short description of the system objective of Chapalang!, then provided with an instruction sheet for a guided test experience. Testers performed a series of system tasks based on a test case covering all the system features and use cases. Thereafter, they answered Yes/No binary questions, with an open-ended textbox appended after each question for details on any bugs encountered or suggestions for improvement. While the primary method of testing relies on direct user experience and feedback, we also employed a secondary method that collects results indirectly: the number of clicks each user made, together with the click coordinates or link URL and the timestamp of each click, was captured for analytical purposes.
Based on the above results, it is reasonable to conclude that the system is above satisfactory on functionality for most testers, while there is some room for improvement, especially on intuitiveness.
There were also 4 reported bugs and 19 recommendations for improvement. The top 10 most frequently mentioned or most important recommendations are appended below.
Click Data Analysis
Additionally, click data of each test session has also been collected and analysed.
Based on the statistics illustrated in the box plot above, the median number of clicks per tester to accomplish a task ranges from 1 to 8, with an overall median of 3 clicks. For reference, we take 3 clicks per task as a benchmark, to be compared against subsequent test sessions to observe whether there is any improvement.
The box plot above also shows the median time spent to accomplish a task: the time taken ranges from 4 to 18 seconds, with an overall median of 10 seconds. We likewise take 10 seconds per task as a benchmark for comparison with subsequent test sessions.
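As an illustration, the two benchmark medians can be computed directly from the raw click log. The record format and figures below are hypothetical stand-ins, not the actual UT1 data:

```python
from statistics import median

# Hypothetical click-log records: (tester_id, task, clicks, seconds).
# Field names and values are illustrative only.
click_log = [
    ("T01", "Step 1", 2, 6),
    ("T01", "Step 4", 8, 18),
    ("T02", "Step 1", 1, 4),
    ("T02", "Step 4", 6, 15),
]

clicks_per_task = [rec[2] for rec in click_log]
seconds_per_task = [rec[3] for rec in click_log]

# Candidate benchmarks for comparison in later test sessions.
print(median(clicks_per_task))
print(median(seconds_per_task))
```

The same computation applied to the full UT1 log yields the 3-click and 10-second benchmarks quoted above.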
A further drill-down into the click data shows that most testers take more clicks to accomplish Step 4 of the instructions, in which testers are required to subscribe to a forum. From this we infer that testers are unable to easily find the Subscribe to Forum button. This finding is aligned with a common piece of feedback: that we should consider replicating the subscription button in a more intuitive location on the webpage.
Schedule metric values are calculated every iteration to track project progress. The values fall into 5 different bands, each with its own action plan. The acceptable range is 90% to 110%, offering some buffer for natural inaccuracy between forecasting and execution.
Total Schedule Metric Value = Planned no. of days taken (P) / Actual no. of days assigned (A) × 100%
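A minimal sketch of this calculation, assuming the planned and actual day counts are known per iteration (the example figures are hypothetical):

```python
def schedule_metric(planned_days, actual_days):
    """Schedule metric value = P / A x 100%."""
    return planned_days / actual_days * 100.0

def within_acceptable_range(metric):
    # 90% to 110% is the acceptable buffer stated above.
    return 90.0 <= metric <= 110.0

# Hypothetical iteration: 18 days planned against 20 actual days.
value = schedule_metric(planned_days=18, actual_days=20)
print(value)
print(within_acceptable_range(value))
```

A value below 90% or above 110% would trigger the corresponding action plan for that band.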
Bug Log
Bug logging for Chapalang! takes the direction of being practical and easily monitored from both macro and micro perspectives. Whenever a bug is found, a new row is entered with the following data:
- Index number
- Bug description
- Found by
- Found date
- Expected solve-by date
- Bug severity
- Owner of the function
- Fixed date
- Closed by (Tester)
- Close date
- Additional comments
Bugs are classified into 3 categories of complexity: easy, moderate, and hard, assigned 1, 5, and 10 points respectively; a lower total score is better.
Total Points for Each Iteration = Σ Points of the Bugs in each iteration
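The tally above can be sketched as follows; the severity-to-points mapping follows the stated scheme (easy = 1, moderate = 5, hard = 10), while the sample week of bugs is hypothetical:

```python
# Points per complexity category, as defined in the bug metric.
BUG_POINTS = {"easy": 1, "moderate": 5, "hard": 10}

def iteration_bug_score(bug_severities):
    """Total points = sum of the points of the bugs in the iteration."""
    return sum(BUG_POINTS[severity] for severity in bug_severities)

# Hypothetical week: two easy bugs, one moderate, one hard.
weekly_bugs = ["easy", "easy", "moderate", "hard"]
print(iteration_bug_score(weekly_bugs))
```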
After assigning each bug points according to its complexity, we track the total bug score at the end of each week and decide whether any action needs to be taken. The following is the action plan for our bug metric: