IS480 Team wiki: 2012T1 6-bit Project Management UT1

From IS480
6-bit logo.png
6-bit's Chapalang! is a social utility that connects people with friends and new friends
by offering a place for exchanging ideas and information on its public domain.



Planned Schedule

6-bit ScheduleDiagramOverview.png

Meeting Minutes

Team Meeting Minutes

  • Meeting Minute 1
  • Meeting Minute 2
  • Meeting Minute 3
  • Meeting Minute 4
  • Meeting Minute 5
  • Meeting Minute 6
  • Meeting Minute 7
  • Meeting Minute 8
  • Meeting Minute 9
  • Meeting Minute 10
  • Meeting Minute 11
  • Meeting Minute 12
  • Meeting Minute 13
  • Meeting Minute 14
  • Meeting Minute 15
  • Meeting Minute 16
  • Meeting Minute 17
  • Meeting Minute 18
  • Meeting Minute 19
  • Meeting Minute 20
  • Meeting Minute 21
  • Meeting Minute 22
  • Meeting Minute 23
  • Meeting Minute 24
  • Meeting Minute 25
  • Meeting Minute 26
  • Meeting Minute 27

Supervisor Meeting Minutes

  • Meeting Minute 1
  • Meeting Minute 2
  • Meeting Minute 3
  • Meeting Minute 4
  • Meeting Minute 5
  • Meeting Minute 6
  • Meeting Minute 7
  • Meeting Minute 8
  • Meeting Minute 9
  • Meeting Minute 10
  • Meeting Minute 11


Test Cases

Test Cases

Test Plans

Test Plan 1 on 17 September 2012
Test Plan 2 on 28 September 2012
Test Plan 3 on 19 October 2012
Test Plan 4 on 4 November 2012

User Testing

User Testing 1 User Testing 2 User Testing 3 User Testing 4

User Testing 1


Test Description

User Test 1 focuses on functionality and usability testing of the system, with coverage centred on the forum functions.

The purpose of the test is to let neutral testers verify that the system functions according to design and to spot any bugs or anomalies. Click data, which records each user's number of clicks and the time taken to accomplish a task, is also collected to understand the user experience and to serve as a baseline for comparison in future tests.

Testers Background

A total of 20 testers attended the User Test: 60% (12) male and 40% (8) female, representing various schools in SMU, with SIS students in the majority. No stratification was applied to the test users, and testers were selected without any intended bias. Most testers were observed to use the Chrome or Firefox web browser.


Test Groups

There is no test grouping employed in this test.

Test Procedures

Testers are invited to attend the User Test session and are required to bring their own laptops. They are informed of the purpose of the test and given a brief description of the objective of Chapalang!. They are then provided with an instruction sheet for a guided test experience. Testers perform a series of system tasks based on a test case that covers all the system features and use cases. Thereafter, testers answer Yes/No binary questions and, should they encounter any bugs or have suggestions for improvement, can fill in details in an open-ended textbox appended after each question. While the primary method of testing relies on direct user experience and feedback, we also employ a secondary method that collects results indirectly: the number of clicks each user makes, together with the click coordinates or link URL and the timestamp of each click, is captured for analysis.
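The indirect collection described above can be sketched as a simple event log. This is a hypothetical illustration, not the team's actual implementation; the `ClickEvent` fields mirror the data named in the text (coordinates, link URL, timestamp):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ClickEvent:
    tester_id: int
    x: int            # click x-coordinate on the page
    y: int            # click y-coordinate on the page
    link_url: str     # URL of the clicked link, empty if none
    timestamp: str    # ISO 8601 UTC timestamp of the click

def record_click(log, tester_id, x, y, link_url=""):
    """Append one click event to the in-memory log and return it."""
    event = ClickEvent(tester_id, x, y, link_url,
                       datetime.now(timezone.utc).isoformat())
    log.append(asdict(event))
    return event
```

Each tester's session then yields a list of timestamped events from which click counts and task durations can be derived.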

Test Instruction

Click Here to Download User Testing 1 Instruction

Test Results

Based on the above set of results, it is reasonable to conclude that the system is more than satisfactory on functionality for most testers, while there is some room for improvement, especially in intuitiveness.

4 bugs were reported and 19 recommendations for improvement were received.

The 10 most frequently mentioned or most important recommendations are appended below.

Click Data Analysis

Additionally, the click data from each test session has been collected and analysed.
Based on the statistics illustrated in the box-plot above, the median number of clicks a tester takes to accomplish a task ranges from 1 to 8, with an overall median of 3 clicks. For comparison purposes, we take 3 clicks per task as a benchmark against subsequent test sessions to observe whether there are any improvements.

Further statistics in the box-plot above cover the median time spent to accomplish a task: the time taken ranges from 4 to 18 seconds, with an overall median of 10 seconds. We likewise take 10 seconds per task as a benchmark for comparison with subsequent test sessions.
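The benchmark medians above can be computed directly from per-task measurements. The sample values below are hypothetical (the actual session data is in the box-plots); only the method is being illustrated:

```python
import statistics

# Hypothetical per-task measurements for one test session.
clicks_per_task = [1, 2, 3, 3, 4, 8, 2, 3]
time_per_task_s = [4, 7, 10, 12, 18, 9, 10, 11]

# Medians serve as the benchmarks compared across test sessions.
median_clicks = statistics.median(clicks_per_task)
median_time = statistics.median(time_per_task_s)
```

A later session's medians would be computed the same way and compared against these benchmarks to judge improvement.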

Additional Observations

Drilling further into the click data, we observed that most testers take more clicks to accomplish Step 4 of the instructions, in which testers are required to subscribe to a forum. From this we infer that testers cannot easily find the Subscribe to Forum button. This finding aligns with common feedback that we should consider moving the subscription button to a more intuitive location on the webpage.


6-bit schedule.png

Schedule Metric

Every iteration, schedule metric values are calculated to track project progress. They fall into 5 groups, each with its own action plan. The acceptable range is 90% to 110%, which offers some buffer for natural inaccuracies between forecasting and execution.

Total Schedule Metric Value = Planned no. of days assigned (P) / Actual no. of days taken (A) × 100%
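The metric and its acceptable range translate directly into a small check. This is a sketch of the calculation described above, with the 90%–110% band from the text; the function names are our own:

```python
def schedule_metric(planned_days, actual_days):
    """Schedule metric value as a percentage: P / A * 100."""
    return planned_days / actual_days * 100

def within_acceptable_range(value, low=90.0, high=110.0):
    """True if the metric falls inside the buffer band (inclusive)."""
    return low <= value <= high
```

A task planned for 9 days that takes 12 scores 75%, falling outside the band and triggering the corresponding action plan.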

6-bit schedulemetric.png

Bug Metric


6-bit BugMetric.png 6-bit BugLog.png

Bug Log: Click Here

Bug logging for Chapalang! takes the direction of being practical and easily monitored from both macro and micro perspectives. Whenever a bug is found, a new row is entered with the following data:

  • Index number
  • Bug description
  • Found by
  • Found date
  • Expected solve-by date
  • Bug severity
  • Status
  • Owner of the function
  • Fixed date
  • Closed by (Tester)
  • Close date
  • Additional comments
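A row of the bug log can be modelled as a single record whose fields mirror the list above. This is an illustrative sketch, not the team's actual spreadsheet schema; the default "Open" status is our assumption for a newly logged bug:

```python
def new_bug_entry(index, description, found_by, found_date,
                  expected_solve_by, severity, owner):
    """Create one bug-log row; fields mirror the list above.
    Dates are plain strings; close-out fields start empty."""
    return {
        "index": index,
        "description": description,
        "found_by": found_by,
        "found_date": found_date,
        "expected_solve_by": expected_solve_by,
        "severity": severity,
        "status": "Open",        # assumed initial status for a new bug
        "owner": owner,
        "fixed_date": None,
        "closed_by": None,       # tester who verifies the fix
        "close_date": None,
        "comments": "",
    }
```

The fix/close fields are filled in as the bug moves through its lifecycle, which keeps both the macro view (counts by status) and the micro view (per-bug history) available from the same log.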


Bugs are classified into 3 categories of complexity: easy, moderate and hard, assigned 1, 5 and 10 points respectively. A lower total score is better.

Total Points for Each Iteration = Σ Points of the Bugs in each iteration
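The weighted sum above is straightforward to compute. A minimal sketch, using the 1/5/10 weights from the text (function and constant names are our own):

```python
# Complexity weights from the bug metric: lower total is better.
POINTS = {"easy": 1, "moderate": 5, "hard": 10}

def iteration_bug_points(bug_complexities):
    """Total points for one iteration: sum each bug's complexity weight."""
    return sum(POINTS[c] for c in bug_complexities)
```

For example, an iteration with one easy, one moderate and one hard bug scores 1 + 5 + 10 = 16 points, which is then checked against the action-plan thresholds each week.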

6-bit BugMetricFormula.png

After assigning each bug points according to its complexity, we track the total bug score at the end of each week before deciding whether any action should be taken. The following is the action plan for our bug metric:

6-bit BugMetricFormula2.png

Risk & Mitigation

6-bit RiskDiagram.png