IS480 Team wiki: 2012T1 Bumblebee Project Documentation User Acceptance Test
Revision as of 18:12, 9 October 2012
User Testing 1 (on 17 September 2012)
Objectives
The goals and objectives of usability testing:
• Record and document general feedback and first impressions.
• Identify any potential concerns to address regarding application usability, presentation, and navigation.
• Get feedback on the usefulness and accuracy of the functions developed.
• Verify that the developed system matches client expectations.
Methodology
User Testing Environment
- Computer platform: Intel Pentium processor
- Screen resolution: 1024 x 768
- Operating system: Windows XP
- Set-up required: computer date format (English (Australia)) of d/MM/YYYY

Participants
The participants will attempt to complete a set of scenarios presented to them and provide feedback regarding the usability and acceptability of the application.
1. Kevin Choy, SATS Airline Relations Manager - person-in-charge for this project
2. Goh Wei Xuan, SATS Airline Relations Manager

Procedure

Instructions
These instructions were given to our clients:
1. Each user will be accompanied by one facilitator.
2. Users are encouraged to verbalize their movements, purpose, and problems.
3. Facilitators will record mistakes made and questions asked by users during testing.
4. To start the test, click on the file named “START.bat” found in the folder named “SATS_Bumblebee_Beta_v5”.
5. All sample files needed for testing are found in: SATS_Bumblebee_Beta_v5/data
6. The database used to store imported data is also found in the ROOT folder.
7. Users are allowed to change their input(s) to verify data validity.
8. Users are to complete the tasks stated below. After completing each task, users have to answer the test questions pertaining to that task.

Tasks
These are the task descriptions given to clients:
1. Bootstrap/import file(s) - Import data from Excel files such as Flight Schedule Departure, Flight Schedule Arrival, and Staff Records into the application. The application will use these data for simulation in a later step.
2. Add staff costs - Record the various costs of hiring staff into the application.
3. Add uncertainties - Record the mean and standard deviation of the different uncertainties that will affect the initial schedule prepared by the application. Simulation period = 7 days (the number of days of data to be generated).
4. Run simulation - Run the simulation to start assigning staff to different job assignments.
5. View staff schedule (in Gantt chart) - View and compare a staff member's planned and actual working time.
6. Add airline requirements - Airlines have different requirements on the number of CSA and CSO needed. Record the individual requirements into the database; the input data will be used for simulation in a later step.
7. Generate result - View the generated result in PDF format.

Team Roles
Overall in-charge (Yosin Anggusti)
- Provides a training overview prior to usability testing
- Explains usability and the purpose of usability testing to participants
Facilitators (Glorya Marie, Suriyanti)
- Evaluate the application and the user's interaction with the application, rather than evaluating the user
- Observe and record user behavior and user comments
- Respond to participants' requests for assistance
Test observers (Yosin Anggusti)
- Silent observers
- Assist the data logger in identifying problems, concerns, coding bugs, and procedural errors
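The "Add uncertainties" task above records a mean and standard deviation per uncertainty, which the application then uses to simulate a 7-day period. A minimal sketch of that idea is shown below; the function name, units, and figures are illustrative assumptions, not the application's actual code.

```python
import random

def simulate_uncertainty(mean, std_dev, days=7, seed=42):
    """Draw one value (e.g. delay in minutes) per simulated day from a
    normal distribution defined by the recorded mean and standard
    deviation, clamped at zero since a negative delay is meaningless."""
    rng = random.Random(seed)  # fixed seed so a test run is reproducible
    return [max(0.0, rng.gauss(mean, std_dev)) for _ in range(days)]

# Hypothetical flight-delay uncertainty: mean 15 min, std dev 5 min.
delays = simulate_uncertainty(mean=15.0, std_dev=5.0)
print(len(delays))  # one value per day of the 7-day simulation period
```

In the real application these draws would feed the staff-assignment simulation rather than be printed directly.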
Reporting Results
Usability Metrics
Critical Errors
Critical errors are deviations of the actual result from the expected result. These errors cause the task to fail. Facilitators are to record these critical errors.

Non-Critical Errors
Non-critical errors are usually procedural: the participant does not complete a task by the most optimal means (e.g. excessive steps, initially selecting the wrong function, attempting to edit a non-editable field). The user may not detect these errors himself. Facilitators have to record these errors independently.

Scenario Completion Time
The time to complete each scenario, excluding subjective evaluation durations, will be recorded.
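The metrics above combine into a simple per-scenario record: a scenario fails if any critical error occurred, while non-critical errors and completion time are logged alongside. A small sketch of such a record, with hypothetical field names, might look like this:

```python
def summarize_scenario(critical, non_critical, completion_seconds):
    """Summarize one test scenario. A scenario passes only when no
    critical error (a deviation from the expected result) was recorded;
    non-critical errors and completion time are kept for reporting."""
    return {
        "passed": critical == 0,
        "critical_errors": critical,
        "non_critical_errors": non_critical,
        "completion_time_s": completion_seconds,
    }

# e.g. two excessive-steps mistakes, but the task still succeeded
result = summarize_scenario(critical=0, non_critical=2, completion_seconds=95)
print(result["passed"])  # True
```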
Reporting Results
tba
Subjective Evaluations
Subjective evaluations regarding ease of use and satisfaction will be collected via questionnaires. Since there are only two participants, results from both will be combined or averaged where necessary, with each participant's answers contributing 50%. Not all sections were answered by both participants, so not every question carries the full 100% weight.
Comment(s):
- ‘Back’ button at simulation is missing
- Have a flow; not sure which button to select
- Need more instructions for first-time users

Comment(s):
- Must be more explicit on description
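The 50/50 weighting described above can be sketched in a few lines: each question is averaged over the participants who actually answered it, so a skipped answer simply leaves the other participant's score. The function and question names are illustrative assumptions, not the team's actual tabulation.

```python
def combine_scores(p1, p2):
    """Average two participants' questionnaire answers question by
    question, each contributing 50%. A question one participant skipped
    (absent or None) keeps only the other participant's score."""
    combined = {}
    for q in set(p1) | set(p2):
        answers = [s for s in (p1.get(q), p2.get(q)) if s is not None]
        combined[q] = sum(answers) / len(answers) if answers else None
    return combined

# Hypothetical 1-5 ratings; the second participant skipped "navigation".
scores = combine_scores({"ease_of_use": 4, "navigation": 3},
                        {"ease_of_use": 5})
print(scores["ease_of_use"])  # 4.5
```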
Reporting Conclusions
• The client was satisfied with the system; however, the presentation and navigation of the application can still be improved.
• The clients, especially the second participant, gained a better and clearer understanding of what the application delivers after testing it.
• There are critical errors in the logic/formula for staff utilization rate and staff working hours that can be fixed to increase the accuracy of the calculation.
• Non-critical errors will also be resolved.