IS480 Team wiki: 2017T1 Ravenous User Testing 2

About UAT 2

  • Date: 18th September 2017, Monday
  • Time: Session 1: 10 AM - 11 AM; Session 2: 11 AM - 12 PM; Session 3: 2 PM - 3 PM; Session 4: 3 PM - 4 PM
  • Duration: Approximately 60 minutes per user
  • Venue: Public Service Division Office, 100 High Street, The Treasury
  • Number of participants: 16 (4 NEA, 6 PSD, 2 MLAW, 2 MOF, 1 GovTech, 1 PMO)


Objectives

  • To gather feedback on the event organiser account registration process
  • To improve the usability of the event dashboard as well as the agency dashboard
  • To improve the readability of charts in both the event and agency dashboards
  • To check whether the behaviour of EvBot matches the user's commands
  • To gather feedback to improve the intents and behaviour of EvBot
  • To improve the usability of EvBot in carrying out tasks as an organiser
  • To improve the usability of EvBot in carrying out tasks as a participant
  • To gather feedback on the user experience of FaBot


Procedure

There are a total of 4 sessions. Sessions 1 and 2 test EvBot and FaBot; Sessions 3 and 4 test the Analytics Dashboard.

Before each test, Team Ravenous introduces the application to the testers. During each test, testers are asked to think aloud as they follow the instructions in each document, while Team Ravenous notes down their behaviours and any critical incidents. Participants leave their feedback at the end of each application test, after which Team Ravenous asks them questions about their behaviours and thought processes as they navigated the application.


Overall conclusion

[Image: Ravenous - UAT2Conclusion.png (overall conclusion of UAT 2)]


Conclusion of Analytics Dashboard User Testing 2

Test Plan

Click here for test instructions

Goals Reached

S/N | Goal | Reached? | Remarks
1 | User should find it easy to register for and log in to a Dashy account | Yes | A few users were initially unsure where to get the OTP from. Average score: 5.125; average time: 38s. However, 6/8 participants liked the design and flow of the login page.
2 | User should find it easy to understand the pie charts in the Event Report page | Yes | Most comments were about the colour scheme; the team may need to revamp the colour scheme for the event report page.
3 | User should find the survey responses readable in the Event Report page | Yes | -
4 | User should find the charts in the overview page understandable | No | Although the charts scored an average of only 4.625 on ease of understanding, 7/8 participants answered the claim-rate question correctly and 8/8 answered the active-rate question correctly.
5 | User should find the chart descriptions helpful on the overview page | No | The average score was 4.25.
6 | User should find the charts on the overview page readable | Yes | The average score was 5. Participants suggested using traffic-light colours to indicate claim rate and active rate, and combining the headers for active rate and claim rate, since the words are currently repeated.
7 | User should find the charts in the group page understandable | No | All participants answered all questions correctly, except for one who answered "Which date has the highest group daily active?" wrongly. The average score was 4.625. Participants suggested exportable data, data labels for graphs, and legends for the group engagement charts.
8 | User should find the chart descriptions helpful on the group page | No | The average score was 4.67.
9 | User should find the charts on the group page readable | No | The average score was 4.625.
10 | User should find the charts in the content page understandable | No | The average score was 3.75, although more than 75% of participants gave correct answers. Participants commented that the stacked bar chart is hard to understand, but when prompted for suggestions felt it is currently one of the best options. Participants were unaware of the filter, the date format was not standardised, and some wanted a quick summary of the data.
11 | User should find the chart descriptions helpful on the content page | Yes | The average score was 5.
12 | User should find the charts on the content page readable | No | Suggestions were made about improving the colour scheme. The average score was 4.625.
13 | User should find the tables in the bot metrics page understandable | No | The average score was 4.375.
14 | User should find the table descriptions helpful on the bot metrics page | No | The average score was 3.8.
15 | User should find the tables on the bot metrics page readable | Yes | A few participants found them very confusing and suggested some form of categorisation for a fast review, but the average score was 5.125.

Key Findings

Function | Observations/Users' Comments | Changes to be made
Event report | Users suggested a more vibrant, higher-contrast colour scheme | Improve the colour scheme for the event report pie chart
Overview | Users wanted thresholds shown in traffic-light colours: red for poor, orange for normal, green for healthy. This would help them in their reporting | Implement threshold limits for the liquid charts (see the sketch after this table)
Overview, group, and content pages | Some charts require users to figure out what they mean | Add legends to all charts
Bot metrics page | The naming of the metrics is a little confusing | Better categorisation of the bot metrics
Event report | When there are many events, users need to find their events quickly | Add event report names and dates
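The report does not specify the threshold values for the traffic-light colours, so the following is only a minimal sketch of the requested change, assuming hypothetical cut-offs of 30% and 70%; the function name rateColour is illustrative, not the team's code.

```typescript
// Hypothetical traffic-light thresholds for the overview liquid charts.
// The cut-offs (30% / 70%) are assumptions; the report states no values.
type TrafficLight = "red" | "orange" | "green";

function rateColour(rate: number): TrafficLight {
  // `rate` is a fraction in [0, 1], e.g. a claim rate or active rate.
  if (rate < 0.3) return "red";    // poor
  if (rate < 0.7) return "orange"; // normal
  return "green";                  // healthy
}

console.log(rateColour(0.45)); // "orange", rendered on the liquid chart
```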

Overall Results/Comments

  • 9 out of 15 goals were not met. Of the 9 unmet goals, 5 had an average score above 4.5 out of 6 (a 75% satisfaction rate).
  • As the goals on each page are inter-related (for example, the content page goals cover readability, understandability, and so on), users who experienced something negative earlier were likely to give lower scores to subsequent questions.
  • Despite not meeting the goals, participants tended to give correct answers to all the questions we asked. Coupled with the recurring feedback on UI/UX, we believe that further UI/UX improvements can resolve these problems.
  • Account registration via OTP was a unique feature our team came up with to work around the limitations of Workplace, and we consider it a success: 6 out of 8 users liked it and the average score was 5.125/6. A minimal sketch of such a flow follows this list.
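The report does not detail the OTP mechanics, so the following is a minimal sketch of how such a registration work-around might look, assuming a six-digit code with a five-minute expiry; the names and the in-memory store are illustrative, not Dashy's actual implementation.

```typescript
import * as crypto from "crypto";

// Minimal OTP sketch (all names are illustrative). An in-memory map
// stands in for a real store with persistence and cleanup.
const pendingOtps = new Map<string, { otp: string; expiresAt: number }>();

function issueOtp(email: string): string {
  const otp = crypto.randomInt(100000, 1000000).toString(); // 6-digit code
  pendingOtps.set(email, { otp, expiresAt: Date.now() + 5 * 60 * 1000 });
  // The delivery channel (e.g. a Workplace bot message) is outside this sketch.
  return otp;
}

function verifyOtp(email: string, submitted: string): boolean {
  const entry = pendingOtps.get(email);
  if (!entry || Date.now() > entry.expiresAt) return false;
  if (entry.otp !== submitted) return false;
  pendingOtps.delete(email); // one-time use
  return true;
}
```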


Full Analytics Dashboard results can be found here

Conclusion of EvBot User Testing 2

Test Plan

Click here for test instructions

Goals Reached

S/N | Goal | Reached? | Remarks
1 | User should find the terminologies used in the bot and their respective uses understandable | No | Average score: 4.75. Some participants mentioned that they didn't know a slash had to be included to trigger the commands; it took them some time to understand.
2 | User should find it easy to manage questions as an organiser | No | Participants found accessing the questions straightforward and easy, but commented on the limited functionality. Quick-reply buttons can be slow to pop up after a message, causing participants to type "Yes" instead of clicking the "Yes" button. Adding 3 questions: average score 4.25, average time 290s; removing a question: average score 4.85, average time 43s; viewing questions: average score 5, average time 16.8s.
3 | User should find it easy to manage their events as an organiser | No | Some participants said clearer instructions were needed, e.g. "Easy, but abit lost on how to register, perhaps some instruction on how to 'first' check. Eg: type in event name to register". Two participants suggested a button to register an event instead of free text. Most participants had issues with registering an event, but were alright with closing an event and identifying the Event ID. Average scores: register event 3.25, close event 5.25, identify Event ID 5.125. Average times: register event 290s, close event 16.14s, identify Event ID 20s. 5/8 participants found that managing their events with EvBot was "much easier, the process is less tedious and brief".
4 | User should find it easy to check in to an event as a participant | Yes | Participants found the Event ID very long, and the keywords "register" and "check in" confusing. Average score: 5.125; average time: 34s.
5 | User should find it easy to respond to a survey as a participant | Yes | Answering question 1 normally: average score 5.5, average time 18.7s. Answering question 2 after a reminder: average score 5.57, average time 9.17s.
6 | User should find it easy to locate the Create Event button that directs them to the Workplace event creation page | Yes | The average score was 5.
7 | User should find it easy to register a Workplace event with EvBot | No | Average score: 3.25; average time: 71s.
8 | User should find it straightforward to add questions to an event | No | Average score: 4.25; average time: 287.5s (3 questions).
9 | User should find it easy to locate the Event ID of an EvBot event | Yes | Average score: 5.1; average time: 20s.
10 | User should find it easy to close an EvBot event | Yes | Average score: 5.25; average time: 15.5s.
11 | User should find it easy to remove a question from an EvBot event | No | Average score: 4.85; average time: 43s.
12 | User should find it easy to view created questions | No | Average score: 5; average time: 16.85s.
13 | User should find it easy to send a survey to participants | No | Average score: 5.37; average time: 19.5s.
14 | User should find it easy to check in to an event | Yes | Average score: 5.125; average time: 34s.
15 | User should find it easy to answer a survey | Yes | Average score: 5.5; average time: 18.75s.
16 | User should find it easy to complete the survey from a survey reminder | No | Average score: 5.57; average time: 9.14s.

Key Findings

Function | Observations/Users' Comments | Changes to be made
Register event | Participants were initially confused by the concept of linking a Workplace event with EvBot, and it took them quite some time to understand; they also fed this back to us | Change the keyword from 'register' to 'link'. The team will also explore other concepts, for example an "Event webhook"
Get Started | The bot's responses arrive too fast, so the quick-reply buttons pop up with a delay | Fix the quick-reply button delay
View event | Participants found the Event ID very long and tedious to type; some were worried about typing errors | Shorten the long Event ID
All | Participants felt that more buttons would make their experience more seamless and less error-prone | Add buttons for the various functionalities (a menu); a payload sketch follows this table
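Both the quick-reply delay and the request for a menu revolve around Messenger quick replies. As an illustration, the sketch below builds a Send API payload in the quick_replies shape the Messenger platform documented at the time; the menu options and the helper name are assumptions, not EvBot's actual code.

```typescript
// Builds a Messenger Send API message with quick-reply buttons attached
// to the text itself, so the buttons arrive with the message.
interface QuickReply {
  content_type: "text";
  title: string;
  payload: string;
}

function menuMessage(psid: string, text: string, options: string[]) {
  const quick_replies: QuickReply[] = options.map((title) => ({
    content_type: "text",
    title,
    payload: title.toUpperCase().replace(/\s+/g, "_"),
  }));
  // POST this object to the Send API endpoint, e.g.
  // https://graph.facebook.com/v2.6/me/messages?access_token=<TOKEN>
  return { recipient: { id: psid }, message: { text, quick_replies } };
}

menuMessage("<PSID>", "What would you like to do?", [
  "Link event",
  "Add question",
  "View questions",
  "Close event",
]);
```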

Overall Results/Comments

  • 10 out of 16 goals were reached. Among the 6 goals that were not met, 3 had a score of at least 4.75. All the goals related to participants were met, so our group feels the test was a success in terms of participants' goals.
  • However, improvements to the UI/UX of the organiser's view have to be made. From our observations, participants used the bot as an organiser correctly, and it does what an event organiser needs, but the experience was not seamless: they had to type for the majority of the time and indicated that some of that typing should not be required.
  • In line with the post-survey, they would prefer a mixture of NLP and buttons/menus to help them; a routing sketch of this approach follows this list.
  • In the post-survey, participants also indicated that the chatbot understood their messages and carried out the appropriate tasks (5.25 out of 6), but felt that the UI/UX could be improved (4.75 out of 6).
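A mixture of NLP and buttons typically means handling structured button payloads deterministically and sending only free text through intent detection. The sketch below is a hypothetical illustration of that routing; detectIntent is a stub standing in for whatever NLP service the bot uses, and the intent names are made up.

```typescript
// Hypothetical routing: button taps carry a structured payload and are
// handled deterministically; only free text goes through NLP.
type Intent = { name: string; confidence: number };

function detectIntent(text: string): Intent {
  // Stub: a real bot would call its NLP service here.
  if (/close/i.test(text)) return { name: "CLOSE_EVENT", confidence: 0.9 };
  return { name: "UNKNOWN", confidence: 0.0 };
}

function route(event: { payload?: string; text?: string }): string {
  if (event.payload) return event.payload; // button tap: no ambiguity
  const intent = detectIntent(event.text ?? "");
  return intent.confidence >= 0.7 ? intent.name : "SHOW_HELP_MENU";
}

route({ payload: "CLOSE_EVENT" });        // "CLOSE_EVENT"
route({ text: "please close my event" }); // "CLOSE_EVENT"
route({ text: "hmm" });                   // "SHOW_HELP_MENU"
```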


Full EvBot results can be found here

Conclusion of FaBot User Testing 2

Test Plan

Click here for test instructions

Goals Reached

S/N | Goal | Reached? | Remarks
1 | User should find the new commands (help and cancel) useful | Yes | Participants commented that we should have a consistent naming convention. The average score was 5.125, and 5 out of 8 participants found the commands useful.
2 | User should find it easy to remove a facility | Yes | Most participants commented that the task is easy to carry out. Average score: 5.25; average time: 24.5s. No errors were made.
3 | User should find it easy to search for available facilities | No | Participants found the date and time input format rigid. They also had difficulty finding the "search available facilities" button.

Key Findings

Function | Observations/Users' Comments | Changes to be made
Search facility | Participants cannot find the search button easily | Improve the visibility of search in the menu: the menu will become View/Book/Search, with 'Delete booking' removed and replaced by Search
Make a booking | Participants found it very restrictive to type dates and times in only one particular format | Relax the date/time input format, which requires more advanced NLP (a parsing sketch follows this table)
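Relaxing the date/time format does not necessarily require building NLP from scratch; an off-the-shelf natural-language date parser is one possible approach. The sketch below uses the chrono-node library as an example; whether FaBot would adopt it is an assumption.

```typescript
import * as chrono from "chrono-node";

// Accepts natural phrasings instead of one rigid date/time format.
function parseBookingTime(text: string): Date | null {
  return chrono.parseDate(text); // null when nothing parseable is found
}

parseBookingTime("next Monday 3pm");  // -> Date
parseBookingTime("18/09/2017 14:00"); // -> Date
parseBookingTime("gibberish");        // -> null
```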

Overall Results/Comments

  • 2 out of 3 goals were met with great success; participants had almost no hiccups when using the bot.
  • The general feedback is that participants find all the current features useful.
  • However, the placement of the search button and the date/time input format can be improved for a better user experience.


Full FaBot results can be found here