IS480 Team wiki: 2015T1 Vulcan Midterm

<!-- sub navigation bar end -->

==<div style="background: #6CACFF; padding: 15px; font-weight: bold; line-height: 0.3em; text-indent: 0px; font-size:20px; font-family:helvetica"><font color= #FFFFFF>Project Progress Summary</font></div>==
  
[[File: IS480_Vulcan_Mid_Term_Presentation.pdf | Team Vulcan's Mid Term Presentation]]

===Highlights of Project:===

* LiveLabs relocated its servers on 5 October 2015, two days before the Midterm presentation
* The LiveLabs servers were overloaded (data increasing by 1 GB/minute), which appeared to have been caused by us
* The API level of the phone borrowed from the school was too low for our development
==<div style="background: #6CACFF; padding: 15px; font-weight: bold; line-height: 0.3em; text-indent: 0px; font-size:20px; font-family:helvetica"><font color= #FFFFFF>Project Management</font></div>==

===Project Status:===

[[File: MidTerm Progress.PNG|link=]]

[[File: Vulcan Scope midterm.png|center|link=]]

<br>

Please refer to [[Media:Vulcan PlannedIteration v3.pdf| Planned vs Actual Tasks Metrics]] for the detailed breakdown of our individual tasks.
===Project Schedule (Plan Vs Actual):===

<h3> Planned Schedule </h3>

[[File: Vulcan Schedule timeline acceptance.png|link=]]

<h3> Actual Schedule </h3>

[[File: Vulcan Schedule timeline midterm.png|link=]]
===Project Metrics:===

<h2>Schedule Metric Formula: (Estimated Days / Actual Days) x 100% </h2>

[[File: Vulcan_Schedule_Metric_Score.PNG|center|link=]]

<div style="padding-left: 50px;">
{| class="wikitable" style="background-color:#FFFFFF; margin: auto;width:90%; text-align:center; "
! style="background: #02367A; color: white; font-weight: bold; width:50px" | Iteration
! style="background: #02367A; color: white; font-weight: bold; width:50px" | Planned Duration (Days)
! style="background: #02367A; color: white; font-weight: bold; width:50px" | Actual Duration (Days)
! style="background: #02367A; color: white; font-weight: bold; width:50px" | Schedule Metric Score
! style="background: #02367A; color: white; font-weight: bold; width:250px" | Action
! style="background: #02367A; color: white; font-weight: bold; width:50px" | Status
|-
|2
|18
|32
|56.25%
|Team was behind schedule, due to the complexity of the tasks planned (Android app and smartwatch). <br>
Follow-up action: rescheduled the future iterations, deducting days from the buffer days.
|Completed
|-
|4
|18
|24
|75%
|Team was behind schedule, due to LiveLabs' server permission issues.<br>
Follow-up action: rescheduled the future iterations, deducting days from the buffer days.
|Completed
|-
|}
</div>
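The schedule metric above is a one-line computation; as a minimal sketch (the class and method names are our own, not part of the project code):

```java
public class ScheduleMetric {
    // Schedule metric = (estimated days / actual days) x 100%
    static double score(int estimatedDays, int actualDays) {
        return (double) estimatedDays / actualDays * 100.0;
    }

    public static void main(String[] args) {
        System.out.println(score(18, 32)); // iteration 2: 56.25
        System.out.println(score(18, 24)); // iteration 4: 75.0
    }
}
```

A score below 100% means the iteration took longer than planned, which is what triggered the rescheduling actions listed in the table.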
  
===Project Risks:===

These are the top risks we identified that materialised before Midterms.
We followed the mitigation steps listed and successfully managed the risks.

[[File:vulcan_Risk_midterm.PNG|center|link=]]
===Technical Complexity:===

==== Beeper Survey Creation: ====

[[File:AlarmManager.png|600px|center]]

Beeper surveys are created each day by an AlarmManager using the set method. The setWindow method was used initially, as it schedules an alarm within a given window of time. However, this did not ensure that participants received beeper surveys at random times, because the beeper surveys would go off close to the start of the window. To fix this problem, the randomisation of beeper timings was done beforehand in Java, since no AlarmManager method could deliver what the sponsor wanted: 3 random beeper surveys each day, each at a different time.
[[File:RandomTimingBeeper.png|600px|center]]

The waking period between the wake-up time and sleep time which the participant has input is divided into 3 blocks of equal duration. From there, 3 random numbers are generated, each at most the length of a block. These 3 numbers are added to the start time, the block 1 start time and the block 2 start time respectively, producing the 3 random times for the beeper surveys.
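The block-based randomisation described above can be sketched as follows (class and method names are our own; times are expressed in minutes since midnight for simplicity):

```java
import java.util.Random;

public class BeeperTimes {
    // Divide the waking period into 3 equal blocks and pick one random
    // offset within each block: the beeper times are spread across the
    // day but unpredictable within their block.
    static int[] randomTimes(int wakeMinute, int sleepMinute, Random rng) {
        int blockLength = (sleepMinute - wakeMinute) / 3;
        int[] times = new int[3];
        for (int block = 0; block < 3; block++) {
            int blockStart = wakeMinute + block * blockLength;
            times[block] = blockStart + rng.nextInt(blockLength);
        }
        return times;
    }
}
```

For a participant waking at 08:00 (480) and sleeping at 23:00 (1380), each block is 300 minutes long and each of the 3 times falls somewhere inside its own block.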
  
[[File:BeeperStart.png|600px|center]]

Android's AlarmManager does not allow a repeating alarm to fire at a different time every day, so it could not be used directly. Our solution was to create a repeating alarm that goes off at midnight if the participant's sleep time is before midnight, and at the sleep time if the participant's sleep time is after midnight. The repeating alarm calls the BeeperCreator class, which creates the 3 random beeper survey timings for the day.
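The trigger-time rule for this daily alarm is simple enough to state in code (a sketch with our own naming; sleep time is expressed in minutes relative to midnight, negative when the participant sleeps before midnight):

```java
public class AlarmTrigger {
    // Fire at midnight (0) when the participant sleeps before midnight,
    // otherwise at the sleep time itself, so the day's 3 beeper timings
    // are only generated after the previous day has fully ended.
    static int dailyTriggerMinute(int sleepMinuteRelativeToMidnight) {
        return Math.max(0, sleepMinuteRelativeToMidnight);
    }
}
```

This is only the scheduling rule; the actual repeating alarm registration is done through Android's AlarmManager.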
  
==== Dynamic generation of survey elements: ====

Normally, to implement a dynamic list of content, we would use the Android ListView to display the elements on the activity page. ListView is useful for creating a scrolling list of interactable elements and is efficient to implement. However, because ListView recycles elements that move out of view when scrolling, it is difficult to retain user-entered information linked to an element: the information would be inherited by the "new" element that comes into view.

[[File:Slidercreation.png|800px|center]]

To avoid this, we implemented ScrollView instead. ScrollView is useful for showing a scrolling page of static elements, but it is not usually used to generate a dynamic list. The implementation shown above appends a layout "fragment" to the current ScrollView to represent a single survey question element. We populate the question fields for each question element and attach it to the view, yielding a scrolling view that displays the questions required for this particular survey. All question elements stay active in memory and are set up to collect user input, as indicated in the implementation below.
[[File:Slidersaving.png|800px|center]]

==== Database structure: ====

[[File:LDiagram.jpg|600px|center]]

As can be seen in the Logical Diagram above, our database implementation includes tables which are generated dynamically whenever a researcher adds a new study. The table names reflect the name of the new study, and together the tables contain the survey configuration and result data for that study. This isolates the data collected for each study so that studies do not interfere with each other or mix. Furthermore, results are separated into different tables such as "rfk_beeper_result_*program name*" and "rfk_session_result_pause_*program name*", where "program name" is the name of the study. This lets the researcher retrieve each type of data separately and allows future development to remove or add data types without modifying the existing tables.
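The per-study table naming described above could be derived along these lines (a sketch only; the helper and its sanitisation rule are our own, and the actual CREATE TABLE statements are not shown):

```java
public class StudyTables {
    // Build a per-study table name such as "rfk_beeper_result_my_study"
    // from a fixed prefix and the researcher-supplied study name.
    static String tableName(String prefix, String studyName) {
        // Normalise the study name into a safe SQL identifier fragment.
        String safe = studyName.toLowerCase().replaceAll("[^a-z0-9]+", "_");
        return prefix + "_" + safe;
    }
}
```

For example, a study named "My Study" would yield "rfk_beeper_result_my_study" and "rfk_session_result_pause_my_study".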
  
==<div style="background: #6CACFF; padding: 15px; font-weight: bold; line-height: 0.3em; text-indent: 0px; font-size:20px; font-family:helvetica"><font color= #FFFFFF>Quality of product</font></div>==

===Security:===

User details, specifically login details which could be personally identifying, are separated from demographic information about the user and from their result data. Therefore, in the event that a study participant asks to be removed from the program along with all identifying details, they can be removed from the database while their study data is retained anonymously, protecting their privacy.

Furthermore, to ensure that user data is not proliferated, only the creator of a study can access the study, modify its details and, more importantly, retrieve result and demographic data about its participants. Other researchers cannot access the study; allowing this could breach privacy, as participants may have given permission to use their results only to the owning researcher. For administrative purposes, any user with administrator rights can also access all studies and their data, as a representative of the Refokus system.

===Scalability:===
To provide scalability and flexibility in researcher-created studies, the creation of a study can include an unlimited number of text- or slider-based questions for post-session and periodic beeper surveys. This gives researchers the ability to customise their surveys extensively, accommodating virtually any data point a researcher may wish to collect through survey data.

Furthermore, survey questions and session-specific podcasts can be updated while a study is active and existing users have partial progress. Completed sessions that receive updates are not repeated for participants, but any sessions not yet completed are updated to the latest attributes set by the researcher. Collected data reflects the survey version actually completed by the participant, so no data is lost from either the old or the new version of the study.
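The unbounded question list can be pictured as a simple data model (all names here are hypothetical; the real configuration lives in the per-study database tables described earlier):

```java
import java.util.ArrayList;
import java.util.List;

public class SurveyConfig {
    enum QuestionType { TEXT, SLIDER }

    static class Question {
        final String prompt;
        final QuestionType type;
        Question(String prompt, QuestionType type) {
            this.prompt = prompt;
            this.type = type;
        }
    }

    private final List<Question> questions = new ArrayList<>();

    // No upper bound is imposed on the number of questions per survey.
    void addQuestion(String prompt, QuestionType type) {
        questions.add(new Question(prompt, type));
    }

    int questionCount() {
        return questions.size();
    }
}
```

Because the question list is open-ended, researchers can add as many text or slider questions as their study design requires.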
===Reliability/Availability:===

To improve availability of service, users can download the podcast for their next session ahead of time, even if the imposed daily limit prevents them from starting that session yet. In the process, the mobile app also updates the survey questions used for subsequent post-session and beeper surveys. Sessions can be carried out without internet connectivity, with result data stored after the session until connectivity is restored.
 
  
 
===Intermediate Deliverables:===

{| class="wikitable" style="background-color:#FFFFFF;"
|-
! style="color:white; background-color:#02367A;" | Stage
! style="color:white; background-color:#02367A;" | Specification
! style="color:white; background-color:#02367A;" | Modules
|-
|rowspan="2"| Project Management
|| Minutes (Sponsor, Supervisor, Team)
||[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Minutes Minutes]
|-
|| Metrics
||[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Metrics Schedule, Bug, Change Management Metrics]
|-
|rowspan="2"| Requirements Gathering
|| Design Documents
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Design_Documents Scenario, Storyboard, Navigation Diagram, Prototype]
|-
|| Market Research
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Market_Research Market Research]
|-
|rowspan="3"| Analysis
|| Use Case
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Documentation Participant & Researcher Use Cases]
|-
|| Business Process Diagram
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Documentation Business Process Diagram]
|-
|| Logical Diagram
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Documentation Logical Diagram]
|-
|rowspan="2"| Testing
|| User Testing
||[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_User_Test User Test]
|-
|| Test Plans
||[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Test_Cases Test Cases]
|}
 
  
 
===Deployment:===

We have launched the alpha version of our mobile application on the Google Play Store; the instructions for alpha testers to participate can be found here:
[[Media:IS480_Team_Vulcan-Pilot_Test_Instruction.docx| Instructions to download]]

We have also deployed our web application to the LiveLabs web server (Hestia): [http://hestia.smu.edu.sg/tomcat/ReFokus/ ReFokus Web Application]
===Testing:===

''Number of User Tests:'' 3
<br>
''Tester Profile:''
<br>
Our testers have research backgrounds; specifically, they are research assistants currently pursuing PhDs in Psychology. With their experience, we gained valuable feedback on the creation of studies.
<br>
For more information about the user tests and their detailed results, please visit the link below:
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_User_Test User Test]
<br>
<br>
''Test Cases:''
<br>
For each iteration, we have functional test cases that exercise individual functions. Towards the end of each iteration, we do regression testing and go through the entire flow of the project to ensure all parts are working.
<br>
For the detailed test cases, please visit the link below:
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Test_Cases Test Cases]
<br>
[[File: vulcan_bugmetric.PNG|center|600px|link=]]
[[File:Vulcan_Bug_Report_3.png|center|600px|link=]]
<br>
Our bug metric score for iteration 5 was exceptionally high. This was the aftermath of User Tests 2 and 3, which usefully surfaced functional and UI bugs. Even though the bug score was well above the threshold of 10, we fixed all the bugs within the scheduled debugging time.
<br>
For the detailed bug reports, please visit the link below:
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki%3A_2015T1_Vulcan_Metrics Bug Metrics]

==<div style="background: #6CACFF; padding: 15px; font-weight: bold; line-height: 0.3em; text-indent: 0px; font-size:20px; font-family:helvetica"><font color= #FFFFFF>Reflection</font></div>==
 
 
[[File: Vulcan Reflection1.png|center|link=]]

[[File: Vulcan Reflection2.png|center|link=]]

[[File: Vulcan Reflection3.png|center|link=]]
Latest revision as of 07:39, 7 October 2015
