IS480 Team wiki: 2015T1 Vulcan Final

Project Progress Summary

Team Vulcan's Mid Term Presentation: File:IS480 Vulcan Mid Term Presentation.pdf

Highlights of Project:

  • LiveLabs relocated its servers on 5 Oct 2015, two days before the Midterm Presentation
  • LiveLabs servers were overloaded (data increasing at 1 GB per minute); the root cause remains unknown
  • Delay in SMU IRB's approval and in the migration to the production server, Vesta
  • The production server was shut down for a brief moment during the Pilot Study phase

Challenges of Project:

  • Picking up mobile application development skills in Android
  • Understanding server administration and configuration, e.g. reverse proxy and session forwarding
  • Understanding the concept of mindfulness and data collection for research studies

Achievements of Project:

  • First undergraduate project to be launched on LiveLabs production server, Vesta
  • Successfully integrated a smart-wearable health vitals monitoring function into the mobile application for research purposes

Project Management

Project Status:

Final Progress.PNG
Note: Although we have achieved 100% completion of our project (based on the planned scope) and have conducted a handover session with our sponsor, we are still refining the user interface to provide a better user experience :)

Vulcan Scope midterm.png


Please refer to Planned vs Actual Tasks Metrics for the detailed breakdown of our individual tasks.

Project Schedule (Plan Vs Actual):

After midterms, we reviewed the planned schedule with our sponsor and supervisor and decided to drop some functions and reschedule, so that we could focus more on testing and ensure that the functions in the core and secondary scope work well. After midterms, all our iterations were 100% on schedule.

Planned Schedule

Vulcan Schedule timeline midterm.png

Actual Schedule

Vulcan Schedule timeline v8.png

Project Metrics:

Schedule Metric Formula: (Estimated Days / Actual Days) x 100%

Vulcan Schedule Metric Score.PNG
Iteration | Planned Duration (Days) | Actual Duration (Days) | Schedule Metric Score | Action | Status
2 | 18 | 32 | 56.25% | Team was behind schedule due to the complexity of the tasks planned (Android app and smart watch). Follow-up action: rescheduled the future iterations, deducting days from the buffer days. | Completed
4 | 18 | 24 | 75% | Team was behind schedule due to LiveLabs server permission issues. Follow-up action: rescheduled the future iterations, deducting days from the buffer days. | Completed
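As a quick worked check of the formula, iteration 2's score is 18 / 32 × 100% = 56.25% and iteration 4's is 18 / 24 × 100% = 75%. A minimal Java helper expressing the same calculation (illustrative only, not part of the project deliverables):

```java
// Illustrative helper for the schedule metric formula:
// (Estimated Days / Actual Days) x 100%
public class ScheduleMetric {
    public static double score(int estimatedDays, int actualDays) {
        return (estimatedDays / (double) actualDays) * 100.0;
    }

    public static void main(String[] args) {
        System.out.println(score(18, 32)); // iteration 2 -> 56.25
        System.out.println(score(18, 24)); // iteration 4 -> 75.0
    }
}
```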

Project Risks:

These are the top risks that we identified and that occurred during our project. We followed the mitigation steps we had listed and successfully managed the risks.

Vulcan Risk finals.PNG

Technical Complexity:

Beeper Survey Creation:

AlarmManager.png

Beeper surveys are created each day by an AlarmManager using the set method. The setWindow method was used initially, as it schedules the alarm within a given window of time. However, this did not ensure that participants received beeper surveys at random times, because the beeper surveys would go off close to the start of the window. To fix this, the randomisation of the beeper timings was done beforehand in Java, because no built-in method could achieve what the sponsor wanted: three beeper surveys each day, each at a different random timing.

RandomTimingBeeper.png

Three blocks of equal duration are created based on the wake-up time and sleep time that the participant has entered. Three random numbers are then generated, each with a maximum value equal to the length of a block. These three numbers are added to the start time, the block 1 time and the block 2 time to create the three random times for the beeper surveys, as sketched below.
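A minimal Java sketch of this block-based randomisation; the class and method names are ours for illustration, as the team's actual BeeperCreator code was passed to the sponsor directly:

```java
import java.util.Random;

// Sketch of the 3-block randomisation described above (illustrative names).
public class BeeperTimings {
    // wakeUpMillis and sleepMillis are offsets from midnight in milliseconds.
    public static long[] generate(long wakeUpMillis, long sleepMillis) {
        long blockLength = (sleepMillis - wakeUpMillis) / 3;
        Random random = new Random();
        long[] timings = new long[3];
        for (int i = 0; i < 3; i++) {
            // Each beeper fires at a random point inside its own block,
            // so the three timings always fall in different blocks.
            long blockStart = wakeUpMillis + i * blockLength;
            timings[i] = blockStart + (long) (random.nextDouble() * blockLength);
        }
        return timings;
    }
}
```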

BeeperStart.png

Android's AlarmManager does not allow a repeating alarm to fire at a different time every day, so it could not be used directly for this. Our team's solution was to create a repeating alarm that goes off at midnight if the participant's sleep time is before midnight, or at the sleep time if the participant's sleep time is after midnight. The repeating alarm calls the BeeperCreator class, which creates the three random beeper survey timings for the day, as sketched below.
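A hedged sketch of how such a daily repeating alarm can be registered with Android's AlarmManager; BeeperCreator is assumed here to be a BroadcastReceiver, and the surrounding class is illustrative:

```java
import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

// Sketch only: registers a daily repeating alarm that fires BeeperCreator
// (assumed to be a BroadcastReceiver) to generate the day's random beeper times.
public class BeeperScheduler {
    public static void scheduleDailyBeeperCreation(Context context, long triggerAtMillis) {
        AlarmManager alarmManager =
                (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(context, BeeperCreator.class);
        PendingIntent pendingIntent =
                PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);

        // triggerAtMillis: midnight if the participant's sleep time is before midnight,
        // otherwise the sleep time itself (as described above).
        alarmManager.setRepeating(
                AlarmManager.RTC_WAKEUP, triggerAtMillis, AlarmManager.INTERVAL_DAY, pendingIntent);
    }
}
```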

Dynamic generation of survey elements:

Normally, to implement a dynamic list of content, we would use the Android ListView to display the elements on the activity page. ListView is useful for creating a scrolling list of interactive elements and is efficient to implement. However, because ListView recycles elements that move out of view when scrolling, it is difficult to retain user-entered information linked to an element, as that information would be inherited by the "new" element that comes into view.

Slidercreation.png

To avoid this, we used a ScrollView instead. ScrollView is useful for showing a scrolling page of static elements, but it is not usually used to generate a dynamic list. The implementation shown above indicates how we append a layout "fragment" to the current ScrollView to represent a single survey question element. We populate the question fields for each question element and attach it to the view, giving a scrolling view that displays the questions required for that particular survey. All question elements stay active in memory and are set up to collect user input, as indicated in the implementation below.

Slidersaving.png
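A minimal sketch of this approach, assuming a LinearLayout container inside the ScrollView and a hypothetical question row layout (R.layout.question_slider, R.id.question_text); the actual ReFokus layouts and field names may differ:

```java
import android.view.LayoutInflater;
import android.view.View;
import android.widget.LinearLayout;
import android.widget.TextView;

import java.util.List;

// Sketch: append one inflated question "fragment" per survey question to a
// LinearLayout held inside the ScrollView, keeping every view in memory so
// user input is never recycled away (unlike ListView).
public class SurveyBuilder {
    public static void populate(LayoutInflater inflater, LinearLayout container,
                                List<String> questionTexts) {
        for (String questionText : questionTexts) {
            // R.layout.question_slider is a hypothetical row layout containing
            // a question label and a slider for the answer.
            View questionView = inflater.inflate(R.layout.question_slider, container, false);
            TextView label = (TextView) questionView.findViewById(R.id.question_text);
            label.setText(questionText);
            container.addView(questionView); // stays active in memory for later saving
        }
    }
}
```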


Quality of product

Security:

User details, specifically login details that could be personally identifying, are separated from the user's demographic information and result data. Therefore, if a study participant requests to be removed from the program along with all identifying details, they can be removed from the database while their study data is retained anonymously, protecting their privacy.

Furthermore, to ensure that user data is not proliferated, only the creator of a study is able to access the study, modify its details and, more importantly, retrieve the result and demographic data of its participants. Other researchers are unable to access the study; allowing this could breach privacy, as participants may have granted access to their results only to the owning researcher. For administrative purposes, any user with administrator rights can also access all studies and their data, as representatives of the ReFokus system.
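A minimal sketch of this access rule; all types and method names here are illustrative, not the actual ReFokus code:

```java
// Illustrative only: a study's result and demographic data is visible to its
// creator or to an administrator; other researchers are denied.
public class StudyAccessPolicy {
    public interface User { boolean isAdmin(); String getId(); }
    public interface Study { String getCreatorId(); }

    public static boolean canAccessStudy(User user, Study study) {
        return user.isAdmin() || study.getCreatorId().equals(user.getId());
    }
}
```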

Scalability:

To provide scalability and flexibility in researcher-created studies, the creation of a study can include an unlimited number of text- or slider-based questions for post-session and periodic beeper surveys. This gives researchers extensive customisation of their surveys and can accommodate whatever data points a researcher may wish to collect through survey data.

Furthermore, survey questions and session-specific podcasts can be updated while the study is active, even when existing users have partial progress. Completed sessions that receive updates will not be repeated for participants, but any sessions they have not yet completed will be updated to the latest attributes set by the researcher. Any data collected reflects the survey version completed by the participant, so no data is lost from either the old or the new version of the study.
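A rough sketch of how such flexible, versioned survey questions could be modelled; the field names are illustrative assumptions, not the actual ReFokus schema:

```java
// Illustrative model: a study holds any number of text or slider questions, and
// each response records the question version the participant actually answered,
// so data from both old and new versions of a study is preserved.
public class SurveyModels {
    public enum QuestionType { TEXT, SLIDER }

    public static class SurveyQuestion {
        public final long id;
        public final QuestionType type;
        public final String prompt;
        public final int version; // bumped when the researcher edits the question

        public SurveyQuestion(long id, QuestionType type, String prompt, int version) {
            this.id = id;
            this.type = type;
            this.prompt = prompt;
            this.version = version;
        }
    }

    public static class SurveyResponse {
        public final long questionId;
        public final int questionVersion; // the version the participant answered
        public final String answer;

        public SurveyResponse(long questionId, int questionVersion, String answer) {
            this.questionId = questionId;
            this.questionVersion = questionVersion;
            this.answer = answer;
        }
    }
}
```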

Reliability/Availability:

To improve availability of service, users can download the podcast for their next session ahead of time, even if they cannot yet start the session because of the imposed daily limit. In the process, the mobile app also updates the survey questions used for any subsequent post-session or beeper surveys. Sessions can be carried out without internet connectivity, and the result data is stored after the session until connectivity is restored.
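A simplified sketch of this store-and-forward behaviour, assuming a local queue and a standard ConnectivityManager check; the class and method names are illustrative, and the real sync logic lives in the source code handed to the sponsor:

```java
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: session results are queued locally and only uploaded once the device
// reports an active internet connection.
public class ResultUploader {
    private final Queue<String> pendingResults = new ArrayDeque<>();

    public void saveResult(String resultJson) {
        pendingResults.add(resultJson); // persisted locally in the real app
    }

    public void flushIfOnline(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo activeNetwork = cm.getActiveNetworkInfo();
        boolean isConnected = activeNetwork != null && activeNetwork.isConnected();
        if (!isConnected) {
            return; // keep results until connectivity is restored
        }
        while (!pendingResults.isEmpty()) {
            upload(pendingResults.poll());
        }
    }

    private void upload(String resultJson) {
        // Placeholder for the actual HTTP call to the ReFokus server.
    }
}
```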


Intermediate Deliverables:

Stage | Specification | Modules
Project Management | Minutes (Sponsor, Supervisor, Team) | Minutes
Project Management | Metrics | Schedule, Bug, Change Management Metrics
Requirements Gathering | Design Documents | Scenario, Storyboard, Navigation Diagram, Prototype
Requirements Gathering | Market Research | Market Research
Analysis | Use Case | Participant & Researcher Use Cases
Analysis | System Sequence Diagram | System Sequence Diagram
Analysis | Business Process Diagram | Business Process Diagram
Analysis | Logical Diagram | Logical Diagram (web), Logical Diagram (mobile)
Testing | User Testing | User Test
Testing | Test Plans | Test Cases
Handover | Manuals | User Tutorial, Developer Manual, Setup Manual (sensitive information, passed to sponsor directly)
Handover | Codes | LiveLabs server, source code (sensitive information, passed to sponsor directly)

Deployment:

We have deployed our mobile application on the Google Play Store:

Version | Remarks
Alpha Version | Instructions to download
Beta Version | Instructions to download
Production Version | ReFokus is on the Play Store now!

We have deployed our web application on the LiveLabs servers:

Version
ReFokus on Test Server
ReFokus on Production Server

We have also deployed our web application to the LiveLabs web server (Hestia): ReFokus Web Application

Testing:

Number of User Tests: 5
Tester Profile:
Our testers consist of users with research backgrounds, specifically research assistants currently pursuing their PhD in Psychology. With their experience, we were able to gain valuable feedback on the creation of studies.
Our testers also include SIS students. With their knowledge and higher expectations of usability, their feedback helped us improve our app's usability.
For more information about the user tests and the detailed results, please visit the link below:
User Test

Test Cases:
For each iteration, we have functional test cases to test individual functions. Towards the end of each iteration, we run regression tests and go through the entire flow of the application to ensure all parts are working.
For the detailed test cases, please visit the link below:
Test Cases
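As an illustration, a functional test case for the beeper-timing sketch shown earlier could look like the following (JUnit; hypothetical names, not the team's actual test suite):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustrative functional test: each generated beeper time must fall inside its
// own block between the wake-up time and the sleep time.
public class BeeperTimingsTest {
    @Test
    public void timingsFallWithinTheirBlocks() {
        long wakeUp = 8 * 60 * 60 * 1000L;  // 08:00 as millis from midnight
        long sleep = 23 * 60 * 60 * 1000L;  // 23:00
        long blockLength = (sleep - wakeUp) / 3;

        long[] timings = BeeperTimings.generate(wakeUp, sleep);
        for (int i = 0; i < 3; i++) {
            assertTrue(timings[i] >= wakeUp + i * blockLength);
            assertTrue(timings[i] < wakeUp + (i + 1) * blockLength);
        }
    }
}
```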

Vulcan bugmetric.PNG
Vulcan Bug Report 3.png


Our bug metric score shows that iteration 5 had an exceptionally high score. This was due to the aftermath of User Tests 2 and 3, which proved useful in surfacing functional and UI bugs. Even though the bug score was well above the threshold of 10, we managed to fix all the bugs within the scheduled debugging time.
For the detailed bug reports, please visit the link below:
Bug Metrics

Reflection

Vulcan Reflection1.png
Vulcan Reflection2.png
Vulcan Reflection3.png