
IS480 Team wiki: 2015T1 Vulcan Midterm



Project Progress Summary

Highlights of Project:

  • LiveLabs relocated its servers on 5 October 2015, two days before the Midterm Presentation
  • LiveLabs servers were overloaded (data growing at 1GB per minute), which appeared to have been caused by our application
  • The API level of the phone borrowed from the school was too low for our development

Project Management


Project Status:

MidTerm Progress.PNG

Vulcan Scope midterm.png


Please refer to Planned vs Actual Tasks Metrics for the detailed breakdown of our individual tasks.

Project Schedule (Plan vs Actual):


Planned Schedule

Vulcan Schedule timeline acceptance.png

Actual Schedule

Vulcan Schedule timeline midterm.png

Project Metrics:

Schedule Metric Formula: (Planned Days / Actual Days) × 100%
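
For example, iteration 2 was planned to take 18 days but took 32, giving a score of (18 / 32) × 100% = 56.25%.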

Vulcan Schedule Metric Score.PNG
Iteration | Planned Duration (Days) | Actual Duration (Days) | Schedule Metric Score | Action | Status
2 | 18 | 32 | 56.25% | Team is behind schedule due to the complexity of the tasks planned (Android app and smartwatch). Follow-up action: rescheduled future iterations and deducted days from the buffer. | Completed
4 | 18 | 24 | 75% | Team is behind schedule due to LiveLabs server permission issues. Follow-up action: rescheduled future iterations and deducted days from the buffer. | Completed

Project Risks:

These are the top risks that we identified and that materialised before the Midterm. We followed the mitigation steps listed in the table below and successfully managed these risks.

Vulcan Risk midterm.PNG

Technical Complexity:

Beeper Survey Creation:

AlarmManager.png

Beeper surveys are scheduled each day by an AlarmManager using the set method. The setWindow method was used initially, since it schedules an alarm within a given window of time. However, it did not deliver beeper surveys at truly random times, because the alarms tended to go off close to the start of the window. To fix this, the randomisation of beeper timings is done beforehand in Java, since no AlarmManager method could provide what the sponsor wanted: three random beeper surveys each day, each at a different time.
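
A minimal sketch of this exact-time scheduling on Android, assuming a hypothetical BeeperReceiver broadcast receiver; names and parameters are illustrative, not the team's actual code:

    import android.app.AlarmManager;
    import android.app.PendingIntent;
    import android.content.Context;
    import android.content.Intent;

    public class BeeperScheduler {
        // Schedules one beeper survey at an exact, pre-randomised trigger time.
        // BeeperReceiver is a hypothetical BroadcastReceiver that shows the survey.
        public static void scheduleBeeper(Context context, int requestCode, long triggerAtMillis) {
            AlarmManager alarmManager =
                    (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
            Intent intent = new Intent(context, BeeperReceiver.class);
            PendingIntent pendingIntent = PendingIntent.getBroadcast(
                    context, requestCode, intent, PendingIntent.FLAG_UPDATE_CURRENT);
            // set() fires at the exact time; setWindow() was unsuitable because it can
            // fire anywhere inside its window and tended to fire near the window start.
            alarmManager.set(AlarmManager.RTC_WAKEUP, triggerAtMillis, pendingIntent);
        }
    }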

RandomTimingBeeper.png

Three blocks of equal duration are created from the wake-up time and sleep time that the participant has entered. Three random numbers are then generated, each capped at the length of one block. Adding these numbers to the start of blocks 1, 2 and 3 respectively produces the three random times for the beeper surveys, as sketched below.
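
A sketch of this block-based randomisation; BeeperCreator is the class named below, but the method name and signature are our assumptions:

    import java.util.Random;

    public class BeeperCreator {
        // Splits the waking period into three equal blocks and returns one random
        // trigger time (in millis) inside each block: block start + random offset.
        public static long[] createBeeperTimes(long wakeMillis, long sleepMillis) {
            long blockLength = (sleepMillis - wakeMillis) / 3;
            Random random = new Random();
            long[] times = new long[3];
            for (int i = 0; i < 3; i++) {
                long offset = (long) (random.nextDouble() * blockLength);
                times[i] = wakeMillis + i * blockLength + offset;
            }
            return times;
        }
    }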

BeeperStart.png

Android's AlarmManager does not allow a repeating alarm to fire at a different time each day, so a repeating alarm could not deliver the surveys directly. Our solution is a repeating alarm that goes off at midnight if the participant's sleep time is before midnight, or at the sleep time if it is after midnight. This repeating alarm calls the BeeperCreator class, which generates the three random beeper survey timings for the day.
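
A hedged sketch of registering that daily repeating alarm, assuming a hypothetical BeeperCreatorReceiver that invokes BeeperCreator:

    import java.util.Calendar;

    import android.app.AlarmManager;
    import android.app.PendingIntent;
    import android.content.Context;
    import android.content.Intent;

    public class DailyBeeperSetup {
        // Registers the once-a-day "creator" alarm at the next midnight; the case
        // where the sleep time falls after midnight would use that time instead.
        public static void registerDailyAlarm(Context context) {
            Calendar trigger = Calendar.getInstance();
            trigger.add(Calendar.DAY_OF_MONTH, 1); // move to tomorrow...
            trigger.set(Calendar.HOUR_OF_DAY, 0);  // ...at 00:00:00
            trigger.set(Calendar.MINUTE, 0);
            trigger.set(Calendar.SECOND, 0);

            Intent intent = new Intent(context, BeeperCreatorReceiver.class);
            PendingIntent pi = PendingIntent.getBroadcast(
                    context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);

            AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
            // Repeats every 24 hours; the receiver computes that day's three random
            // beeper times and schedules each one as an exact one-shot alarm.
            am.setRepeating(AlarmManager.RTC_WAKEUP, trigger.getTimeInMillis(),
                    AlarmManager.INTERVAL_DAY, pi);
        }
    }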

Dynamic generation of survey elements:

Quality of Product

Security:

User details, specifically login details that could be personally identifying, are stored separately from demographic information and result data. Therefore, if a study participant requests removal from the program along with all identifying details, those details can be deleted from the database while their study data is retained under an anonymous participant record, protecting their privacy.

Furthermore, to ensure that user data is not proliferated, only the creator of a study can access it, modify its details and, more importantly, retrieve the result and demographic data of its participants. Researchers cannot access one another's studies; allowing this could breach privacy, since participants may have granted access to their results only to the owning researcher. For administrative purposes, users with administrator rights can also access all studies and their data, as representatives of the Refokus system.
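
As an illustration of this separation, a hypothetical JDBC sketch; the table and column names (user_credentials, participant_id) are assumptions, not the actual Refokus schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ParticipantRemoval {
        // Deletes a participant's identifying login record while leaving their
        // result rows, keyed only by an anonymous numeric id, untouched.
        public static void removeIdentifyingDetails(Connection conn, int participantId)
                throws SQLException {
            try (PreparedStatement delete = conn.prepareStatement(
                    "DELETE FROM user_credentials WHERE participant_id = ?")) {
                delete.setInt(1, participantId);
                delete.executeUpdate();
            }
            // Rows in a separate results table reference only participant_id, so
            // the study data survives as anonymous records.
        }
    }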

Scalability:

To provide scalability and flexibility in researcher-created studies, the creation of a study can include an unlimited number of text- or slider-based questions for post-session and periodic beeper surveys. This lets researchers customise their surveys extensively and accommodates virtually any data point a researcher may wish to collect through survey data.

Furthermore, survey questions and session-specific podcasts can be updated while a study is active and has existing users with partial progress. Sessions a participant has already completed are not repeated after an update, but any sessions not yet completed are updated to the latest attributes set by the researcher. Collected data is recorded against the survey version the participant actually completed, so no data is lost from either the old or the new version of the study, as the sketch below illustrates.
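
A minimal sketch of this versioning idea; the class and field names are illustrative, not the actual data model:

    public class SurveyAnswer {
        int questionId;
        int questionVersion; // the question version shown when the participant answered
        String response;     // results are reported against this exact version
    }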

Reliability/Availability:

To improve availability of service, users can download the podcast for their next session ahead of time, even if the imposed daily limit prevents them from starting that session yet. During this download, the mobile app also refreshes the survey questions used for any subsequent post-session or beeper surveys. Sessions can then be carried out without internet connectivity, with the result data stored on the device until connectivity is restored.
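
A hedged sketch of this offline-first behaviour on Android; the storage and upload helpers are hypothetical placeholders:

    import android.content.Context;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;

    public class ResultUploader {
        // Sends a finished session's results immediately when online; otherwise
        // the results wait in local storage until connectivity returns.
        public static void submit(Context context, String resultJson) {
            ConnectivityManager cm = (ConnectivityManager)
                    context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo network = cm.getActiveNetworkInfo();
            if (network != null && network.isConnected()) {
                uploadPendingResults(resultJson);  // send now, plus anything queued
            } else {
                saveResultLocally(resultJson);     // queue until back online
            }
        }

        private static void uploadPendingResults(String resultJson) { /* hypothetical upload */ }
        private static void saveResultLocally(String resultJson) { /* hypothetical local queue */ }
    }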


Intermediate Deliverables:

Stage | Specification | Modules
Project Management | Minutes (Sponsor, Supervisor, Team) | Minutes
Project Management | Metrics | Schedule, Bug, Change Management Metrics
Requirements Gathering | Design Documents | Scenario, Storyboard, Navigation Diagram, Prototype
Requirements Gathering | Market Research | Market Research
Analysis | Use Case | Participant & Researcher Use Cases
Analysis | Business Process Diagram | Business Process Diagram
Analysis | Logical Diagram | Logical Diagram
Testing | User Testing | User Test
Testing | Test Plans | Test Cases

Deployment:

We have launched the Alpha version of our mobile application on the Google Play Store. Instructions for Alpha testers to participate can be found here: Instructions to download

We have also deployed our web application to the LiveLabs web server (Hestia): ReFokus Web Application

Testing:

Number of User Tests: 3
Tester Profile:
Our testers are users with research backgrounds, specifically research assistants currently pursuing their PhD in Psychology. Their experience gave us valuable feedback on the creation of studies.
For more information about the user tests and the detailed results, please visit the link below:
User Test

Test Cases:
For each iteration, we have functional test cases that test individual functions. Towards the end of each iteration, we perform regression testing and go through the entire flow of the project to ensure all parts are working.
For the detailed test cases, please visit the link below:
Test Cases

Vulcan bugmetric.PNG
Vulcan Bug Report 3.png


Our bug metric scores show that iteration 5 was exceptionally high. This was the aftermath of User Tests 2 and 3, which proved useful in surfacing functional and UI bugs. Even though the bug score was well above the threshold level of 10, we managed to solve all the bugs within the scheduled debugging time.
For the detailed bug reports, please visit the link below:
Bug Metrics

Reflection


Team Reflection:


Benjamin Gan Reflection:
