IS480 Team wiki: 2017T2 Tetris Midterm
Revision as of 17:53, 20 February 2017

Tetris' Logo

Project Progress Summary

Team Tetris is on Iteration 11, and our next milestone is the Midterm presentation on 21 February 2017. We have deployed our project online and completed one round of user testing. You may download the proposal we submitted at Acceptance and our Midterm presentation slide deck for reference.

Project Highlights:

What unexpected events occurred?

1) Spent 3 weeks of the December break learning Drools (rule engine) and Quartz (timing / job scheduling), but ultimately used neither.

Instead of the Drools library, rules could be represented as String expressions, which made customisation of the rules more flexible. In short, String expressions fit the task better than Drools.
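To illustrate why plain String expressions were flexible enough for the task, here is a minimal sketch of a String-based rule evaluator. The class and method names, and the `"<metric> <operator> <threshold>"` rule format, are illustrative assumptions, not the team's actual code.

```java
import java.util.Map;

// Minimal rule evaluator: parses rules written as String expressions,
// e.g. "heartRate > 100", and checks them against a map of sensor readings.
// Hypothetical sketch -- the team's actual parser may differ.
public class StringRuleEngine {

    public static boolean evaluate(String rule, Map<String, Double> readings) {
        String[] parts = rule.trim().split("\\s+");   // ["heartRate", ">", "100"]
        double value = readings.get(parts[0]);
        double threshold = Double.parseDouble(parts[2]);
        switch (parts[1]) {
            case ">":  return value > threshold;
            case "<":  return value < threshold;
            case ">=": return value >= threshold;
            case "<=": return value <= threshold;
            case "==": return value == threshold;
            default:   throw new IllegalArgumentException("Unknown operator: " + parts[1]);
        }
    }

    public static void main(String[] args) {
        Map<String, Double> readings = Map.of("heartRate", 120.0, "temperature", 36.5);
        System.out.println(evaluate("heartRate > 100", readings));   // true
        System.out.println(evaluate("temperature > 38", readings));  // false
    }
}
```

Because the rule is just a String, a caregiver-facing UI can build or edit it at runtime without recompiling or reloading a Drools knowledge base.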

As for Quartz, creating a separate Job (task) for every single alert was inefficient. It was simpler and more efficient to implement a servlet context listener that ran in the background, checking for active alerts and whether escalation conditions had been met. The listener ran as a single job instead of multiple Quartz jobs.
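A sketch of the single-background-job approach is below. For a self-contained example, a `ScheduledExecutorService` stands in for the thread that a `ServletContextListener` would start in `contextInitialized()`; the class and the alert store are illustrative assumptions, not the team's actual code.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One periodic task scans ALL active alerts, instead of one Quartz job per alert.
// In the real web app this loop would be started from a ServletContextListener's
// contextInitialized() and stopped in contextDestroyed().
public class AlertChecker {

    // Stand-in for the persistent alert store.
    static final List<String> activeAlerts = new CopyOnWriteArrayList<>();

    static void checkAlerts() {
        for (String alert : activeAlerts) {
            // Escalation check would go here, e.g. "no acknowledgement after N minutes".
            System.out.println("Checking alert: " + alert);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        activeAlerts.add("Medication missed - Group A");
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(AlertChecker::checkAlerts, 0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(250);   // let the checker run a few times
        scheduler.shutdown();
    }
}
```

The key design point: the number of scheduled tasks stays constant at one, regardless of how many alerts are active.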

2) The water sensor wasn't reliable, so the sponsor asked us to focus on customisation of the rule engine instead.
3) An addition to scope (Generic Rule Engine) could not be completed within the project timeline, and the varying JSON formats received could have affected the reliability and functionality of existing features.
4) A team member was unavailable for two iterations, which affected the quality of some functions.
5) UT 1 was delayed (but completed) due to a large number of bugs; UT 2 was pushed back to after Midterm, while the UT 3 date stays.

Project Management

Project Status:

We highlight the status of our project and list the remaining todos (tasks needed to bring each function to 100% implementation).
You may view the function details here.

Core Functions

{| class="wikitable"
! Function !! Status !! Confidence (0-1) !! Comments
|-
| Account || 90% implemented. Deployed, UT done, client approved. || 0.8 || Assigned to Wei Liang (lead). Done by Samantha & Wei Liang. Todo: validation & profile photo.
|-
| Caregiver Group || 70% implemented. Deployed, UT done, client approved. || 0.7 || Assigned to Wei Liang (lead). Done by Samantha & Wei Liang. Todo: validation & detailed testing with rules. Note: changes in the use case affected the module scope in Iteration 8. We had 3 meetings with the client to get her further input, in addition to consultation with her; the supervisor was also present.
|-
| SMS || 90% implemented. Deployed, with US$40 in the Twilio account. || 1.0 || Assigned to Avinash. Todo: mobile number confirmation.
|-
| Medication Data Retrieval & Rule Engine || 100% implemented. Deployed; rules in rule engine approved by client. || 0.75 || Assigned to Irshad (lead), Avinash, Elliotz. Todo: scheduling a recurring schedule using input of time only; more detailed testing & debugging.
|-
| Presence Data Retrieval & Rule Engine || 70% implemented. Deployed; rules in rule engine approved by client. || 0.65 || Assigned to Irshad (lead), Avinash, Elliotz. Todo: more detailed testing & debugging.
|-
| Movement Data Retrieval & Rule Engine || 100% implemented. Deployed; rules in rule engine approved by client. || 0.6 || Assigned to Irshad (lead), Avinash, Elliotz. Todo: more detailed testing & debugging.
|-
| Hygiene Data Retrieval & Rule Engine (removed) || Attempted. Function removal approved by client, in the presence of the supervisor. || N/A || Sensors had an extreme range of readings. The team will concentrate on the other events instead.
|-
| Generic Rule Engine (added, then subsequently removed) || Attempted. Function removal approved by client, in the presence of the supervisor. || N/A || The JSON data structure received might differ, affecting the information extracted in the Retrieval Module.
|-
| '''Average''' || 87% || 0.75 ||
|}
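Several functions above list "scheduling a recurring schedule using input of time only" as a todo. A minimal sketch of how that could work: given only a time of day (e.g. a daily medication reminder at 09:00), the next trigger is today at that time if it is still ahead, otherwise tomorrow. Class and method names are hypothetical, not the team's code.

```java
import java.time.LocalDateTime;
import java.time.LocalTime;

// Compute the next LocalDateTime at which a daily recurring alert should fire,
// given only the time of day it recurs at.
public class RecurringSchedule {

    public static LocalDateTime nextOccurrence(LocalTime timeOfDay, LocalDateTime now) {
        LocalDateTime candidate = now.toLocalDate().atTime(timeOfDay);
        // Today's slot has passed: fire tomorrow at the same time.
        return candidate.isAfter(now) ? candidate : candidate.plusDays(1);
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2017, 2, 20, 10, 0);
        System.out.println(nextOccurrence(LocalTime.of(9, 0), now));   // 2017-02-21T09:00
        System.out.println(nextOccurrence(LocalTime.of(14, 0), now));  // 2017-02-20T14:00
    }
}
```

The background alert checker could then simply compare `nextOccurrence(...)` against the current time on each pass.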


Secondary Functions

{| class="wikitable"
! Function !! Status !! Confidence (0-1) !! Comments
|-
| Escalation Policy || 90% implemented. Deployed, client approved. || 0.8 || Assigned to Avinash (lead). Todo: implement a configurable time before a case is escalated between Tiers 1 & 2 and Tiers 2 & 3, and resent to all caregivers in the group.
|-
| Multiple Locations (Sensors) || 0% implemented. || 0.6 || Assigned to Irshad (lead), Avinash, Elliotz. Planned for iterations 11 & 12.
|-
| Multiple Locations (Caregivers & Caregiver Groups) || 80% implemented. Deployed; client aware. || 0.8 || Assigned to Irshad (lead), Avinash, Elliotz. Planned for iterations 11 & 12.
|-
| Notification Logs || 100% implemented. Deployed. || 0.8 || Assigned to Avinash (lead), Samantha. Todo: scheduling a recurring schedule using input of time only; more detailed testing & debugging.
|-
| '''Average''' || 67.5% || 0.75 ||
|}
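The Escalation Policy todo above calls for a configurable time before a case moves between tiers. One possible sketch, where tier delays, class names, and the tier numbering are illustrative assumptions rather than the team's actual design:

```java
import java.util.Map;

// Configurable escalation: per-tier wait time (in minutes) before an
// unacknowledged case moves to the next caregiver tier.
public class EscalationPolicy {

    private final Map<Integer, Integer> delayMinutes; // tier -> wait before escalating

    public EscalationPolicy(Map<Integer, Integer> delayMinutes) {
        this.delayMinutes = delayMinutes;
    }

    /** Returns the tier that should currently be handling the case. */
    public int tierFor(long minutesSinceAlert) {
        int tier = 1;
        long elapsed = minutesSinceAlert;
        while (delayMinutes.containsKey(tier) && elapsed >= delayMinutes.get(tier)) {
            elapsed -= delayMinutes.get(tier);
            tier++;
        }
        return tier;
    }

    public static void main(String[] args) {
        // Wait 10 min before Tier 1 -> 2, then another 15 min before Tier 2 -> 3.
        EscalationPolicy policy = new EscalationPolicy(Map.of(1, 10, 2, 15));
        System.out.println(policy.tierFor(5));   // 1
        System.out.println(policy.tierFor(12));  // 2
        System.out.println(policy.tierFor(30));  // 3
    }
}
```

Storing the delays in a map (or a database table) rather than hard-coding them is what makes the escalation time configurable per deployment.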

Optional Functions

{| class="wikitable"
! Function !! Status !! Confidence (0-1) !! Comments
|-
| Usability || 0% implemented. || 0.7 || Do not proceed.
|-
| Analytics || 0% implemented. || 0.5 || Do not proceed.
|-
| '''Average''' || 0% || 0.6 ||
|}

Project Schedule (Plan Vs Actual):

Briefly, the team believes the project can be fully completed and deployed, with all 3 rounds of user testing done, by Finals.

From Acceptance until Midterm, the project schedule set the sequence of features to be completed, and we made sure the main functionality of each feature was working. We had weekly meetings with our supervisor to update progress, and met with our sponsor every two weeks to make sure what we developed was what was intended. We also met the sponsor offline to ensure the work done was in line with expectations. In weeks when they were not available, we kept communication going with email updates on the team's progress.

The scope expanded considerably in iteration 3 with the Generic Rule Engine feature, which the team ultimately deemed impossible to complete within the project timeline; we dropped the feature in iteration 10, after continuing to look for ways to make it feasible. You may read the details of all our changes here.

{| class="wikitable"
! Iteration !! Task !! Planned !! Actual !! Comments
|-
| 1 || Planning || 2 Oct 16 || 15 Oct 16 || Zhen Hui wrote the 10-page proposal and submitted it to Prof Hwee Pink, Andus, and Prof Ben for initial vetting. The proposal included the timeline and scope, which the group discussed with the sponsor. ZH condensed it into 2 pages, submitted the proposal, and did the wiki. Avinash not available.
|-
| || Diagramming || 2 Oct 16 || 15 Oct 16 || Zhen Hui & Elliotz did the architecture diagram & use case. Irshad did the logical diagram for the SQL database. Wei Liang & Samantha did the prototype, scenario, and the group's logo. Avinash not available.
|-
| || Data Retrieval || 2 Oct 16 || 15 Oct 16 || Elliotz and Irshad configured the Raspberry Pi & gateway for Acceptance; this was the gateway that pushed data into our system at the time. Elliotz & Samantha also worked on storing data in the client's database, and set up MongoDB and AWS. Avinash not available.
|-
| 2 || Medication (frontend) || 16 Oct 16 || 29 Oct 16 || Samantha & Wei Liang did the UI. Samantha wrote test cases for the medbox.
|-
| || Medication (backend) || 16 Oct 16 || 29 Oct 16 || Zhen Hui, Elliotz, Irshad & Avinash defined the logic. Zhen Hui worked on some of the code with Irshad.
|-
| || SMS || 2 Oct 16 || 15 Oct 16 || Elliotz and Avinash defined the logic to connect the Twilio account. Elliotz set up the account for the group and Zhen Hui topped up USD 20 for the SMS feature. Avinash implemented the send-message feature.
|-
| || Acceptance Prep || 2 Oct 16 || 15 Oct 16 || Zhen Hui did the wiki & slides. Elliotz was in charge of the architecture diagram and use cases, Irshad of the rules, Avinash of the demo, Wei Liang of the scenario, and Samantha of the QA and slides.
|}

Project Metrics:

You may view the Schedule metrics here.

Project Risks:

{| class="wikitable"
! Risk !! Probability !! Impact !! Mitigation
|-
| Unavailability of Team Member || High || High (now Medium) || Meeting at the member's home; team & peer evaluation (3 times); open-table discussion; feedback.
|-
| Technical Risk || High || High (now Medium) || Sharing & pair programming.
|-
| Scope Creep || Low || High (now Medium) || We evaluated that the increase in scope was doable and planned 2 iterations (4 weeks) for the new scope (Generic Rule Engine), while dropping one of the 4 events. We attempted the Generic Rule Engine, but the result was not what the sponsor wanted, and the variable data input could have affected our existing functions. We eventually dropped this scope and worked to refine our existing 3 events to make them customisable.
|}

Technical Complexity:

Describe and list the technical complexity of your project in order of highest complexity first. For example, deploying on iPhone using Objective-C, customizing Drupal with own database, quick search for shortest flight path, database structure, etc.

Quality of product

Provide more details about the quality of your work. For example, you designed a flexible configurable system using XML.config files, uses Strategy Design Pattern to allow plugging in different strategy, implement a regular expression parser to map a flexible formula editor, etc.

Intermediate Deliverables:

There should be some evidence of work in progress.

{| class="wikitable"
! Stage !! Specification !! Modules
|-
| Project Management || Minutes || Sponsor weeks -10, -5, 3, 7; Supervisor weeks -2, 3, 5, 7
|-
| || Metrics || Bug metrics
|-
| Requirements || Story cards || CRUD Customer, Trend Analytic
|-
| Analysis || Use case || overall
|-
| || System Sequence Diagram || client, server
|-
| || Business Process Diagram || Here
|-
| || Screen Shots || CRUD Customer, Trend Analysis
|-
| Design || ER Diagram || 1, 2, 3
|-
| || Class Diagram || 1, 2, 3
|-
| Testing || User test plan || instructions
|}

Not all parts of the deliverables are necessary, but the evidence should convincingly show progress. Try to include design deliverables that show the quality of your project.

Deployment:

In an iterative approach, a ready-to-use system should be available (deployed) for the client, with instructions for accessing the system described here (user name). If necessary, provide a deployment diagram link.

Testing:

Describe the testing done on your system. For example, the number of user testing, tester profile, test cases, survey results, issue tracker, bug reports, etc.

Reflection

In this section, describe what the team has learned. Be brief. Sometimes the client writes a report giving feedback on the system; this sponsor report can be included or linked from here.

Team Reflection:

Any training and lessons learned? What are the takeaways so far? It would be very convincing if the knowledge is shared in the wiki knowledge base and linked here.

Benjamin Gan Reflection:

You may include an individual reflection if that makes sense at this point. The team was uncooperative and did not follow my instructions.