
IS480 Team wiki: 2017T2 Tetris Midterm





Project Progress Summary

Team Tetris is on Iteration 11, and our next milestone is Midterms on 21 February 2017. We have deployed our project online and completed one round of user testing. You may download our proposal submitted for Acceptance and our Midterm presentation slide deck for your reference.

Project Highlights:

What unexpected events occurred?

1) We spent 3 weeks of the December break learning Drools (for the rule engine) and Quartz (for timing/scheduling jobs), but ended up not using either.

For the rule engine, we found that rules could be represented as plain String expressions, and String expressions were more flexible when it came to customising the rules. In short, String expressions fit the task better than the Drools library (see the first sketch after this list).

As for Quartz, creating a Job (task) for every single alert was inefficient. It was simpler and more efficient to implement a servlet context listener that ran in the background, checking for active alerts and whether escalation conditions had been met. The listener ran as a single job instead of multiple Quartz jobs (see the listener sketch after this list).

2) The water sensor wasn't reliable, so the sponsor asked us to focus on the customisation of the rule engine instead.
3) A team member wasn't available for two iterations, which affected the quality of some functions.
4) UT 1 was delayed by a large number of bugs but has been completed; UT 2 was pushed back to after Midterms, and the UT 3 date is unchanged.
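
For context on highlight 1, this page does not spell out how the String-expression rules are represented or evaluated. Purely as a rough sketch of the general idea (the rule text, variable name and threshold below are made up, not taken from the project), a condition stored as a String can be evaluated at runtime with the script engine that ships with Java 8:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class StringRuleDemo {
    public static void main(String[] args) throws ScriptException {
        // Hypothetical rule: alert if no movement has been detected for more than 30 minutes.
        String rule = "minutesSinceLastMovement > 30";

        // Nashorn ships with Java 8 and can evaluate simple boolean expressions.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        engine.put("minutesSinceLastMovement", 45);

        boolean triggered = (Boolean) engine.eval(rule);
        System.out.println("Alert triggered: " + triggered); // Alert triggered: true
    }
}
```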
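
Highlight 1 also mentions replacing per-alert Quartz jobs with a single servlet context listener. Below is a minimal sketch of that pattern, assuming a standard Servlet 3.x container; the class name, polling interval and helper method are illustrative, not the team's actual code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Starts one background task when the web app is deployed and stops it on shutdown.
@WebListener
public class AlertEscalationListener implements ServletContextListener {

    private ScheduledExecutorService scheduler;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        // One periodic job scans all active alerts, instead of one Quartz job per alert.
        scheduler.scheduleAtFixedRate(this::checkActiveAlerts, 0, 1, TimeUnit.MINUTES);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        scheduler.shutdownNow();
    }

    private void checkActiveAlerts() {
        // Placeholder: load active alerts and test their escalation conditions here.
    }
}
```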

Project Management

Project Status:

We highlight the status of each function and state its todo items (tasks we still need to do to bring the function to 100% implementation).
You may view the function details here.

Core Functions

Function | Status | Confidence (0-1) | Comments
Account | 90% implemented. Deployed, UT done, client approved. | 0.8 | Assigned to Wei Liang (lead); done by Samantha & Wei Liang. Todo: Validation & profile photo.
Caregiver Group | 70% implemented. Deployed, UT done, client approved. | 0.7 | Assigned to Wei Liang (lead); done by Samantha & Wei Liang. Todo: Validation & detailed testing with rules. Note: Changes in the use case affected the module scope in Iteration 8. We had 3 meetings with the client to get her further input, in addition to consultation with her; our supervisor was also present.
SMS | 90% implemented. Deployed, with US$40 in the Twilio account. | 1.0 | Assigned to Avinash. Todo: Mobile number confirmation. (See the SMS sketch after this table.)
Medication Data Retrieval & Rule Engine | 100% implemented. Deployed, rules in rule engine approved by client. | 0.75 | Assigned to Irshad (lead), Avinash, Elliotz. Todo: Scheduling a recurring schedule using a time-only input; more detailed testing & debugging.
Presence Data Retrieval & Rule Engine | 70% implemented. Deployed, rules in rule engine approved by client. | 0.65 | Assigned to Irshad (lead), Avinash, Elliotz. Todo: More detailed testing & debugging.
Movement Data Retrieval & Rule Engine | 100% implemented. Deployed, rules in rule engine approved by client. | 0.6 | Assigned to Irshad (lead), Avinash, Elliotz. Todo: More detailed testing & debugging.
Hygiene Data Retrieval & Rule Engine (Removed) | Attempted. Function removal approved by client, in presence of supervisor. | N/A | Sensors had an extreme range of readings; we will concentrate on other events instead.
Generic Rule Engine (Added, then subsequently Removed) | Attempted. Function removal approved by client, in presence of supervisor. | N/A | The JSON data structure received might differ, affecting the information extracted in the Retrieval Module.
Average | 87% | 0.75 |
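
As referenced in the SMS row above, the module sends alerts through Twilio. Purely as an illustration of the Twilio Java helper library (the credentials, phone numbers and message text below are placeholders, not project values), sending one SMS looks roughly like this:

```java
import com.twilio.Twilio;
import com.twilio.rest.api.v2010.account.Message;
import com.twilio.type.PhoneNumber;

public class SmsAlertDemo {
    public static void main(String[] args) {
        // Placeholder credentials; the real values live in the deployed configuration.
        Twilio.init("ACCOUNT_SID", "AUTH_TOKEN");

        Message message = Message.creator(
                new PhoneNumber("+6591234567"),   // caregiver's mobile number (example)
                new PhoneNumber("+15005550006"),  // Twilio sender number (example)
                "Alert: no movement detected in the bedroom for 45 minutes.")
            .create();

        System.out.println("Sent SMS, SID: " + message.getSid());
    }
}
```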


Secondary Functions

Function | Status | Confidence (0-1) | Comments
Escalation Policy | 90% implemented. Deployed, client approved. | 0.8 | Assigned to Avinash (lead). Todo: Implement a configurable time before a case is escalated from Tier 1 to 2 and from Tier 2 to 3, and resend to all caregivers in the group. (See the escalation sketch after this table.)
Multiple Locations (Sensors) | 0% implemented. | 0.6 | Assigned to Irshad (lead), Avinash, Elliotz. Planned for Iterations 11 & 12.
Multiple Locations (Caregivers & Caregiver Groups) | 80% implemented. Deployed. Client aware. | 0.8 | Assigned to Irshad (lead), Avinash, Elliotz. Remaining work planned for Iterations 11 & 12.
Notification Logs | 100% implemented. Deployed. | 0.8 | Assigned to Avinash (lead), Samantha. Todo: Scheduling a recurring schedule using a time-only input; more detailed testing & debugging.
Average | 67.5% | 0.75 |
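
As noted in the Escalation Policy row above, the remaining work is a configurable delay before a case moves from one caregiver tier to the next. Below is a minimal sketch of that timing logic; the tier numbering, delay values and method names are assumptions for illustration, not the team's implementation:

```java
import java.time.Duration;
import java.time.Instant;

public class EscalationPolicyDemo {

    // Configurable waits before escalating Tier 1 -> 2 and Tier 2 -> 3 (placeholder values).
    private static final Duration TIER1_TO_TIER2 = Duration.ofMinutes(10);
    private static final Duration TIER2_TO_TIER3 = Duration.ofMinutes(20);

    /** Returns the caregiver tier to notify for an alert raised at the given time. */
    static int tierFor(Instant alertRaisedAt, Instant now) {
        Duration elapsed = Duration.between(alertRaisedAt, now);
        if (elapsed.compareTo(TIER2_TO_TIER3) >= 0) return 3;
        if (elapsed.compareTo(TIER1_TO_TIER2) >= 0) return 2;
        return 1;
    }

    public static void main(String[] args) {
        Instant raised = Instant.now().minus(Duration.ofMinutes(15));
        System.out.println("Notify caregiver tier: " + tierFor(raised, Instant.now())); // tier 2
    }
}
```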

Optional Functions

Function | Status | Confidence (0-1) | Comments
Usability | 0% implemented. | 0.7 | Do not proceed.
Analytics | 0% implemented. | 0.5 | Do not proceed.
Average | 0% | 0.6 |


Project Schedule (Plan Vs Actual):

Briefly, the team believes the project can be completed, fully working and deployed, with all 3 rounds of user testing done by Finals.

From Acceptance to Midterm, the project schedule set the sequence of features to be completed, and we made sure the main functionality of each feature was working. We had weekly meetings with our supervisor to update progress and met our sponsor every two weeks to make sure what we developed was what was intended. We also met the sponsor offline to ensure the work done was in line with expectations. In weeks when they were not available, we kept communication going with email updates on the team's progress.

The scope expanded considerably in Iteration 3 with the Generic Rule Engine feature, which the team ultimately deemed impossible to complete within the constraints of the project timeline; we dropped the feature in Iteration 10 after continuing to look for ways to make it feasible. You may read the details of all our changes here.


Project Metrics:

You may view the metrics here.

Project Risks:


Technical Complexity:


Quality of product


Intermediate Deliverables:


Deployment:


Testing:


Reflection


Team Reflection:

