IS480 Team wiki:2017T2 Zenith FinalWiki





Project Progress Summary

Finals Slides

Zenith Finals Slides

Deployed site link


Staging site: http://medsense-staging-env.ap-southeast-1.elasticbeanstalk.com/

Project Highlights

Our project schedule is divided into 13 iterations.

  • We are currently in our 12th iteration (26 Mar - 8 Apr 2018).
  • As of 4 Apr 2018, we have completed 100% of our development.
  • 2 User Acceptance Tests were conducted before the Final Presentation. The results are shown here.
  • Achieved and exceeded Finals X-factor.

Unexpected events:

  • List of requirement changes after Midterms can be viewed here.

Project Management

Iteration Progress: 13 of 13
Features Completion: 100% (46 out of 46 features)
Confidence Level: 100%

Project Status:

A breakdown of tasks is shown in our project scope.

Zenith final scope.JPG

Project Schedule (Plan Vs Actual):

Milestones Overview:

Planned (Acceptance) Actual (Midterms)

Zenith midterm actual timeline.png

Zenith final actual timeline.png

Project Schedule:

Planned Schedule (Midterm)

Zenith final expected schedule.png

Changes in planned schedule (Midterm)

Zenith final changed schedule.png

Actual Schedule (Final)

Zenith final actual schedule.png

Project Metrics:

Task metric

Zenith task metric.png
  • TM <= 50: 1. Inform supervisor within 24 hours. 2. Re-estimate tasks for future iterations. 3. Consider dropping tasks.
  • 50 < TM <= 75: 1. Re-estimate tasks for future iterations. 2. Deduct the number of days behind from buffer days. 3. If there are no more buffer days, decide which functionalities to drop.
  • 75 < TM <= 125: Our estimates are fairly accurate, and we are roughly on track. Add/deduct the number of days ahead/behind from buffer days.
  • 125 < TM <= 150: 1. Re-estimate tasks for future iterations. 2. Add the number of days ahead to buffer days.
  • TM > 150: 1. Inform supervisor within 24 hours. 2. Re-estimate tasks for future iterations.
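The score bands above can be sketched as a small decision function. This is an illustrative sketch only: the wiki does not state the formula behind the TM score itself, so the function takes the score as given and returns the corresponding action band.

```python
def task_metric_action(tm: float) -> str:
    """Map a task metric (TM) score to the team's follow-up action band."""
    if tm <= 50:
        return "Inform supervisor within 24 hours; re-estimate; consider dropping tasks"
    if tm <= 75:
        return "Re-estimate; deduct days behind from buffer; drop functionalities if no buffer left"
    if tm <= 125:
        return "On track; adjust buffer days by days ahead/behind"
    if tm <= 150:
        return "Re-estimate; add days ahead to buffer"
    return "Inform supervisor within 24 hours; re-estimate"
```

Note that both extremes (TM <= 50 and TM > 150) escalate to the supervisor, since a large deviation in either direction signals an estimation problem.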

Bug metric

Zenith bug count.png

Zenith bug score.png

Note: There were no coding tasks for iteration 1.

Severity definitions:

  • Low Impact: User interface display errors, such as misaligned elements or colours that do not follow the theme. These do not affect the functionality of the system.
  • High Impact: The system is functional, but some non-critical functionalities are not working.
  • Critical Impact: The system is not functional. These bugs have to be fixed before proceeding.

Bug metric (BM) actions:

  • BM <= 5: The system does not need immediate fixing; bugs can be fixed during buffer time or coding sessions.
  • 5 < BM < 10: Coders use the planned debugging time in the iteration to solve the bugs.
  • BM >= 10: The team stops all current development and resolves the bugs immediately.
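The two tables above can be combined into one scoring sketch. The per-severity point values used here (1, 5, and 10 points for low, high, and critical impact) are an assumption for illustration; the page shows the BM thresholds but not how individual bugs are weighted.

```python
# Assumed severity weights -- not stated on this page.
SEVERITY_POINTS = {"low": 1, "high": 5, "critical": 10}

def bug_metric(bug_severities):
    """Sum severity points for the bugs logged in one iteration."""
    return sum(SEVERITY_POINTS[s] for s in bug_severities)

def bug_action(bm: int) -> str:
    """Map a bug metric (BM) score to the team's response."""
    if bm <= 5:
        return "Fix during buffer time or coding sessions"
    if bm < 10:
        return "Use planned debugging time this iteration"
    return "Stop all development and resolve immediately"
```

For example, an iteration with two low-impact bugs and one high-impact bug would score 7 points, putting it in the "planned debugging time" band.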

Project Risks:

Zenith risk metric.png
  • Risk 1 (Project Management): The schedule is planned based on macro functionalities, so small coding tasks may have been left out, or time may have been underestimated due to lack of experience. Likelihood: Medium. Impact Level: High. Threat Level: A. Mitigation: The Project Manager will review the project schedule and the time needed for each task regularly, and make changes when necessary.
  • Risk 2 (Client Management): The client may request changes to existing functionalities or request new functionalities; these increases in scope may delay project completion. Likelihood: Medium. Impact Level: Medium. Threat Level: B. Mitigation: The team will consider the expertise and time required, and the Project Manager will manage the client's expectations.
  • Risk 3 (Project Management): Team members feel overwhelmed by the workload and increasingly burned out. Likelihood: Medium. Impact Level: Low. Threat Level: C. Mitigation: Team members will limit increases in scope and prioritize functions in case any need to be dropped.

Technical Complexity:

Our technical complexity comprises:

Recommendation Models

Student Recommendation Model

We have implemented a recommendation model for students to address their individual weaknesses. This model is guided by the following principles:

  1. Recommend beginner/advanced cases based on year of study: Medical students in their third year and below will be recommended beginner cases, while medical students who are in the fourth and fifth year of study will be recommended advanced cases. This is in accordance with their curriculum in school.

  2. Recommend most popular cases for new users: A new user might feel overwhelmed with the selection of cases. Hence, we will recommend the popular cases for them to familiarize themselves with the games. Their performance in these popular cases is a good gauge of their personal strengths and weaknesses, and will serve as a starting point for more targeted recommendations in the future.

  3. Recommend cases in the sub-specialties they are weakest in, based on the cases they have previously done: Through our model, we aim to tailor instruction to individual learners' needs. By recommending that students practise topics they are weaker in, we believe the medical students will be able to learn in a more holistic manner.
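The three principles can be illustrated with a minimal sketch. The case fields (`level`, `sub_specialty`, `popularity`) and the weakness heuristic (lowest mean score) are assumptions for illustration; the actual model lives in the Medsense codebase and is not shown on this wiki.

```python
from collections import defaultdict

def recommend_cases(year_of_study, attempts, cases, top_n=3):
    """attempts: list of (sub_specialty, score) pairs from past games.
    cases: list of dicts with 'level', 'sub_specialty', 'popularity'."""
    # Principle 1: beginner cases for years 1-3, advanced for years 4-5.
    level = "beginner" if year_of_study <= 3 else "advanced"
    pool = [c for c in cases if c["level"] == level]

    # Principle 2: new users with no history get the most popular cases.
    if not attempts:
        return sorted(pool, key=lambda c: c["popularity"], reverse=True)[:top_n]

    # Principle 3: otherwise target the sub-specialty with the lowest mean score.
    scores = defaultdict(list)
    for sub, score in attempts:
        scores[sub].append(score)
    weakest = min(scores, key=lambda s: sum(scores[s]) / len(scores[s]))
    targeted = [c for c in pool if c["sub_specialty"] == weakest]
    return (targeted or pool)[:top_n]
```

A third-year student with no attempts would simply receive the most popular beginner cases; once they have a history, recommendations shift toward their weakest sub-specialty.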

Professor Recommendation Model

For Professors, knowing which areas students are commonly weak in will assist them in their course planning. We would like to encourage Professors to upload more cases on the students' weaker topics. The recommendation model is guided by the following principles:

  1. The recommended sub-specialties are based on the overall cohort performances.

  2. Recommend sub-specialties with the least number of cases if there is insufficient data to determine the weaker topics: Professors will not be able to gauge students' performances in areas where there are no cases for students to try. The model will encourage them to upload cases in those sub-specialties, so that they can get a more accurate idea of the students' weaknesses.

  3. Recommend sub-specialties where the number of pending cases is above a certain threshold: Doing so will encourage Professors to vet the pending cases, which will increase the number of cases for students to practice on.
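The professor-side principles can be sketched in the same style. The threshold values and field names below are assumptions chosen for illustration; the page mentions a pending-case threshold but does not state its value.

```python
PENDING_THRESHOLD = 5   # assumed cutoff for "too many pending cases"
MIN_ATTEMPTS = 20       # assumed minimum data before trusting cohort scores

def recommend_subspecialties(stats):
    """stats: {sub_specialty: {'attempts', 'mean_score', 'case_count', 'pending'}}
    Returns sub-specialties a professor should upload or vet cases for."""
    recs = []
    # Principle 3: flag sub-specialties with many pending (unvetted) cases.
    recs += [s for s, d in stats.items() if d["pending"] > PENDING_THRESHOLD]

    with_data = {s: d for s, d in stats.items() if d["attempts"] >= MIN_ATTEMPTS}
    if with_data:
        # Principle 1: recommend where the cohort's mean performance is weakest.
        recs.append(min(with_data, key=lambda s: with_data[s]["mean_score"]))
    else:
        # Principle 2: insufficient data -- recommend where there are fewest cases.
        recs.append(min(stats, key=lambda s: stats[s]["case_count"]))
    return list(dict.fromkeys(recs))  # deduplicate, preserving order
```

The fallback in principle 2 matters because a sub-specialty with no cases produces no performance data at all, so the model can only encourage professors to seed it with content first.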

Quality of product

Intermediate Deliverables:

Stage Specification
Project Management Schedule
Bug metrics
Task metrics
Risk management
Change management
Requirements Overview
Analysis Use case diagram
Architecture diagram
Technologies used
Design Prototypes
Testing User Acceptance Test 1 (11 - 13 Feb 2018)
User Acceptance Test 2 (4 - 10 Apr 2018)


  • Creation of test cases during development.
  • Functionality testing after completion of function.
  • Regression testing at the end of every iteration.
  • 2 UATs completed by Final Presentation.
  • 1 UAT has been completed after Midterms. To view the results of this UAT, click here.


Team Reflection:

Throughout our FYP journey, we have tried our best to implement the changes that our clients requested, even when it meant additional coding hours or reworking the project schedule multiple times, because we place emphasis on our project's value to the clients. However, as we approached the completion of our project, we had to scrutinize every change more closely, finding a balance between providing the best value to the clients and managing our workload and our confidence of completing the project on time.

Individual Reflections:

Our stakeholders have been actively involved in our project right from the start. Apart from meeting our supervisor regularly, we have been in constant communication with our other stakeholders. Working with a team of 17 entails certain challenges. I have learnt to manage different perspectives and expectations, as well as balance the workload and priorities of my team.

Chin Rui:
It has been a technically challenging journey. The application architecture has undergone multiple major changes, especially in our security configurations, to ensure better quality and continuity for our clients. I have also discovered the difficulties of planning a software architecture while taking budget, skill and technical constraints into account.

As the Quality Assurance lead, I had to scrutinize every corner case in the application and constantly come up with more test cases. After our midterm presentation we conducted load testing, a User Acceptance Test, and security testing. I learnt that observing users during the UAT is imperative, as some users do not notice certain issues, or do not voice them. The UAT is also an opportunity for us to see whether the functions are indeed useful for the users.

Ming Rui:
After our Midterm presentation, we completely changed the UI of our application. We went back to the paper and Axure prototypes and thought of ways to improve the user experience. The final designs are a product of multiple revamps, each better than the previous. I have learnt the importance of gathering feedback from a larger group of users, instead of just the Medsense team from NUS.

Due to the various changes we have made over the course of our project, we had to do some refactoring. I have learnt that regardless of the time and effort we put into getting the schema right at the start of the project, changes are inevitable during the development process. Being the Full-Stack Support has definitely challenged me to improve my technical skills and resilience as I dealt with the bug infestation that came with the refactoring.

As a backend developer, I have realised that it is important to integrate the frontend components seamlessly with the server-side logic, as well as to design and implement sound data storage solutions. It has been a great experience being the technical lead, and I am glad that I could help my team members with the technical issues they faced.