IS480 Team wiki: 2017T1 Codezilla MidTerm Wiki

Project Progress Summary

Midterm Slides: Midterm Slides
Deployed Site Link: http://impactlaunch.space

Project Highlights:
To learn more about our Project: Project Description Link
Our project is divided into a total of 15 iterations and we are currently in Sprint 11 (02 October 2017 to 18 October 2017), with 73% of the project completed so far. We used our buffer sprint in Sprint 10 to complete Project Management Modules 1 and 2. Over these 11 sprints, we have completed 2 rounds of User Testing, with participants including actual professionals from various industries.
An unexpected event was that the Project Management Module was not completed in Sprints 8 and 9 because we underestimated the effort required. As such, we used our buffer sprint in Sprint 10 to complete it.

  • Improved the Resource Request statuses by adding coloured labels, making requests easier to read and understand
  • Improved the landing page user interface
  • Created and included a guide for resource management
  • Built the Project Management Space, which includes User Management and Document Uploading
  • Added a Notifications System to the Navigation bar



Deliverables

Project Status

Codezillastatus.png


Project Scope

Module | Status | Confidence Level (0-1) | Comments
Account Modules (1 and 2) | 100% | 1 | NIL
Project Module | 100% | 1 | NIL
Resource Module | 100% | 1 | Improvements were needed after UAT1 to the design and to how a resource is understood
Search Modules (Project and Resource) | 100% | 1 | NIL
Project Request Module | 100% | 1 | NIL
Resource Request Module | 100% | 1 | NIL
Project Management Modules (1 and 2) | 100% | 1 | Tested in UAT2 in Sprint 11; there are comments to improve it
Administration Module | 0% | 0.7 | In Progress
Project Matching Module | 80% | 1 | In Progress
Feedback and Ratings Module | 0% | 0.5 | In Progress
Communications Module | 0% | 0.5 | In Progress
Analytics Module | 0% | 0.5 | In Progress


There have been no changes made to our Project Scope since Acceptance. Below is our scope and the status of each module.

Codezillascopemidterms.png


Project Schedule

To view all versions of the Project Schedule: Project Schedule Link

Summary Of Changes Made To Project Schedule:

Iteration | Planned | Actual | Comments
9 | 4th Sept 2017 (Sprint 9) | 2nd Oct 2017 (Sprint 11) | UAT 2 pushed back from Sprint 9 to Sprint 11
8, 9 | 21st Aug, 4th Sept 2017 (Sprint 8, 9) | 18 Sept 2017 (Sprint 10 - Buffer) | Buffer Sprint 10 was used for the Project Management Module
11 | 18 Sept 2017 (Sprint 11) | 02 Oct 2017 (Sprint 11) | Sprint 11 initially did not include X Factor recruitment

Planned vs actual schedule:

Codezilla timeline 8th v2.PNG

Codezilla s11 schedule.png


Project Metrics

To view more details about the metrics used: Project Metrics Link

Sprint Velocity:


Sprint Velocity is a metric that predicts how much work Team Codezilla can successfully complete within our two-week sprint. It is calculated at the end of each sprint by summing the story points of the completed user stories. It measures how much we are getting done each sprint, so that we can estimate future sprints more accurately.
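
As a rough illustration only (this is not our actual tracking code, and the story names below are made up), the velocity for a sprint is simply the sum of story points over the user stories completed in that sprint:

// Hypothetical shape of a user story in our sprint backlog.
interface UserStory {
  title: string;
  points: number;      // estimated story points
  completed: boolean;  // marked done by the end of the sprint
}

// Sprint velocity = sum of story points over completed stories only.
function sprintVelocity(stories: UserStory[]): number {
  return stories
    .filter((story) => story.completed)
    .reduce((total, story) => total + story.points, 0);
}

// Example: 3 + 5 points completed, the 8-point story is carried over => velocity of 8.
const velocity = sprintVelocity([
  { title: "Coloured request labels", points: 3, completed: true },
  { title: "Notification bell", points: 5, completed: true },
  { title: "Document uploading", points: 8, completed: false },
]);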

Codezillasprintvelocitybar.png


1234codezillavelocityline.png


Burndown Chart:


The Scrum burndown chart is a graphic representation of the rate at which work is completed and how much work remains to be done. The chart slopes downward over the sprint duration as story points are completed. It shows the team's progress towards the Sprint Goal, not in terms of time spent but in terms of how much work remains.

Codezillasprint11burndown.png


Bug Metrics:


Bugmetricscodezillaupdate.png

Impact Score

Severity | Description
Low Impact (Score: 1) | Inconsequential, e.g. a simple typo or minor user interface misalignment.
High Impact (Score: 5) | Non-critical functionalities are not working, but the system still runs.
Critical Impact (Score: 10) | The system or a core functionality is down.


Mitigation Plan
Total Score = # Bugs (Low) x 1 + # Bugs (High) x 5 + # Bugs (Critical) x 10

Bug Score | Action
Score <= 10 | Fix during buffer time.
10 < Score <= 20 | Fix during planned debugging time in the current iteration.
Score > 20 | Halt current development and fix the bug(s) immediately. Reschedule if necessary.
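
As an illustration of the mitigation plan above, the weighted bug score and the corresponding action can be derived as follows. This is a sketch only; the function names are ours, not part of the project code.

// Bug severity weights, matching the Impact Score table above.
const WEIGHTS = { low: 1, high: 5, critical: 10 } as const;

type Severity = keyof typeof WEIGHTS;

// Total Score = # Bugs (Low) x 1 + # Bugs (High) x 5 + # Bugs (Critical) x 10
function bugScore(counts: Record<Severity, number>): number {
  return (
    counts.low * WEIGHTS.low +
    counts.high * WEIGHTS.high +
    counts.critical * WEIGHTS.critical
  );
}

// Mitigation plan thresholds from the table above.
function mitigationAction(score: number): string {
  if (score <= 10) return "Fix during buffer time.";
  if (score <= 20) return "Fix during planned debugging time in the current iteration.";
  return "Halt current development and fix the bug(s) immediately.";
}

// Example: 3 low-impact bugs and 1 high-impact bug => score 8 => fix during buffer time.
console.log(mitigationAction(bugScore({ low: 3, high: 1, critical: 0 })));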


Project Risks

Risk Category Table:


Riskscodezilla99.png


Risk Management:


Riskmanagementtablecodezilla.png


Technical Complexity

We used a Matching Algorithm to carry out our project recommendations. This is done through:

  • Similarity to the specified user
  • Location of the user, i.e. geofencing (see the distance-check sketch after this list)
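
The geofencing part is essentially a distance check between the user's last known coordinates and a project's location. A minimal sketch of such a check using the haversine formula is shown below; the function names and the radius parameter are illustrative, not taken from our codebase.

// Haversine great-circle distance between two (lat, lon) points, in kilometres.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Geofencing: only recommend a project if the user's last known position
// lies within the given radius of the project's location.
function withinGeofence(
  user: { lat: number; lon: number },
  project: { lat: number; lon: number },
  radiusKm: number
): boolean {
  return haversineKm(user.lat, user.lon, project.lat, project.lon) <= radiusKm;
}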


Our team made use of the OneSignal platform to send out notifications through Firebase Cloud Messaging and APNs (Swift). Subscriptions are listened for on the login page and upon each login. The code below initialises the notification subscription. On each subscription change, a POST call is made to the database to store a unique device ID for each subscriber.

Codezillatechcom1.png
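
For readers who cannot make out the screenshot, the gist of the initialisation is sketched below. This assumes the OneSignal Web Push SDK is loaded on the page; the /api/subscribers endpoint is a placeholder for our own server route, not an actual name from our code.

// Sketch only: run once on the login page after the OneSignal SDK script has loaded.
declare const OneSignal: any; // global provided by the OneSignal SDK

OneSignal.push(() => {
  OneSignal.init({ appId: "YOUR-ONESIGNAL-APP-ID" }); // placeholder app id

  // Fired whenever the browser subscribes to or unsubscribes from push notifications.
  OneSignal.on("subscriptionChange", async (isSubscribed: boolean) => {
    const deviceId: string = await OneSignal.getUserId(); // unique device/player id

    // POST the unique device ID for this subscriber to our database.
    await fetch("/api/subscribers", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ deviceId, isSubscribed }),
    });
  });
});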


If it is a new subscriber, we log the user's latest geo-position in terms of longitude and latitude. However, if the user unsubscribes, that user's device ID is removed to prevent overhead.

Codezillatechcom2.png


If it is an existing subscriber, on each subsequent login we update the database with the user's last coordinates.

Codezillatechcom3.png
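
The geo-position handling described above can be sketched as follows, using the browser Geolocation API. The endpoint paths are again placeholders; the real calls are in the screenshots.

// Sketch: store or refresh the subscriber's last known coordinates on login,
// and remove the device ID when the user unsubscribes.
function recordSubscriberPosition(deviceId: string, isSubscribed: boolean): void {
  if (!isSubscribed) {
    // Unsubscribed: remove this device ID so we do not keep state for it.
    fetch(`/api/subscribers/${deviceId}`, { method: "DELETE" });
    return;
  }

  // Subscribed (new or returning): record the latest longitude and latitude.
  navigator.geolocation.getCurrentPosition((pos) => {
    fetch(`/api/subscribers/${deviceId}/location`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        latitude: pos.coords.latitude,
        longitude: pos.coords.longitude,
      }),
    });
  });
}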


Codezillatechcom4.png


Below are screenshots of notifications being pushed out:

Codezillatechcom5.png


Codezillatechcom6.png


Below is the scoring algorithm, run on a small dataset, that matches a project to similar users.

Codezillatechcom7.png


Codezillatechcom8.png


We obtain candidate users based on their similarities in several categories. Scores are calculated for candidates in each specific category:

  • Country
  • Project Area/ Cause
  • Resource Category

Similarity multipliers help to establish greater scores. The logic is that multiple similarities within one category count for more than the same similarities spread across separate categories.

  • For similarities across 2 categories, for example Country and Project Area/Cause, we make use of the ACROSS_SIMILARITY_MULTIPLIER (a simplified sketch of the scoring follows this list)
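
The actual scoring code is shown in the screenshots below. As a simplified sketch of the idea, with placeholder multiplier values rather than the ones we actually use:

// Sketch of the candidate-scoring idea; the constants are illustrative placeholders.
const SAME_CATEGORY_MULTIPLIER = 2.0;     // rewards multiple matches within one category
const ACROSS_SIMILARITY_MULTIPLIER = 1.5; // applied when matches span two categories

interface CandidateSimilarity {
  country: number;          // number of matching country attributes
  projectAreaCause: number; // number of matching project areas / causes
  resourceCategory: number; // number of matching resource categories
}

function candidateScore(sim: CandidateSimilarity): number {
  const counts = [sim.country, sim.projectAreaCause, sim.resourceCategory];

  // Base score: one point per individual similarity.
  let score = counts.reduce((total, c) => total + c, 0);

  // Multiple similarities within the same category count for more than the
  // same number of similarities spread across separate categories.
  for (const c of counts) {
    if (c > 1) score += (c - 1) * SAME_CATEGORY_MULTIPLIER;
  }

  // Similarities across two categories (e.g. Country and Project Area/Cause)
  // are boosted by the across-category multiplier.
  const categoriesMatched = counts.filter((c) => c > 0).length;
  if (categoriesMatched >= 2) score *= ACROSS_SIMILARITY_MULTIPLIER;

  return score;
}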


Codezillatechcom9.png


Codezillatechcom10.png


Codezillatechcom11.png



Quality of Product

Intermediate Deliverables

Area of Interest | Link
Project Management | Minutes, Changes, Risks, Schedule, Metrics
Requirements | Scope, Overview
Analysis | Use Case, Architectural Diagram
Design | Prototypes
Testing | User Testing Link


Testing

  • For every new function developed, we conduct internal testing
  • We also conduct regression testing
  • We planned to complete a total of 3 rounds of User Acceptance Testing (UAT)
  • We have conducted 2 rounds of UAT so far: one in Sprint 6 and one in Sprint 11


To learn more about the UAT: User Testing Link

User Testing 1:


Venue: SMU SIS GSR 2-2
Date: 03 August 2017
Time: 7:30PM
Duration: About 30 minutes per user
Number of Participants: 4
Objectives:

  • To gather feedback with regard to our User Interface
  • To understand any usability problems by observing participants' behaviour
  • To improve our web application

Scope:

  • Account Module 1 and 2
  • Project Module
  • Resource Module
  • Search Module


User Testing 2:

Venue: SMU SOL GSR B1-10
Date: 06 October 2017
Time: 11:00AM
Duration: About 30 minutes per user
Number of Participants: 8
Objectives:

  • To gather feedback with regard to our User Interface
  • To understand any usability problems by observing participants' behaviour
  • To improve our web application

Scope:

  • Project Request Module
  • Resource Request Module
  • Project Management Module 1 and 2


Deployment

Learn more about our application and explore the projects and resources that we have to offer by clicking on the link provided below. Feel inspired to pledge a resource or jumpstart your very own social impact project and make a difference.
Deployed Site Link: http://impactlaunch.space

Market Questionnaire

To better understand the market and the need for our application, we conducted a survey.
Below are some of our key findings:

Codezillamarketsurvey2.png


Codezillamarketsurvey1.png


Codezillamarketsurvey3.png


  • 67% of respondents do not know of any existing online platforms that allow them to offer their expertise
  • There is a good mix of both Project Owners (56%) and Resource Offerors (44%)
  • Participants were highly interested in multiple social impact causes; the highest-ranked were Health & Wellbeing, Technology and Children's Support

To view the Data: Market Questionnaire Results

Reflections

Team Reflections

Codezilla grppic.png

As a team, we have learnt that group synergy and proper communication are very important. Without them, it would be difficult to work together, carry out our various responsibilities and meet deadlines. When communication within the team breaks down, it becomes difficult to continue. One of the most important things is that we respect each other and are transparent about any issues, which enables us to work together well.

Individual Reflections

Codezilla indv reflections.png


Border2.png