
IS480 Team wiki: 2017T1 Noir Knights Final Wiki



Project Progress Summary

Deployed Link: http://54.213.80.240/overwritemaven/

PROJECT HIGHLIGHTS!

Icon links (from the original wiki page): Midterms, UAT results, A/B testing results, internal/supervisor/client meeting minutes, the project poster (Media: OverwritePosterPrint.jpg), and Finals.

Project Management

Project Scope

Acceptance: Actual Project Scope.jpg

Midterms: Revised Project Scope.JPG

Final Completed Scope: Nk scope(revised).png

Major Scope changes

  • Moved the payment module, discussion board, and flagging to good-to-have functions, as the client wanted to add more specifications and have the team focus on the exam and report modules

Project Schedule

View detailed Project Schedule here!

Planned: Nk planneds.png

Actual: Nk actuals.png

Schedule Highlights

  • Removed UAT1, as it was an internal UAT, as advised by Prof Ben during acceptance
  • Renamed the remaining UATs to UAT1, UAT2, UAT3 and UAT4
  • Rescheduled UAT1 and UAT2 due to the client's availability, as we are testing with Basecamp Learning Centre students
  • Included A/B testing to check algorithm effectiveness

Metrics

Task Metrics

View our Task Metrics here!

Nk tm.png
Task Metrics Highlights

Iteration 4 (Score: 90%, Status: Completed)
  • Action: Estimates are generally accurate and on track. Proceed as per normal.
  • Slightly delayed by the uploading of pictures to the database; the team was unfamiliar with Cloudinary and hence took longer to upload the pictures.
  • Follow-up action: Task was pushed to the next iteration.

Iteration 5 (Score: 78%, Status: Completed)
  • Action: Need for greater efficiency. Re-estimate the time needed to perform tasks and consider delaying tasks if necessary.
  • Slightly delayed by the deployment of the Python script onto AWS, which pushed back the deployment, testing, and debugging of the application.
  • Follow-up action: Task was pushed to the next iteration.

Iteration 7 (Score: 90%, Status: Completed)
  • Action: Estimates are generally accurate and on track. Proceed as per normal.
  • Slightly delayed by a bug where the assessment page did not show the question selected by the machine learning algorithm.
  • Follow-up action: Task was pushed to the next iteration.

Iteration 9 (Score: 86%, Status: Completed)
  • Action: Estimates are generally accurate and on track. Proceed as per normal.
  • Realised the need to use Spring Security for email verification; emailing of the report was pushed to the next iteration, and more research is to be done on implementing Spring Security.
  • Follow-up action: Task was pushed to the next iteration.

Iteration 11 (Score: 89%, Status: Completed)
  • Action: Estimates are generally accurate and on track. Proceed as per normal.
  • After meeting with Prof Hoi, the team decided to develop a separate platform for A/B testing to ensure that test users' data is singled out.
  • Follow-up action: The history page task was pushed to the next iteration.
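How the percentage score itself is computed is documented on the linked Task Metrics page; as a minimal sketch, assuming the score is estimated effort over actual effort in percent (this definition is an assumption for illustration, not the team's documented formula):

    def task_metric_score(estimated_hours, actual_hours):
        """Hypothetical task metric: estimated effort / actual effort, in percent.

        A score near 100% means estimates were accurate; a low score
        (e.g. iteration 5's 78%) means tasks took longer than estimated.
        """
        return round(estimated_hours / actual_hours * 100)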

Bug Metrics

View our Bug Metrics here!

Bug Score: Bugmetrics.jpg

Bug Distribution based on Severity: Bugsbreakdown.jpg
Iteration 5 (Bug Score: 12; 2 High, 2 Low)
  • Most of the bugs in this iteration were due to the deployment of the Python script.
  • Action taken: Stop current development and resolve the bug immediately; the Project Manager reschedules the project.

Iteration 9 (Bug Score: 14; 2 High, 4 Low)
  • The MAB was not generating questions, and the radar chart was not showing.
  • Action taken: Stop current development and resolve the bug immediately; the Project Manager reschedules the project.
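As a sanity check on the scores above, a minimal sketch of the bug score computation, assuming severity weights of Low = 1 and High = 5 (inferred from the two iterations shown; the team's full weighting scheme may differ):

    # Severity weights inferred from the table above; an assumption,
    # not the team's documented scheme.
    WEIGHTS = {"Low": 1, "High": 5}

    def bug_score(bugs_by_severity):
        """Weighted bug score for an iteration, e.g. {"High": 2, "Low": 2} -> 12."""
        return sum(WEIGHTS[severity] * count for severity, count in bugs_by_severity.items())

    assert bug_score({"High": 2, "Low": 2}) == 12  # iteration 5
    assert bug_score({"High": 2, "Low": 4}) == 14  # iteration 9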

Technical Complexity

Scoring Algorithm

  • Overwrite differs from traditional marking systems, in which all students are given a fixed amount of time for a fixed set of questions
  • The scoring algorithm aims to accurately measure a student's competency on every question and to document the student's data trail precisely
Nk techD.png
  • As seen in the diagram, after a student answers a question, the app first gathers factors such as the time taken and the level of difficulty.
  • The app then queries the database for information such as the average time taken on the question by all students and the number of questions answered correctly consecutively.
  • Using these four factors and the algorithm, a score for that question is calculated. The average time taken on the question by all students and the count of consecutive correct answers are also updated; a minimal sketch of such a scoring function follows this list.
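A minimal sketch of a per-question scoring function over these four factors, in Python; the way the factors are combined below is purely illustrative, since the actual formula is not documented on this page:

    def score_question(answered_correctly, time_taken, difficulty, avg_time, streak):
        """Illustrative score from the four factors named above.

        Only the inputs (time taken, difficulty, class-average time,
        consecutive-correct count) come from the write-up; the weighting
        is an assumption.
        """
        if not answered_correctly:
            return 0.0
        speed_bonus = min(avg_time / max(time_taken, 1), 2.0)  # faster than average earns more, capped
        streak_bonus = 1 + 0.1 * streak                        # reward consecutive correct answers
        return difficulty * speed_bonus * streak_bonus

    def update_average_time(avg_time, attempts, time_taken):
        """Update the question's running average time across all students."""
        return (avg_time * attempts + time_taken) / (attempts + 1)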


Reinforcement Learning

  • The purely arbitrary scoring algorithm above is not, on its own, a comprehensive way to determine a student's competency in a subject with many topics.
  • A holistic and dynamic testing environment is required to better analyse a student's full capabilities and to imitate examination conditions
  • We utilise a Multi-Armed Bandit algorithm to predict a student's weak topics within the smallest possible number of trials
Nk mlD.png
  • Every question answered by the student triggers a run of the Upper Confidence Bound 1 (UCB1) algorithm.
  • The algorithm evaluates the student's performance and ranks all topics by their growth potential and likelihood of generating the most 'rewards' (i.e. the weakest topics)
  • The machine then decides between 'exploration' and 'exploitation': whether to generate a question from a different topic or to keep testing the same topic
  • This interactive element between student and machine on every question allows reinforcement learning to occur, as the machine can predict the student's strengths and weaknesses more accurately after each round; see the sketch after this list.
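A minimal sketch of UCB1 topic selection, assuming per-topic statistics are kept as (attempts, total reward) and that the 'reward' measures observed weakness as described above; function and variable names are illustrative:

    import math

    def ucb1_select_topic(stats, total_questions):
        """Pick the next topic via Upper Confidence Bound 1 (UCB1).

        stats maps topic -> (attempts, total_reward); a higher mean
        reward means the topic looks weaker and more worth testing.
        """
        best_topic, best_bound = None, float("-inf")
        for topic, (attempts, total_reward) in stats.items():
            if attempts == 0:
                return topic  # exploration: try every topic at least once
            mean_reward = total_reward / attempts  # exploitation term
            bonus = math.sqrt(2 * math.log(total_questions) / attempts)  # exploration term
            if mean_reward + bonus > best_bound:
                best_topic, best_bound = topic, mean_reward + bonus
        return best_topic

    def record_result(stats, topic, reward):
        """Update the chosen topic's statistics after each answered question."""
        attempts, total_reward = stats[topic]
        stats[topic] = (attempts + 1, total_reward + reward)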


Deployment

Deployed Link: http://54.213.80.240/overwritemaven/

Testing

We conducted one round of A/B testing and four rounds of User Acceptance Testing (UAT).

A/B Testing 1

  • Venue: St. Hilda's Primary School
  • Date: 29th October 2017 - 5th November 2017
  • Participants: Primary 6 students
  • Number of Participants: 18
  • Objectives:

To test which algorithm provides the greatest improvement after 100 assessment questions are completed. View our A/B testing results here!

User Acceptance Testing 1

  • Venue: Basecamp Learning Centre
  • Date: 17th August 2017
  • Participants: Secondary 4 students
  • Number of Participants: 3
  • Objectives:

To gather feedback on the user interface of the developed functions from prospective users
To detect usability issues based on user behavior
To improve the web application based on UAT results. View our UAT1 results here!

User Acceptance Testing 2

  • Venue: Basecamp Learning Centre
  • Date: 12th September 2017
  • Participants: Secondary 1 students
  • Number of Participants: 2
  • Objectives:

To gather feedback on the user interface of the developed functions from prospective users
To detect usability issues based on user behavior
To improve the web application based on UAT results. View our UAT2 results here!

User Acceptance Testing 3

  • Venue: SIS CR3-1
  • Date: 29th September 2017
  • Participants: SIS students
  • Number of Participants: 14
  • Objectives:

To gather feedback on the user interface of the developed functions from prospective users
To detect usability issues based on user behavior
To improve the web application based on UAT results. View our UAT3 results here!

User Acceptance Testing 4

  • Venue: Different homes/schools
  • Date: 10th November 2017 - 14th November 2017
  • Participants: Primary 6 students
  • Number of Participants: <to be confirmed>
  • Objectives:

To gather feedback on the user interface of the developed functions from prospective users
To detect usability issues based on user behavior
To improve the web application based on UAT results. View our UAT4 results here!