
IS480 Team wiki: 2017T2 Mavericks Midterms Wiki

Revision as of 08:39, 15 February 2018 by Jamie.chew.2015 (talk | contribs)

Project Progress Summary

Midterm slides will be posted here by 18 February 2018.

Deployed site link: Student User | Admin

This project is split into 15 iterations; we are currently in Iteration 11 (12 February to 25 February 2018). We have completed 96% of our core functions and 20% of our secondary functions. Over these 11 iterations we have conducted three user testing sessions with students from Singapore Management University and student interns from Ngee Ann Polytechnic, and reached a major milestone by conducting our Proof of Deployment lab session at Ngee Ann Polytechnic with 15 users.
One unexpected event was underestimating the time needed to complete certain functionalities, both because some APIs were unavailable and because of the team's unfamiliarity with the technologies used. To resolve this, the team met to discuss and revise the project schedule and ensure all deadlines are met.
In addition, we have removed one good-to-have scope item: Chinese language support. Given the nature of our project and Singapore's English-based education system, the team and sponsor decided to drop this feature; the time allocated to it will instead go towards enhancements to Admin Module II. The team is therefore confident of completing the project.

Project Highlights:

  • Completion of Features
    • Core functions: 96% (25 of 26 features)
    • Secondary functions: 20% (1 of 5 features)
  • Requirement changes
    • Added new functionality for Customer Request module: Standing Instruction
    • Revised process flow for Educational module
    • Included new features in Admin Analytics Dashboard
      • Descriptive Analysis
      • Cluster Analysis
  • Completed 3 User Testing sessions at Singapore Management University
  • Conducted Proof of Deployment at Ngee Ann Polytechnic
    • 15 students from Financial Informatics course
  • Achieved and exceeded midterms X-factor

Project Management

Iteration Progress: 11 of 15
Features Completion: 83% (26 out of 31 features)
Confidence Level: 100%

Project Status:

The following diagram shows the completed functionalities; those highlighted in red are outstanding and will be completed by Iteration 14.

Mv-completed-scope-1.png


Function | Scope | Status | Confidence Level (0-1) | Comments
Account Module | Core | Fully deployed and tested (100%) | 1 | Reviewed and accepted by sponsor
Customer Request Module | Core | Fully deployed and tested (100%) | 1 | Added new feature: Standing Instruction; reviewed and accepted by sponsor
Chat Module | Core | Fully deployed and tested (100%) | 1 | Reviewed and accepted by sponsor
AI Module | Core | Fully deployed and tested (100%) | 1 | Reviewed and accepted by sponsor
Security Module | Core | Fully deployed and tested (100%) | 1 | Reviewed and accepted by sponsor
Educational Module | Core | 75% completed | 1 | Remaining feature: request-reply details of API
Admin Module I | Core | Fully deployed and tested (100%) | 1 | Reviewed and accepted by sponsor
Admin Module II | Secondary | 33% completed | 1 | Remaining features: Descriptive & Cluster Analysis
Advanced Search Module | Secondary | 0% completed | 1 | Remaining features: Search by Keyword & Date
Language Module | Good-to-Have | - | - | Removed from scope

Project Schedule (Plan Vs Actual):

Planned Project Schedule

MV-timeline1.jpeg

Changes Made on Planned Project Schedule

Mv-Changes to schedule.001.jpeg

Actual Project Schedule

Mv-timeline-4.001.jpeg

Project Metrics:

Task Metrics:

MV-TaskMetrics.PNG
MV-TaskOverview.PNG
MV-TaskActions.PNG

Bug Metrics:

Mv-bugs.png
MV-BugScore.PNG

Project Risks:

1. Team members are unfamiliar with human–computer interaction technologies based on natural language conversations. (Likelihood: High, Impact: High, Category: Technical Unfamiliarity)
   Mitigation: Lead developers will perform intensive research and guide the team. The Project Manager (PM) and Tech Architect (TA) will build the project timeline together and allocate more time to unfamiliar tasks.
2. Voice-to-text and natural language processing are implemented by forwarding requests to a cloud-based speech recognition service. This exposes us to service failure in unforeseen circumstances, such as the external service becoming unavailable through malicious attacks, natural disasters or data wipes. (Likelihood: Medium, Impact: High, Category: External Risk)
   Mitigation: Instead of relying on a single cloud-based speech recognition service for voice-to-text, the team will adopt both Bing Speech and Google Cloud Speech; for natural language processing we will use DialogFlow and Wit.AI. This redundancy removes the single point of failure.
3. Our tBuddy application must integrate with the existing SMU tBank modules to simulate real-life scenarios of performing bank requests. (Likelihood: Medium, Impact: Medium, Category: Technical Unfamiliarity)
   Mitigation: The team will work closely with our sponsor to provide progress updates and obtain up-to-date APIs for all necessary tBank modules.
4. Users (faculty and students) may be unfamiliar with conversational banking and face difficulties using the application during their academic work. (Likelihood: Medium, Impact: Medium, Category: Users)
   Mitigation: Conduct user testing regularly with stakeholders so they can give the team feedback for improvement and familiarise themselves with the technology.
5. The SMU tBank API may not work at times for various possible reasons. (Likelihood: Medium, Impact: High, Category: External Risk)
   Mitigation: Maintain proper communication with other teams and ensure that classes do not run the lab session at the same time; schedule an urgent meeting with the sponsor to run through functionalities and discover the underlying causes.
6. Since AI is constantly evolving, DialogFlow may sometimes recognise intents wrongly. (Likelihood: High, Impact: Medium, Category: External Risk)
   Mitigation: Conduct user testing regularly with stakeholders so they can give the team feedback for improvement and familiarise themselves with the technology.
7. The voice-recording function depends on users' own mobile phones, so browser updates may leave users unable to record; browser-based voice recording is still relatively new, especially on iOS devices. (Likelihood: Medium, Impact: Medium, Category: External Risk)
   Mitigation: The team keeps up with the latest browser updates so that the app can be changed accordingly.

Technical Complexity:

1) Speech-to-text Speed Optimisation

  • Version 1
    • Duration Range: 8 to 26s
    • Task performed serially
    • User must finish recording before the voice is uploaded
  • Version 2
    • Duration Range: 4 to 10s
    • Voice recording and voice data transmission done concurrently between client & server
    • Enabled by Websocket protocol implementation
    • Supports full duplex transmission
  • Version 3 (Work-in-Progress)
    • Duration Range: 100ms to 6s
    • Voice recording and voice data transmission done concurrently across client, server & Google
    • Enabled by Websocket implementation
    • Supports full duplex transmission
    • Integrating Google-client gRPC lib to receive Websocket stream
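The streaming idea behind Versions 2 and 3 can be sketched independently of any specific WebSocket library: the server appends audio chunks as binary frames arrive instead of waiting for the complete recording. This is an illustrative sketch, not the team's actual code; the class and method names are hypothetical.

```java
import java.io.ByteArrayOutputStream;

// Sketch of server-side chunk accumulation for streamed voice data.
// In the real application this logic would sit inside a WebSocket
// endpoint's binary-message handler.
class AudioChunkBuffer {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private boolean closed = false;

    // Called once per incoming binary frame while the user is still recording.
    public synchronized void onChunk(byte[] chunk) {
        if (closed) throw new IllegalStateException("stream already closed");
        buffer.write(chunk, 0, chunk.length);
        // In Version 3, each chunk would additionally be forwarded
        // straight to the speech recogniser over gRPC at this point.
    }

    // Called when the client signals end-of-recording.
    public synchronized byte[] close() {
        closed = true;
        return buffer.toByteArray();
    }
}
```

Because chunks are processed as they arrive, transcription can begin while the user is still speaking, which is what cuts the duration range down in Versions 2 and 3.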

2) Maintaining User-DialogFlow Conversation

  • HTTP requests are stateless
  • To maintain conversation state, we:
    • Store sessions in an application-scoped variable
    • Pass the user id as metadata when forwarding requests to DialogFlow
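The two points above can be sketched as a small registry that maps each user id to a stable session id, which is then attached to every request forwarded to DialogFlow. This is a minimal illustration; the class and method names are hypothetical, not the team's actual implementation.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of keeping a DialogFlow conversation stateful over stateless
// HTTP: the same user always gets the same session id, so DialogFlow
// can tie follow-up intents to the right conversation context.
class SessionRegistry {
    private final ConcurrentHashMap<String, String> sessions = new ConcurrentHashMap<>();

    // Returns an existing session id for this user, creating one on first use.
    public String sessionFor(String userId) {
        return sessions.computeIfAbsent(userId, id -> UUID.randomUUID().toString());
    }
}
```

Holding the map in an application-scoped variable means the mapping survives across individual requests, which is exactly what stateless HTTP does not give us for free.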

3) New technologies

  • Web Audio Recording
    • Cross-browser compatibility and limitations
    • MediaRecorder API does not work on iOS
    • We use the Web Audio API instead, which means we have to
      • Manually encode PCM audio samples
    • AudioContext.createScriptProcessor buffer size is inconsistent across browsers
      • A smaller buffer is faster
      • A larger buffer gives better sound quality
      • After experimentation, 8192 samples proved optimal for our use case
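The speed side of the buffer-size trade-off is essentially the callback interval: a script-processor buffer of N samples fires roughly every N / sampleRate seconds. A quick check of the numbers, assuming a typical 44.1 kHz sample rate (actual rates vary by device):

```java
// Buffer size vs. callback latency for a script-processor node.
// latency (ms) = 1000 * bufferSamples / sampleRateHz
class BufferLatency {
    public static double latencyMs(int bufferSamples, int sampleRateHz) {
        return 1000.0 * bufferSamples / sampleRateHz;
    }
}
```

At 44.1 kHz, an 8192-sample buffer fires roughly every 186 ms, while a 256-sample buffer fires about every 6 ms but delivers far more, smaller chunks to encode and transmit.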
  • Google Speech
    • gRPC protocol
      • Had to learn and understand how to perform gRPC requests
    • Unclear library dependencies
      • Too many JAR files to keep track of
      • Repository is in Alpha stage; SNAPSHOT fixes are not published as releases
    • Invalid server authentication
      • The example provided does not integrate with tBuddy
  • DialogFlow
    • Unclear documentation
      • Follow-up intents
      • Request lifespan and context
      • This resulted in repeated responses
    • Experimentation required
      • Query different requests and verify that the response is as expected
      • Try multiple intents and ensure that follow-up intents stick to the correct intent

4) Deployment Configuration

  • Sponsor requires us to maintain dual deployments
    • Staging
    • Production
  • Limitations of DialogFlow's webhook configuration
    • Webhook requests can only be forwarded to one address
    • Need to maintain a separate project for each deployment
    • Each project has a unique API key
  • Conventional deployment configuration
    • Read configurations from a .properties file
  • DialogFlow's Java-client library compiles the API key variable as a final constant
    • Cannot simply read the value from a .properties file and override it
    • Inserting overriding API keys is non-trivial
      • Lack of documentation
      • Requires web.xml for parameter initialisation
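The override problem above can be sketched as a resolution order: a web.xml init-param (per deployment) wins over the .properties file, which wins over the library's compiled-in default. The key name apiaiKey is the one the team had to override; the helper class itself is an illustrative sketch, not the actual deployment code.

```java
import java.util.Map;
import java.util.Properties;

// Sketch of resolving the DialogFlow API key per deployment.
// Priority: web.xml init-param > .properties entry > compiled default.
class ApiKeyResolver {
    public static String resolve(Map<String, String> initParams,
                                 Properties props,
                                 String compiledDefault) {
        // Highest priority: a value configured per deployment in web.xml.
        if (initParams.containsKey("apiaiKey")) {
            return initParams.get("apiaiKey");
        }
        // Fall back to the conventional .properties file, then the default.
        return props.getProperty("apiaiKey", compiledDefault);
    }
}
```

Keeping the key in web.xml lets staging and production ship the same WAR while pointing at their respective DialogFlow projects.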

The challenge was that the team had to experiment continuously by trial and error; the actual key name to override turned out to be apiaiKey.

Quality of product

System Architecture Diagram


Mv-archi.png


Intermediate Deliverables:

Stage | Specification
Project Management | Minutes, Schedule, Metrics, Risk Mitigation
Requirements | Overview, Scope, Scenarios
Analysis | System Architecture Diagram, Technologies Used
Design | ER Diagram, Prototyping Progress
Testing | UT1 (6 Nov 2017), UT2 (26 Jan 2018), UT3 (13 Feb 2018)

Testing:

  • Internal testing is performed for every new function developed
  • Team Mavericks has scheduled five user testing sessions in total
    • We have completed three as of Iteration 11
    • As UT3 was conducted on 13 Feb 2018, the team is still consolidating its results

For more information regarding user testing and to view the results, please click here.

Reflection

Team Reflection

This project has made us realise that positive synergy is the holy grail of teamwork. Despite our different backgrounds, capabilities and perspectives, we built common goals: to excel and to get the most out of this learning process. Through open communication and respect for one another, we were able to reach consensus when solving problems. Since acceptance, the team has improved its overall performance in both project management and web application development, and we achieved and exceeded our midterm X-factor. This is highly encouraging, and moving forward Team Mavericks is even more motivated to push through and ensure the project's success.

Individual Reflections

Jamie's Reflection
I have learnt the importance of my role in team and stakeholder management, as well as in maintaining overall responsibility for the project. As much as communication is essential within a team, it is also important for me to recognise every member's strengths and weaknesses, so that everyone can achieve their fullest potential towards our common goal.


Yi Xiang's Reflection

We should be open to change in order to improve. Even if we have an idea we believe to be the best, we should remain open to feedback and make the necessary changes; the ones who decide whether an idea is good are the target users. Thus, it is important to have proof of deployment and user testing to gather users' feedback for improvement.


Gerald's Reflection
As the Tech Architect, there is always something new to learn and best practices we ought to adopt. It is important that I stay up to speed with the right implementation to ensure that the application performs according to expectation. I have learnt that beyond implementing a feature, we ought to implement it well so that it is genuinely usable.


Bertran's Reflection
What I have learnt is that in a software project, non-functional requirements are extremely important. In fact, it might even be more difficult to fulfil a non-functional requirement, and these are often overlooked or underestimated in project plans.


Yi An's Reflection