IS480 Team wiki: 2017T2 Mavericks Midterms Wiki
- 1 Project Progress Summary
- 2 Project Management
- 3 Quality of product
- 4 Reflection
Project Progress Summary
Midterm slides and deployed site link will be posted here by 18 February 2018.
This project is split into 15 iterations; we are currently in Iteration 11 (12 February to 25 February 2018). We have completed 96% of our core functions and 20% of our secondary functions. Over these 11 iterations, we have conducted 3 user testing sessions with actual students from Singapore Management University and student interns from Ngee Ann Polytechnic. We have also completed a major milestone: our Proof of Deployment lab session at Ngee Ann Polytechnic, with 15 users.
One unexpected event was our underestimation of the time needed to complete certain functionalities, caused by the unavailability of certain APIs and the team's unfamiliarity with the technologies used. To resolve this, the team came together to discuss and revise the project schedule so that all deadlines will still be met.
In addition, our team has removed a good-to-have scope item: Chinese language support. Given the nature of our project and Singapore's English-based education system, our team and sponsor decided to remove this feature; the time allocated to it will instead be used for enhancements to Admin Module II. The team therefore remains confident of completing this project.
- Completion of Features
- Core functions: 96% (25 of 26 features)
- Secondary functions: 20% (1 of 5 features)
- Requirement changes
- Added new functionality for Customer Request module: Standing Instruction
- Revised process flow for Educational module
- Included new features in Admin Analytics Dashboard
- Descriptive Analysis
- Cluster Analysis
- Completed 3 User Testing sessions at Singapore Management University
- Conducted Proof of Deployment at Ngee Ann Polytechnic
- 15 students from Financial Informatics course
- Achieved and exceeded midterms X-factor
Iteration Progress: 11 of 15
Features Completion: 83% (26 out of 31 features)
Confidence Level: 100%
The following diagram shows the completed functionalities. Those highlighted in red are outstanding functionalities that will be completed by Iteration 14.
|Function|Scope|Status|Confidence Level (0-1)|Comments|
|---|---|---|---|---|
|Account Module|Core|Fully deployed and tested (100%)|1|Reviewed and accepted by sponsor|
|Customer Request Module|Core|Fully deployed and tested (100%)|1|Added new feature: Standing Instruction. Reviewed and accepted by sponsor|
|Chat Module|Core|Fully deployed and tested (100%)|1|Reviewed and accepted by sponsor|
|AI Module|Core|Fully deployed and tested (100%)|1|Reviewed and accepted by sponsor|
|Security Module|Core|Fully deployed and tested (100%)|1|Reviewed and accepted by sponsor|
|Educational Module|Core|75% completed|1|Remaining feature: request-reply details of API|
|Admin Module I|Core|Fully deployed and tested (100%)|1|Reviewed and accepted by sponsor|
|Admin Module II|Secondary|33.33% completed|1|Remaining features: Descriptive & Cluster Analysis|
|Advanced Search Module|Secondary|0% completed|1|Remaining features: Search by Keyword & Date|
|Language Module|Good-to-have|-|-|Removed from scope|
Project Schedule (Plan Vs Actual):
Planned Project Schedule
Changes Made on Planned Project Schedule
Actual Project Schedule
|S/N|Risk|Likelihood|Impact|Category|Mitigation Strategy|
|---|---|---|---|---|---|
|1|Team members are unfamiliar with human–computer interaction technologies based on natural language conversations.|High|High|Technical Unfamiliarity|Lead developers will perform intensive research and guide the team. The Project Manager (PM) and Tech Architect (TA) will work on the project timeline together and allocate more time to unfamiliar tasks.|
|2|Our voice-to-text functionality and natural language processing are implemented via requests forwarded to a cloud-based speech recognition service. This exposes us to the risk of service failure in unforeseen circumstances, such as when the external service becomes unavailable through malicious attacks, natural disasters or data wipes.|Medium|High|External Risk|Instead of relying on a single cloud-based speech recognition service for the voice-to-text functionality, our team will adopt both Bing Speech and Google Cloud Speech. For natural language processing, we will utilise DialogFlow and Wit.AI. This mitigates the risk of a single point of failure by adding redundancy.|
|3|Our tBuddy application needs to be integrated with the existing SMU tBank modules to simulate real-life scenarios of performing bank requests.|Medium|Medium|Technical Unfamiliarity|Our team will work closely with our sponsor to provide progress updates and retrieve all updated APIs needed to connect with the necessary tBank modules.|
|4|Users (faculty and students) may not be familiar with conversational banking and may face difficulties using the application during their academic work.|Medium|Medium|Users|Conduct user testing regularly with stakeholders, so that they can provide the team with feedback for improvement and familiarise themselves with the technology.|
|5|The SMU tBank API may not work at times, for various possible reasons.|Medium|High|External Risk|Communicate properly with other teams and ensure that classes do not run the lab session at the same time. Schedule an urgent meeting with the sponsor to run through functionalities and discover underlying causes.|
|6|Since AI is constantly evolving, DialogFlow may sometimes recognise intents wrongly.|High|Medium|External Risk|Conduct user testing regularly with stakeholders, so that they can provide the team with feedback for improvement and familiarise themselves with the technology.|
|7|Usability of the voice-recording function depends on users' own mobile phones, so browser updates may result in users being unable to use it; in-browser voice recording is relatively new, especially on iOS devices.|Medium|Medium|External Risk|The team stays updated on the latest browser releases in order to adapt our app accordingly.|
Technical Complexity (in order of highest complexity first)
1) Speech-to-Text Speed Optimisation
- Version 1
  - Duration range: 8 to 26 s
  - Tasks performed serially
  - User must finish recording before the voice data is uploaded
- Version 2
  - Duration range: 4 to 10 s
  - Voice recording and voice data transmission done concurrently between client and server
  - Enabled by a WebSocket protocol implementation, which supports full-duplex transmission
- Version 3 (work in progress)
  - Duration range: 100 ms to 6 s
  - Voice recording and voice data transmission done concurrently across client, server and Google
  - Enabled by the same WebSocket implementation with full-duplex transmission
  - Integrating the Google-client gRPC library to receive the WebSocket stream
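The jump from Version 1 to Version 2 comes from overlapping recording with transmission instead of waiting for the whole clip. The idea can be sketched as a producer-consumer pipeline; this is a minimal illustration, not our actual WebSocket code, and the chunk count and sizes are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StreamingSketch {
    // Sentinel marking the end of the recording.
    private static final byte[] END = new byte[0];

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(16);
        List<Integer> sentSizes = new ArrayList<>();

        // "Recorder": produces audio chunks as they are captured.
        Thread recorder = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(new byte[8192]); // one captured chunk
                }
                queue.put(END);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // "Uploader": transmits each chunk as soon as it is available,
        // instead of waiting for the full recording (the Version 1 behaviour).
        Thread uploader = new Thread(() -> {
            try {
                byte[] chunk;
                while ((chunk = queue.take()) != END) {
                    sentSizes.add(chunk.length); // stand-in for a WebSocket send
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        recorder.start();
        uploader.start();
        recorder.join();
        uploader.join();

        System.out.println("chunks sent: " + sentSizes.size()); // chunks sent: 5
    }
}
```

Because capture and upload overlap, the user-perceived wait after releasing the record button shrinks to roughly the upload time of the final chunk.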
2) Maintaining User-DialogFlow Conversation
- HTTP requests are stateless, so conversation state must be maintained by us
- Sessions are stored in an application-scoped variable
- The user id is passed as metadata when forwarding requests to DialogFlow
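The approach above can be sketched as an in-memory map from user id to session id, consulted on every forwarded request. The class and method names here are ours for illustration, not DialogFlow's API:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SessionRegistry {
    // userId -> conversation session id, held in application scope
    // so it survives across stateless HTTP requests.
    private final ConcurrentMap<String, String> sessions = new ConcurrentHashMap<>();

    /** Returns the existing session for a user, or creates one atomically. */
    public String sessionFor(String userId) {
        return sessions.computeIfAbsent(userId, id -> UUID.randomUUID().toString());
    }

    public static void main(String[] args) {
        SessionRegistry registry = new SessionRegistry();
        String a1 = registry.sessionFor("alice");
        String a2 = registry.sessionFor("alice"); // same session on a repeat request
        String b  = registry.sessionFor("bob");   // a different user gets a new session
        System.out.println(a1.equals(a2)); // true
        System.out.println(a1.equals(b));  // false
    }
}
```

A `ConcurrentHashMap` keeps the lookup thread-safe under concurrent servlet requests without explicit locking.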
3) New Technologies
- Web Audio Recording
  - Cross-browser compatibility and limitations
    - The MediaRecorder API does not work on iOS
    - Instead, we use the Web Audio API and manually encode PCM audio samples
  - AudioContext.createScriptProcessor buffer size behaves inconsistently across browsers
    - A smaller buffer is faster; a larger buffer gives better sound quality
    - After experimentation, a buffer size of 8192 samples proved optimal for our use case
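The speed side of the buffer-size trade-off can be quantified: each ScriptProcessor callback covers bufferSize / sampleRate seconds of audio, so smaller buffers deliver samples sooner. A quick calculation, assuming a typical 44.1 kHz sample rate (actual rates vary by device and browser):

```java
public class BufferLatency {
    public static void main(String[] args) {
        double sampleRate = 44100.0; // assumed; browsers may report other rates
        // createScriptProcessor accepts power-of-two buffer sizes from 256 to 16384.
        int[] sizes = {256, 2048, 8192, 16384};
        for (int size : sizes) {
            // Time covered by one callback = samples per buffer / samples per second.
            double ms = size / sampleRate * 1000.0;
            System.out.printf("%5d samples -> %.1f ms per callback%n", size, ms);
        }
    }
}
```

At 8192 samples a callback fires roughly every 186 ms, which in our testing balanced responsiveness against the dropouts we saw with smaller buffers.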
- Google Speech
  - gRPC protocol
    - Had to learn and understand how to perform gRPC requests
    - Unclear documentation
    - Unclear library dependencies
      - Too many JAR files to keep track of
      - Repository in alpha stage; SNAPSHOT fixes not published as releases
    - Invalid server authentication
    - The example shown does not integrate with tBuddy
- DialogFlow follow-up intents
  - Unclear documentation
  - Request lifespan and context issues, resulting in repeated responses
  - Experimentation required
    - Query different requests and determine whether each response is the expected one
    - Try multiple intents and ensure follow-up intents stick to the correct intent
4) Deployment Configuration
- Sponsor requirement to maintain dual deployment
- Limitations of Dialogflow's webhook configuration
  - Webhook requests can only be forwarded to one address
  - We therefore need to maintain a separate project for each deployment, each with its own unique API key
- Conventional deployment configuration
  - Read configurations from a .properties file
  - However, Dialogflow's Java-client library compiles the API key variable as a final constant
    - Not able to simply read from a .properties file and override the value
    - Non-trivial to insert overriding API keys
  - Lack of documentation
- Usage of web.xml for parameter initialisation
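For reference, the conventional .properties approach the team could not fully use looks like this; the key name matches the one discussed here, while the file contents are inlined as a placeholder so the sketch is self-contained:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesSketch {
    public static void main(String[] args) throws IOException {
        // Normally loaded from a deploy.properties file on the classpath;
        // inlined here as a stand-in.
        Reader config = new StringReader("apiaiKey=PLACEHOLDER_KEY\n");
        Properties props = new Properties();
        props.load(config);
        System.out.println(props.getProperty("apiaiKey")); // PLACEHOLDER_KEY
    }
}
```

This works for values the application reads itself, but not for a value the client library has baked in as a final constant, which is why the team fell back to web.xml parameter initialisation.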
The challenge was that the team had to continuously experiment through trial and error; the actual key name to override turned out to be apiaiKey.
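One way to supply the per-deployment key without recompiling is a context parameter in web.xml, read at application startup. The parameter name below is the apiaiKey key the team identified; the surrounding value is a placeholder, and the exact element the library reads (context-param vs. servlet init-param) may differ:

```xml
<!-- web.xml (fragment) -->
<context-param>
    <!-- Key name discovered through trial and error -->
    <param-name>apiaiKey</param-name>
    <!-- Placeholder; each deployment supplies its own API key -->
    <param-value>YOUR_DEPLOYMENT_API_KEY</param-value>
</context-param>
```

Because each deployment ships its own web.xml, the two deployments can point at their respective Dialogflow projects without code changes.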
Quality of product
|Section|Artifacts|
|---|---|
|Analysis|System Architecture Diagram|
|Testing|User Test 1, User Test 2, User Test 3|
- Internal testing is performed for every new function developed
- Team Mavericks has scheduled 5 user testing sessions in total
- We had accomplished 2 user testing sessions as of Iteration 9
For more information regarding User Testing and view results, please click here.
Reflection
This project has made us realise that positive synergy is the holy grail of teamwork. Despite our differences in backgrounds, capabilities and perspectives, we built common goals: to excel and to get the most out of this learning process. Through open communication and respect for others, we were able to reach consensus when solving problems. Since acceptance, our team has improved its overall performance in both project management and web application development. In addition, we managed to achieve and exceed our midterms X-factor. This is highly encouraging for our team; moving forward, Team Mavericks is even more motivated to push through and ensure project success.
I have learnt the importance of my role in team and stakeholder management, as well as maintaining overall responsibility of the project. As much as communication is essential within a team, it is also important for me to recognise every member's strengths and weaknesses, so that everyone can achieve their fullest potential to accomplish a common goal.
Yi Xiang's Reflection
We should be open to change in order to improve. Even if we have an idea that we believe to be the best, we should be open to feedback and make the necessary changes. The ones who decide whether an idea is good are the target users. Thus, it is important to conduct proof of deployment and user testing to gather users' feedback for improvement.
As the Tech Architect, there is always something new to learn and best practices we ought to adopt. It is important that I stay up to speed with the right implementation, to ensure the application performs according to expectation. I have learnt that, beyond merely implementing a feature, we ought to implement it well so that it is genuinely usable for the users.
What I have learnt is that in a software project, non-functional requirements are extremely important. In fact, it might even be more difficult to fulfil a non-functional requirement, and these are often overlooked or underestimated in project plans.
Yi An's Reflection