IS480 Team wiki: 2017T1 Team Atoms MidTerm
- 1 Project Progress Summary
- 2 Project Management
- 3 Quality of Product
- 4 Reflection
Project Progress Summary
User: demo | Password: demopassword123
Project Schedule (Plan vs. Actual)
Several changes were made to the project schedule due to a greater emphasis on the Data Classification module and preparation for the Lab Release (live usage), as requested by our sponsors. Hence, Atoms rescheduled and dropped a few tasks that were not critical for lab completion (as reflected in the actual schedule). The changes to the iterations were made to ensure the project is completed on time while still meeting the sponsor's requirements. The team's progress is well-paced and on track.
Planned Project Schedule
Actual Project Schedule
Below is the change log for Iteration 7 to 10 (after acceptance):
|Iteration|Date|Type|Change Request|Rationale|Feasibility|Outcome|Priority|Status of Request|Issued By|
|---|---|---|---|---|---|---|---|---|---|
|7|19/8/2017|Scope & Schedule|Reschedule classification and clustering modules|Reschedule functions based on lab exercise and release dates|Fits into schedule without any expected delay as a consequence|Accepted|High|Closed|Team|
|8|23/8/2017|Scope|Remove Neural Network function|Team felt that the scope was too large and the functionality was not critical for lab completion|Group, sponsor and supervisor agreed to the removal of unnecessary functions|Accepted|High|Closed|Team|
|8|02/9/2017|Scope|Add Aggregation function|Team realized that the function is required to effectively complete Lab 1|Fits into schedule without any expected delay as a consequence|Accepted|Low|Closed|Team|
|9|08/8/2017|Scope|Add XGBoost (new classification technique) function|Sponsor highlighted that this is a commonly used classification technique that would be very useful for student projects; the team has spare time to complete it|Fits into schedule without any expected delay as a consequence|Accepted|Low|Closed|Sponsor|
|9|08/8/2017|Scope|Add static documentation web page|Sponsor requested documentation/a user guide to help new users familiarize themselves with the system; the team has spare time to complete this function|Fits into schedule without any expected delay as a consequence|Accepted|Low|Closed|Sponsor|
Existing & Potential Risks
Currently, there are no outstanding risks; all identified risks and challenges have been addressed. However, between Acceptance and Mid-Terms, we faced and resolved concerns arising from 1) technical risk and 2) client management risk, as described below:
Risks & Challenges Faced
1. Canvas Graph Traversal Algorithm
In KDD Labs, users can draw their own data mining process on the canvas using any legitimate combination of functions. When users execute the process, they can choose to execute it partially or in full. To achieve this feature, our team designed a graph traversal algorithm to handle every combination the user could draw on the canvas. The pseudocode and logical flow are as follows:
The following is an example where the user executes the “Decision Tree Classifier 2” node. Only the nodes in the execution list are traversed and executed, as shown below.
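A minimal sketch of such an upstream traversal (node names and graph shape are illustrative, not the actual KDD Labs code; the canvas is assumed to be acyclic):

```python
def build_execution_list(target, parents):
    """Collect the target node and all of its upstream dependencies,
    ordered so each node runs only after its inputs have run."""
    order = []
    visited = set()

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        # Recurse into every node feeding into this one first
        for parent in parents.get(node, []):
            visit(parent)
        order.append(node)

    visit(target)
    return order

# Hypothetical canvas: two classifiers share the same upstream chain
graph = {
    "Decision Tree Classifier 1": ["Split Data"],
    "Decision Tree Classifier 2": ["Split Data"],
    "Split Data": ["Read CSV"],
    "Read CSV": [],
}
# Partial execution: only the target's chain runs, so
# "Decision Tree Classifier 1" is left untouched.
print(build_execution_list("Decision Tree Classifier 2", graph))
# ['Read CSV', 'Split Data', 'Decision Tree Classifier 2']
```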
2. Concurrency issue with Django & Matplotlib
- Django’s default architecture serves multiple requests concurrently (each request is handled by a separate worker thread or process) to cater to concurrent users and actions.
- Matplotlib is a library in Python used for plotting charts.
- Standard requests such as read/write operations work out of the box without issue
- An issue arises from a common use case shown above, when an Ensemble node runs and triggers multiple Decision Trees
- Each Decision Tree, when executed, plots an image (its confusion matrix); because Matplotlib's pyplot interface keeps global state, the images end up drawn on the same canvas and the 3 charts overlap each other, becoming unreadable
- There are limited resources and documentation on this specific topic, so we had to find a solution ourselves
- One workaround is to assign a random id (from 0 to 10000) to each plot and have every chart function create its plot on a separate Figure object in the backend
- We also found that each figure has to be closed after saving to prevent further complications (memory leaks)
- As a result, we also had to implement this for every other visualization to prevent the same issue when multiple users run plots at the same time
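The workaround can be sketched with Matplotlib's object-oriented API (the helper name and chart details are illustrative, not the actual KDD Labs code). Giving each request its own Figure avoids pyplot's shared global figure entirely:

```python
import io

from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure

def render_confusion_matrix(matrix):
    """Render one chart on its own Figure object (hypothetical helper).

    Each call gets a fresh Figure, so concurrent requests can never
    draw onto the same canvas."""
    fig = Figure()
    FigureCanvasAgg(fig)  # attach a headless Agg canvas for rendering
    ax = fig.add_subplot(111)
    ax.imshow(matrix)
    ax.set_title("Confusion Matrix")

    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    # Release the figure's resources after saving. Figures created via
    # pyplot must be closed with plt.close(fig) to avoid memory leaks;
    # direct Figure objects are not registered globally, so clearing
    # them and letting them go out of scope is enough.
    fig.clear()
    return buf.getvalue()

png_bytes = render_confusion_matrix([[50, 3], [7, 40]])
print(len(png_bytes), "bytes")
```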
3. Ensemble Algorithm
In machine learning (classification), a hard Voting Classifier is an ensemble technique that “combines conceptually different machine learning classifiers and use[s] a majority vote” (sklearn). For most algorithms, we make use of sklearn libraries to perform such tasks. However, there is a problem in this particular use case, shown below:
- This scenario means that each Classifier (Decision Tree) is trained before the Ensemble combines the results
- However, sklearn’s VotingClassifier requires the Classifiers to be created and trained together as a whole; once the Ensemble is created to accept different Classifiers, they lose their trained state!
- This means that we cannot use this library and have to implement our own Voting Classifier
- Once we understood how Ensemble Voting works, we had to call each Classifier’s “predict” function and select the most frequently occurring value for each row
- This means overriding the Ensemble’s predict function for our use case, as shown in the code below:
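A minimal sketch of the idea (class and stub names are hypothetical, not the actual KDD Labs code): instead of sklearn's VotingClassifier, which refits its estimators, the ensemble only calls each already-trained classifier's predict and takes a per-row majority vote:

```python
from collections import Counter

class PretrainedVotingEnsemble:
    """Hard-voting ensemble over classifiers that are already trained."""

    def __init__(self, classifiers):
        self.classifiers = classifiers

    def predict(self, rows):
        # Collect one prediction per classifier for every row...
        all_preds = [clf.predict(rows) for clf in self.classifiers]
        # ...then pick the most common label per row (ties resolve to
        # the label seen first, per Counter.most_common ordering).
        return [Counter(votes).most_common(1)[0][0]
                for votes in zip(*all_preds)]

class StubTree:
    """Hypothetical stand-in for an already-trained classifier."""
    def __init__(self, outputs):
        self.outputs = outputs
    def predict(self, rows):
        return self.outputs[:len(rows)]

ensemble = PretrainedVotingEnsemble([
    StubTree(["yes", "no", "yes"]),
    StubTree(["yes", "yes", "no"]),
    StubTree(["no", "no", "yes"]),
])
print(ensemble.predict([0, 1, 2]))  # ['yes', 'no', 'yes']
```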
Quality of Product
1. Deployment Script
Manual deployment can lead to human error. Hence, we have created a deployment shell script that partially automates the deployment of our web application. The automated steps include:
1. Stopping/starting the system services running our web server and our web application
2. Downloading the new source code from the git repository
3. Changing file-system permissions of directories and files
4. Executing Django-specific deployment commands
With a frequent deployment rate (every iteration, i.e. every 2 weeks), the chance of error due to manual deployment is much higher; the deployment script lets us reduce such errors.
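The four steps above can be sketched as follows; the paths, service names and branch are placeholder assumptions rather than our actual configuration, and by default the script only echoes each command as a dry run (set DEPLOY= to run them for real):

```shell
#!/bin/bash
set -euo pipefail

APP_DIR="/var/www/kddlabs"       # assumed install location
SERVICES=("nginx" "gunicorn")    # assumed web server / app services
RUN=${DEPLOY-echo}               # dry run by default: prefix with echo

# 1. Stop the services running the web server and the web application
for svc in "${SERVICES[@]}"; do
    $RUN sudo systemctl stop "$svc"
done

# 2. Download the new source code from the git repository
$RUN git -C "$APP_DIR" pull origin master

# 3. Change file-system permissions of directories and files
$RUN sudo chown -R www-data:www-data "$APP_DIR"
$RUN sudo chmod -R u+rwX,go+rX "$APP_DIR"

# 4. Run Django-specific deployment commands
$RUN python3 "$APP_DIR/manage.py" migrate --noinput
$RUN python3 "$APP_DIR/manage.py" collectstatic --noinput

# Bring the services back up
for svc in "${SERVICES[@]}"; do
    $RUN sudo systemctl start "$svc"
done
```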
2. Benchmarking for Visualization
- A scatter matrix draws one panel for every pair of columns, so its cost grows quickly with the number of columns
- This causes datasets with a large number of columns to take a considerable amount of time, consuming resources for a single user
- If many users execute this chart at the same time, it would result in very long response times
- To measure how long different dimensions of data sets would take, we generated datasets with varying numbers of columns and rows and timed each charting function
- From the benchmark tests, we found that the number of rows did not affect performance as much as the number of columns
- Based on these findings, we implemented validations to disallow users from selecting too many columns (>10) for the scatter matrix
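The benchmark can be sketched like this (the stand-in chart function and dataset shapes are illustrative; the real tests timed our actual charting code):

```python
import random
import time

def fake_scatter_matrix(dataset):
    """Hypothetical stand-in for a charting call whose work grows
    quadratically with the number of columns (one panel per pair)."""
    cols = len(dataset[0])
    total = 0.0
    for i in range(cols):
        for j in range(cols):
            for row in dataset:
                total += row[i] * row[j]
    return total

def benchmark(chart_fn, row_counts, col_counts):
    """Time chart_fn over generated datasets of each (rows, cols) shape."""
    timings = {}
    for rows in row_counts:
        for cols in col_counts:
            data = [[random.random() for _ in range(cols)]
                    for _ in range(rows)]
            start = time.perf_counter()
            chart_fn(data)
            timings[(rows, cols)] = time.perf_counter() - start
    return timings

results = benchmark(fake_scatter_matrix, [200, 400], [5, 10, 20])
for shape, secs in sorted(results.items()):
    print(shape, f"{secs:.4f}s")
```

Comparing timings across shapes makes the column-versus-row effect visible: doubling the rows doubles the work, while doubling the columns quadruples it.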
3. Secure API - System security
- All backend APIs require user login to prevent unauthorized direct API calls
- For each API request to modify files, there is an implementation to verify if the file belongs to the user before the operation.
- If there is an unauthorized API request to the system, an appropriate error message will be displayed and the request will also be logged for investigation
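The ownership check can be illustrated with a framework-agnostic sketch (the function and field names are hypothetical, not our actual API):

```python
import logging

logger = logging.getLogger("kddlabs.security")

class PermissionDenied(Exception):
    """Raised when a request targets a file its user does not own."""

def verify_file_owner(user_id, file_record):
    """Before any file-modifying operation, confirm the file belongs
    to the requesting user; log unauthorized attempts for investigation."""
    if file_record.get("owner_id") != user_id:
        logger.warning("Unauthorized access attempt: user %s on file %s",
                       user_id, file_record.get("name"))
        raise PermissionDenied("You do not have access to this file.")
    return True

record = {"name": "iris.csv", "owner_id": 42}
verify_file_owner(42, record)        # owner: operation proceeds
try:
    verify_file_owner(7, record)     # different user: rejected
except PermissionDenied as err:
    print(err)  # You do not have access to this file.
```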
4. Google Analytics tracking implementation
The Google Analytics tool helps our sponsor understand how students interact with our KDD Labs website: where they come from, how often they visit, and which parts of the site capture their attention and which do not.
We also track users' behaviour and count the number of events triggered in our system. This allows us to closely monitor which functionalities are being utilized and helps us track abnormal behaviour. Furthermore, it is a useful tracking tool for the teaching team to understand the students' usage behaviour. A total of 32 different activities have been tracked since the website was announced to the students on 7 Sep 2017. The graph below shows the top 10 activities on our website.
5. System logger
Our system consistently monitors and logs critical user actions and the problems users encounter. This helps us automatically track errors in the system, which serves as internal feedback when users utilize the KDD Labs system. We then analyse these errors further to derive their root cause and improve the system where possible.
Furthermore, to make the log files easy to locate, we have created a logger that rotates the log file twice a day. The file name reflects the date and time when the file was last modified.
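With Python's standard library, a twice-daily rotating logger can be sketched like this (the log path and message format are illustrative, not our production configuration):

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

log_dir = tempfile.mkdtemp()  # stand-in for the real log directory
handler = TimedRotatingFileHandler(
    os.path.join(log_dir, "kddlabs.log"),
    when="H", interval=12,    # rotate every 12 hours (twice a day)
)
# Rotated files get a timestamp suffix, so each file's name reflects
# when it was written.
handler.suffix = "%Y-%m-%d_%H-%M"
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("kddlabs")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("User executed Decision Tree node")
handler.flush()
```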
|Topic of Interest|Link|
|---|---|
|Project Management|Project Schedule|
|Project Overview|Project Overview|
|Low & Mid Fidelity Prototypes| |
Note: The application link provided is for our test server, which is only available within the SMU network; otherwise, consider using a VPN to access the SMU network.
To view application, visit
Test server: http://10.0.106.101/
Note: This public server is currently being utilized for Live Usage by the IS424 Data Mining and Business Analytics students for their labs and project completion.
Production server: https://kddlabs.cn/
We engage in comprehensive manual testing in every iteration. The developers conduct individual testing before committing their code to our shared GitHub repository. We believe in testing the application manually at this level because tests can be specially adjusted to cater to changes in the application, on both the front end and back end. Furthermore, manual testing brings in the human factor, allowing us to better discover problems that might surface during real usage due to natural human behaviour.
Once the developers have fixed the bugs, the fixed set of codes will be merged and integrated with the other functionalities. Subsequently, the integrated code is then deployed on the test server and the lead quality assurance will run a final check against the set of test cases created. This helps to ensure that the deployed application works with no major incidents.
The team's lead quality assurance then performs regression testing on the test server where previous functionalities developed are tested again. This helps to ensure that existing functionalities in the application are not affected by the integration. Once bugs have been identified, the lead quality assurance will then update the bug-tracking Excel sheet and notify the relevant developers of the issues and the corresponding priority level.
The team’s list of test cases can be found on our private repository here.
User Acceptance Test 1 & 2
Team Atoms has conducted 2 user tests, which allowed us to better manage sponsor expectations as well as improve the usability of our application interface.
For a more detailed version of Team Atoms' user acceptance test results, access it here:
Through our live usage roll-out, IS424 students were able to complete their take-home lab assignments on our system, so we could gather feedback about the KDD Labs system directly from our end users. In addition, we were able to compare the user experience with the existing alternative used in class (SAS EM). The feedback received was positive: students were able to complete their in-class lab exercises, and they also found the KDD Labs system easier to use than SAS EM.
Release Date: 01 Sep 2017, Friday
Duration: 2-3 hours per user
Number of Users: 45
Lab1 User Guide: Created instructions can be found here
FEEDBACK RESULTS FROM LIVE USERS
Lab1 Feedback Results: The rest of the results from the Live user feedback can be found here
Release Date: 15 Sep 2017, Friday
Duration: 2-3 hours per user
Number of Users: 45
Lab2 User Guide: Created instructions can be found here
Lab2 Feedback Results: Live user feedback can be found here
This journey has proven to be an enriching learning experience for Team Atoms. The project had many new learning points for the team as it was highly technical: we had to understand and grasp the concepts of the data mining process and its algorithms within a short period of time. In addition, we learnt the importance of good stakeholder management, which allowed us to better react to unforeseen circumstances. Through active team participation and communication, we were able to mitigate issues and deliver a quality project on time.
"Team ATOMS is a capable and sincere team, that has done very well in the course of the project. KDD Labs project is very challenging and in particular requires a very diverse set of technical skills. ATOMS have made substantial efforts in acquiring new skills and integrating them to deliver a quality product in a timely manner. They have stoically faced the technical issues and challenging feature/change requests, and demonstrated an excellent work ethic in delivering on their targets. Discussions with them have been thought provoking and rewarding, and have significantly contributed towards improving the product quality" - Sponsor, Doyen Sahoo