
IS480 Team wiki: 2012T1 6-bit Final Wikipage

6-bit logo.png
6-bit's Chapalang! is a social utility that connects people with friends, old and new, by offering a public space for exchanging ideas and information.
http://www.chapalang.com

Final Wikipage
Home Technical Overview Project Deliverables Project Management Learning Outcomes


Project Progress Summary

Final Presentation Slides


Our website: http://www.chapalang.com

Project Overview

Click Here to View our X-Factors!
Problem Scenario: Click Here
Team 6-bit was formed in May 2012. We battled through 16 iterations of planning, designing, constructing and deploying our project, Chapalang!, and successfully completed it while keeping to our planned scope.

Throughout the 16 iterations, we had our ups and downs. We faced several unexpected technical challenges that brought down the team's morale, and different approaches to problem-solving created communication problems. However, with a common goal in mind, we endured and hunted for workarounds and alternatives to overcome these challenges. We also faced project management challenges, where tasks not completed on time disrupted our schedule. These challenges allowed us to iron out our differences and streamline a better working process. Having overcome them, we have improved both as individuals and as a team!


6-bit poster.jpg


Click here to view our 1min Video Pitch!

6-bit videopitch.gif


Project Challenges

The team faced several challenges throughout the IS480 journey. The bulk of Chapalang! is a highly functional, complete solution to a business opportunity of our sponsor, so its value does not rest on a single sophisticated function.

Rather, it is a cumulatively complex system formed from many functions, and the team was challenged at different levels by integration issues and by the micro-components of the project. One example of a micro-component is PayPal integration, where the PayPal documentation was brief and changed frequently, causing integration difficulties that we eventually resolved. Another example is our image-cropping tool, on which we spent a significant amount of time getting things right, especially since none of us had ever ventured into image manipulation.

Nonetheless, two other independent challenges provided many learning lessons for the team.

Scalability and Load Testing

As part of our User Test 4, we conducted a test to better understand the scalability and load handling of our system.
To reiterate, performance measures the speed with which a single request can be executed, while scalability measures the system's ability to maintain that performance under increasing load. Conducting the test requires the following steps.

  • Performance Testing
    • Identify a system process with a series of activities
    • Measure the elapsed time of each activity
    • Identify the bottleneck: the single activity with the highest elapsed time
  • Scalability Testing
    • Increase the number of concurrent connections for the identified activity and measure the elapsed time of each connection
    • Count the number of concurrent connections possible within approximately the same elapsed time per connection
  • Load Testing
    • Measure the elapsed time of each connection over a range of concurrent connections, using 1, 25, 50, 75 and 100 as markers

Difficulty in simulating accurate multiple concurrent connections

Amongst all the steps involved, performance testing was the easiest to conduct because we had already developed Chapalang Benchmark, which measures the elapsed time from the moment the framework receives a command until the output is prepared and sent to the end-user.
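
For illustration, the core of such a measurement can be done with PHP's microtime(). This is a minimal, hypothetical sketch, not the actual Chapalang Benchmark code; run_requested_activity() is a stand-in for whatever activity is under test.

    <?php
    // Minimal elapsed-time measurement sketch (hypothetical, simplified;
    // not the actual Chapalang Benchmark code).
    $start = microtime(true);            // wall-clock time, in seconds

    run_requested_activity();            // placeholder for the activity under test

    $elapsed = microtime(true) - $start;
    error_log(sprintf("activity took %.4f s", $elapsed));

Repeating this around each activity in a process exposes the bottleneck: the activity with the highest elapsed time.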

However, the first challenge appeared when simulating multiple concurrent connections. We tried several options for the simulation:

  • Asynchronous JavaScript and XML (AJAX)

We started off with AJAX because we were most familiar with it and it is among the easiest to implement. However, when we observed the start_time values recorded by Chapalang Benchmark, each connection had a different start_time, even though it overlapped with the interval between the start_time and end_time of the previous connection.

At this stage, we cleared up a misunderstanding about asynchronous calls that we had held all along. While AJAX can initiate multiple asynchronous calls, each opening a new connection, it does not initiate all the calls at the same instant. It still initiates them sequentially; it simply does not wait for the previously initiated connection to close before initiating the next one.

As such, we realized that AJAX could not accurately simulate the same-second concurrent connections we needed for our test.

  • PHP popen()

After some research and testing, we found popen(), which opens a pipe to a process executed by forking the given command. It can be used to create an environment where executions run in parallel, making it a candidate for simulating concurrency, as sketched below.

While it is fairly easy to use, it offers only a unidirectional pipe and hence cannot give us a connection response to confirm the number of concurrent connections registered on the server side. As such, it was not a suitable tool for our test.
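
A minimal sketch of this approach, assuming a hypothetical helper script request.php that fires one HTTP request at the server under test:

    <?php
    // Launch parallel workers with popen() (sketch only; request.php is
    // a hypothetical helper that issues one HTTP request).
    $pipes = [];
    for ($i = 0; $i < 50; $i++) {
        // Each popen() forks a child immediately; the parent does not
        // block until it reads from or closes the pipe, so the children
        // run in parallel.
        $pipes[$i] = popen('php request.php', 'r');
    }
    foreach ($pipes as $p) {
        fgets($p);    // 'r' mode: we can read the child's output...
        pclose($p);   // ...but cannot send it input (unidirectional)
    }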

  • PHP fsockopen()

We went back to research for a new solution, and it appeared that fsockopen() could give us both concurrency and a bidirectional pipe.

As we developed a simple script to simulate concurrent users, we hit a problem: the Apache web server crashed at an unexpectedly low 96 concurrent connections. After some investigation, we realized that fsockopen() itself opens the same number of client connections as the number of connections Apache is expected to receive and serve. Essentially, the server had to handle double the number of connections, and a check of the background processes on the server side revealed a total of 156 registered Apache processes.
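
The core of the script looked along these lines (a simplified sketch; benchmark.php is a hypothetical stand-in for the page under test). Because the wrapper was hosted on the server, each of these client connections came on top of the request it simulated:

    <?php
    // Open N raw HTTP connections with fsockopen() before reading any
    // responses, so the connections overlap in time (sketch only).
    $host  = 'www.chapalang.com';
    $socks = [];
    for ($i = 0; $i < 96; $i++) {
        $s = fsockopen($host, 80, $errno, $errstr, 5);
        if ($s === false) die("connect failed: $errstr\n");
        fwrite($s, "GET /benchmark.php HTTP/1.1\r\n"
                 . "Host: $host\r\nConnection: close\r\n\r\n");
        $socks[] = $s;   // hold the socket open; read later
    }
    foreach ($socks as $s) {
        stream_get_contents($s);   // drain the response
        fclose($s);
    }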

This biased our test: the intention was to create concurrency at the receiving end, but the execution itself distorted the results. We attempted to solve this by hosting the wrapper file for fsockopen() on our localhost machine, but we really wanted to remove the network performance uncertainty that varies with every user of our services, and normalize the results solely on our system's performance.

Again, we needed a method that could simulate concurrency at the web server without creating additional connections just to execute the simulation.

  • PHP curl_multi_exec()

Finally, we chanced upon the curl_multi_exec() method in the PHP 5 documentation. PHP's cURL extension is built on libcurl, a library for transferring data over internet protocols, and the curl_multi family of functions lets us execute a whole stack of cURL handles in parallel.

This method did exactly what we needed: a single driver process multiplexing many test connections, with no extra server-side connections created just to run the simulation. The overall processing time was also several times faster than our earlier fsockopen() attempt.
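
A minimal sketch of the approach, which also doubles as the load-test loop over our concurrency markers (the URL is illustrative):

    <?php
    // Drive N parallel transfers per concurrency marker with curl_multi
    // (sketch only; the URL is illustrative).
    foreach ([1, 25, 50, 75, 100] as $n) {
        $mh = curl_multi_init();
        $handles = [];
        for ($i = 0; $i < $n; $i++) {
            $ch = curl_init('http://www.chapalang.com/');
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }
        $start = microtime(true);
        do {                              // run all transfers in parallel
            curl_multi_exec($mh, $running);
            curl_multi_select($mh);       // block until there is activity
        } while ($running > 0);
        printf("%3d connections: %.3f s\n", $n, microtime(true) - $start);
        foreach ($handles as $ch) {
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
    }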

Personalization Analytics

The personalized dashboard is a feature adopted at a later stage of the project, so the natural difficulty was time constraints. We had to go through the standard cycle of research, prototyping, implementation, integration and testing, and every stage was time-consuming and plagued with problems.

More concretely, although we were able to map out an analytical process, the most challenging task was identifying a suitable data-driven semantic algorithm that fits our needs. Due to time constraints, our research sources were largely limited to consultations with professors and online research.

We spoke to an Adjunct Professor, also a Consulting Researcher at a government research agency, who recommended that we try out:

Multivariate Distribution-based Clustering


6-bitChapalang1.png
The diagram above illustrates a sample of Multivariate Distribution-based Clustering. Clusters can then be defined simply as the objects most likely to belong to the same distribution. A nice property of this approach is that it closely resembles the way artificial data sets are generated: by sampling random objects from a distribution.
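
For illustration (our own notation, not part of the recommendation), a common concrete instance of distribution-based clustering is the Gaussian mixture model, where the data is assumed to be drawn from a weighted sum of $k$ Gaussian components:

    p(x) \;=\; \sum_{i=1}^{k} \pi_i \, \mathcal{N}(x \mid \mu_i, \Sigma_i), \qquad \sum_{i=1}^{k} \pi_i = 1

Each object is then assigned to the component under which it is most probable, and the parameters $\pi_i, \mu_i, \Sigma_i$ are typically fitted with expectation-maximization (EM).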

While the theoretical foundation of these methods is excellent, they suffer from one key problem, overfitting, unless constraints are put on the model complexity. A more complex model can almost always explain the data better, which makes choosing the appropriate model complexity inherently difficult.

Distribution-based clustering is a semantically strong method: it not only provides clusters but also produces complex models for them that can capture correlation and dependence between attributes. However, these algorithms put an extra burden on the user: appropriate data models must be chosen to optimize, and for many real data sets there may be no mathematical model available that the algorithm can optimize.

In short, we could not simply build the system model for our analytics engine on this statistical model.

Latent Semantic Indexing (LSI)

Researching deeper into semantic analysis, we discovered another model, Latent Semantic Indexing (LSI). LSI is an indexing and retrieval method that uses a mathematical technique called Singular Value Decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text.

LSI is based on the principle that words used in the same contexts tend to have similar meanings, and its key feature is extracting the conceptual content of a body of text by establishing associations. Its key benefit is that it overcomes two of the most problematic constraints of Boolean keyword queries: synonymy (different words with similar meanings) and polysemy (words with more than one meaning).
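
Concretely, the standard formulation (a textbook sketch, not tied to any particular implementation) decomposes the term-document matrix $A$ and keeps only the $k$ largest singular values:

    A \;=\; U \Sigma V^{T} \;\approx\; U_k \Sigma_k V_k^{T}

Terms and documents are then compared in the reduced $k$-dimensional space, where words that co-occur in similar contexts end up close together.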

However, LSI was patented in the late 1980s, and to this day we have found no open-source materials that we could make use of in our project.

Naïve Bayes Classifier (NBC)

Finally, we went back to the basics of statistics and found that Bayesian statistics supports the Naïve Bayes Classifier (NBC) model. NBC is a simple probabilistic classifier based on Bayes' Theorem with strong independence assumptions. Its major, naïve assumption is that all input features contribute to the object independently. For example, an orange is "round" and is about 4 inches in "diameter". Although diameter is a measurement for round objects, so "diameter" in fact depends on "round", NBC assumes they are independent, which may oversimplify the results.
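
In standard form (a textbook statement of the model), the classifier picks the category $C$ that maximizes the posterior under the independence assumption:

    \hat{C} \;=\; \arg\max_{C} \; P(C) \prod_{i=1}^{n} P(x_i \mid C)

where $x_1, \ldots, x_n$ are the observed features.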

While NBC appears skewed and biased in most applications, it matches our needs because we have an exhaustive list of product and topical categories to recommend from. Hence, the objectivity and dependence constraints on the results do not create a new problem for our analytics engine.

Correspondingly, we found an open-source script which applies NBC and tested it to be working. It is also possible to configure NBC to accept a different weightage for each independent characteristic.

By default, each characteristic has equal weightage, with a combined maximum of 1. We found this particularly useful because we can adjust the weightage to suit different circumstances. First, the weightage can be raised for any category of products or topics that we want to expose more to system users, so this can potentially serve as a model for targeted advertising. Second, we can incorporate machine learning to skew the weightage according to a user's actual and subsequent activities. Since we capture user activities in our system, that data can further automate the skewing of weightage, giving a data-driven model based not solely on historical data but also on more recent activity. A sketch of the weighted classifier follows.
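
The sketch below is illustrative only (it is not the open-source script we adopted) and realises the weighting as per-feature multipliers on the log-likelihood terms, so that equal weights reproduce plain NBC:

    <?php
    // Weighted Naive Bayes sketch (illustrative; not the actual script
    // we used). $likelihood[category][feature] holds P(feature|category)
    // estimated from activity counts; $prior[category] holds P(category).
    function classify(array $features, array $prior, array $likelihood, array $weight) {
        $best = null;
        $bestScore = -INF;
        foreach ($prior as $cat => $p) {
            $score = log($p);
            foreach ($features as $f) {
                $lik = isset($likelihood[$cat][$f]) ? $likelihood[$cat][$f] : 1e-6; // smooth unseen
                $w   = isset($weight[$f]) ? $weight[$f] : 1.0;
                $score += $w * log($lik);   // equal weights = plain NBC
            }
            if ($score > $bestScore) { $bestScore = $score; $best = $cat; }
        }
        return $best;
    }

    // Hypothetical usage: recommend a category from observed activity tags.
    $prior      = ['electronics' => 0.5, 'fashion' => 0.5];
    $likelihood = [
        'electronics' => ['gadget' => 0.7, 'sale' => 0.3],
        'fashion'     => ['gadget' => 0.1, 'sale' => 0.6],
    ];
    $weight = ['gadget' => 1.0, 'sale' => 1.0];  // adjust to skew which categories win
    echo classify(['gadget', 'sale'], $prior, $likelihood, $weight);  // electronics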



Project Management

Schedule

6-bit schedule.png

Detailed Schedule: Click Here

Schedule Changes

Throughout the whole project period, there were several scope adjustments and changes, and they directly affected the schedule. The following documents some of the noteworthy changes:

60bit schedulechanges.png


The tasks and time allocated to the removed functions, in iterations 14-15, were replaced with tasks for the newly added functions.


Click Here to see how we prioritize our scope!
Visit http://www.chapalang.com to try these features out yourself! =)

Metrics

Schedule Metric

Schedule Metric: Click Here
Metric Value: 6bit schedulevalue.png

The acceptable range of the schedule metric value is between 90% and 110%, which caters to the natural inaccuracy of forecasting the time actually required to complete a task. Most of our iterations were on schedule, with the exception of 3.

In iteration 6, we overestimated the amount of work we could handle within our first 2-week iteration (all previous iterations were weekly), putting us 20% off schedule. The additional tasks allocated to that iteration were meant to catch the scope up for the Acceptance Presentation, but many of us were busy handing over work at our respective internship workplaces and could not meet the original plan. After this lapse, we reviewed the workload per iteration and spaced out the work for future iterations.

In iteration 9, there was sufficient time to complete the tasks planned for the iteration. However, the rush to complete them before User Test 2 led to lower-quality unit testing by each developer in the team. This resulted in higher bug counts than usual, and the team had to stop new development and instead spend the time fixing bugs. The schedule was missed by 30%, and we learned that each iteration should carry buffer time to cater for unexpected events.

In iteration 15, there were fewer tasks than in most previous iterations, yet the schedule was missed by approximately 20%. The team discussed this and attributed the lapse to project fatigue as well as project submissions for other academic modules. While this could have been planned for better, it was difficult to forecast the effort required for each member's other projects. Nonetheless, there was buffer time in the final iteration, which was eventually activated to make up for the schedule lapse in iteration 15.

Bug Metric

Bug Metric: Click Here
Metric Value: 6bit bugvalue.png

As our acceptable range of bug points is under 20, there were 2 iterations where we exceeded it, and remedies were applied as per our action plan.

In iteration 11, we conducted User Test 2, which covered a wide range of functions. As a bigger system tends to have more bugs, the user test exposed bugs previously unknown to us. Immediately after the user test, we stopped new development and focused on fixing all the bugs before continuing. Steps were taken to allocate more time per task, so developers in the team have more time to conduct unit testing before committing their code. Our team tester also drafted more comprehensive test cases for her testing work.

In iteration 15, there was another spike in the bug metric value as the team gradually succumbed to project fatigue. We accepted that the team had been working on the system for the past 7 months and that many other school projects were due during that period. Even with the cause justified, we stopped new development and proceeded with a full day of bug fixing to ensure the system functioned properly.

Quality of Product

Stage                 Specification                Modules
Project Management    Minutes                      Click Here
Metrics               Schedule metrics             Click Here
                      Bug metrics                  Click Here
Requirements          Scope                        Click Here
                      Scope Prioritization         Click Here
                      Problem Scenario (As-Is)     Click Here
                      Problem Scenario (To-Be)     Click Here
Analysis              Use case                     Click Here
                      Business Process Diagram     Click Here
                      Screen Shots                 Click Here
Design                Logical Diagram              Click Here
                      Class Diagram                Click Here
                      Sequence Diagrams            Click Here
                      Data Architecture            Click Here
Testing               Test plan                    Click Here

Key Performance Indicators (KPI)

For the purpose of benchmarking ourselves against specific goals, we derived a set of indicators to track our progress.
6bit kpi.png
The following figures are accurate as of 21st November 2012.
  • 505 real members
  • 106 real transactions
  • 189 real physical items sold
  • $2,536.50 real revenue
  • 9 registered and active sellers
  • 62 real products
  • 49 days of operations

Testing

A total of 4 user tests were conducted, each with a different coverage and test methodology.
6bituser-testingoverview.png
Click Here to view the details of User Test 1
Click Here to view the details of User Test 2
Click Here to view the details of User Test 3
Click Here to view the details of User Test 4

Reflection

Click Here to Download our Sponsor's Comments!

Team Reflection

Learning Outcome: Click Here

6-bit teamReflection.png

Individual Reflection

6-bit tianxiangReflection.png


6-bit geksengReflection.png


6-bit houstonReflection.png


6-bit aloysiusReflection.png


6-bit kennethReflection.png


6-bit huilingReflection.png