IS480 Team wiki: 2012T1 6-bit Final Wikipage
 
<div class="center" style="width:60%; margin-left:auto; margin-right:auto;">'''6-bit's Chapalang!''' is a social utility that connects people with friends and new friends <br> by offering a place for exchanging ideas and information on its public domain. <br> http://www.chapalang.com
</div>
<font face="Calibri">
<!--Navigation-->
{| style="background-color:#ffffff; color:#000000" width="100%" cellspacing="0" cellpadding="8" valign="top" border="1" |
| style="filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FF0066', endColorstr='#FF0066'); background: -webkit-gradient(linear, left top, left bottom, from(#FF0066), to(#FF0066)); background: -moz-linear-gradient(top,  #FF0066,  #FF0066); font-size:110%; text-align:center; color:#ffffff" width="10%" | [[IS480_Team_wiki:_2012T1_6-bit_Final_Wikipage | <font color="#FFFFFF"><b>Final Wikipage</b></font>]]
|}
 
 
{| style="background-color:#ffffff; color:#000000" width="100%" cellspacing="0" cellpadding="8" valign="top" border="0" |
| style="filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FF0066', endColorstr='#FF0066'); background: -webkit-gradient(linear, left top, left bottom, from(#FF0066), to(#FF0066)); background: -moz-linear-gradient(top,  #FF0066,  #FF0066); font-size:110%; text-align:center; color:#ffffff" width="10%" | [[IS480_Team_wiki:_2012T1_6-bit | <font color="#FFFFFF"><b>Home</b></font>]]
| style="filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FF0066', endColorstr='#FF0066'); background: -webkit-gradient(linear, left top, left bottom, from(#FF0066), to(#FF0066)); background: -moz-linear-gradient(top,  #FF0066,  #FF0066); font-size:110%; text-align:center; color:#ffffff" width="10%" | [[IS480_Team_wiki:_2012T1_6-bit_Technical_Overview | <font color="#FFFFFF"><b>Technical Overview</b></font>]]
| style="filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FF0066', endColorstr='#FF0066'); background: -webkit-gradient(linear, left top, left bottom, from(#FF0066), to(#FF0066)); background: -moz-linear-gradient(top,  #FF0066,  #FF0066); font-size:110%; text-align:center; color:#ffffff" width="10%" | [[IS480_Team_wiki:_2012T1_6-bit_Project_Deliverables | <font color="#FFFFFF"><b>Project Deliverables</b></font>]]
| style="filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FF0066', endColorstr='#FF0066'); background: -webkit-gradient(linear, left top, left bottom, from(#FF0066), to(#FF0066)); background: -moz-linear-gradient(top,  #FF0066,  #FF0066); font-size:110%; text-align:center; color:#ffffff" width="10%" | [[IS480_Team_wiki:_2012T1_6-bit_Project_Management | <font color="#FFFFFF"><b>Project Management</b></font>]]
| style="filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FF0066', endColorstr='#FF0066'); background: -webkit-gradient(linear, left top, left bottom, from(#FF0066), to(#FF0066)); background: -moz-linear-gradient(top,  #FF0066,  #FF0066); font-size:110%; text-align:center; color:#ffffff" width="10%" | [[IS480_Team_wiki:_2012T1_6-bit_Learning_Outcomes | <font color="#FFFFFF"><b>Learning Outcomes</b></font>]]
|}
<br>
 
=<div style="background: #FF0080; background: -webkit-gradient(linear, left top, left bottom, from(#FF0080), to(#F660AB)); padding: 12px; font-weight: bold; text-align: center "><font color="white" size="6" >Project Progress Summary</font></div>=
[https://dl.dropbox.com/u/56071797/Final_Presentation.pdf <font color="#CD004E"><b>Final Presentation Slides</b></font>]
 
<br>
Our website: http://www.chapalang.com
==Project Overview==
<div style="text-align: center">
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Deliverables#X-Factors Click Here to View our X-Factors!]
<br>
<br>
Problem Scenario: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Deliverables#Problem_Scenario Click Here]<br>
''Team 6-bit was formed in May 2012. We battled through 16 iterations of planning, designing, constructing and deploying our project, Chapalang!, and successfully completed it within our planned scope. Throughout the 16 iterations we had our ups and downs. Several unexpected technical challenges brought down the team's morale, and different approaches to solving problems created communication problems. However, with a common goal in mind, we endured and searched hard for workarounds and alternatives to overcome these challenges. We also faced project management challenges, where some tasks were not completed on time and disrupted our schedule. These challenges allowed us to iron out our differences and streamline a better working process. After overcoming them, we improved as individuals as well as a team!''
</div>
<br>[[Image:6-bit_poster.jpg|right|450px]]
<br>[http://www.youtube.com/watch?v=8Y8Q9iCKQUY Click here to view our 1min Video Pitch!]
<br>
<br>
[[Image:6-bit_videopitch.gif|left|450px]]
<br><br><br><br><br><br><br><br><br><br><br>
 
==Project Highlights==
<b>What we have completed:</b>
* Facebook API
** Extend access token expiration
** Post on user's Facebook wall
** Facebook login
** Pulling of basic data
** Pulling of user's friends list
** Updating of user's friends list using a cronjob
* Logout
* AJAX UI
* Posts/Threads
* Replies/Comments
* Like & Unlike
* Deletion
** UI: mouseover to display delete icon
* Programming forum
* Spinner
** UI: replacement for loading process
* Uploading images
** Scalable photo hosting
* Image processor
** Conversion upon upload
** Dynamic thumbnail creation
* Post links
* Redirection processor (HTML parser)
* Marketplace product management
** List all products
** View product item
* Search function
** Posts
** Products
* Notifications
* Shopping item processing
** Order confirmation page
** Payment page
** Payment confirmation page
* Product review
* Product rating
* Marketplace transaction
* PayPal payment gateway
** Refund API
* Email notifications
** Welcome email
** Auto-follow email
** General notifications email
* Date/time reflection
** Posts
** Replies
** Products
** Reviews
* Cronjob setup
* Profile page
* Friends
** Ability to follow/unfollow friends
* Escrow
** "Confirm order received" button
* Gift sharing
** Campaign (group sharing)
** Individual
* Dashboards
** To manage orders/products/sales

<br>
<b>What unexpected events occurred:</b>
* Accidental deletion of the live website database
* Breakdown of a member's machine
  
==Project Challenges==
The team faced several challenges throughout the IS480 journey. Chapalang! is a highly functional and complete solution to a business opportunity of our sponsor, so its complexity does not rest on a single sophisticated function.
<br><br>
Rather, it is a cumulatively complex system built from many functions, and the team was challenged at different levels by integration issues and by the micro-components of the project. One example of a micro-component is the PayPal integration, where PayPal's documentation was brief and changed frequently, resulting in integration difficulties that were eventually resolved. Another example is our image cropping tool, on which a significant amount of time was spent getting it right, especially since none of us had ever ventured into image manipulation.
<br><br>
Nonetheless, two other independent challenges brought many learning lessons for the team.
===Scalability and Load Testing===
As part of User Test 4, we attempted to gain a better understanding of the scalability and load handling of our system by conducting a test.<br>
To reiterate, performance measures the speed with which a single request can be executed, while scalability measures the ability of a request to maintain its performance under increasing load. To conduct the test, the following steps are required:
* Performance Testing
** Identify a system process with a series of activities
** Measure the elapsed time of each activity
** Identify the bottleneck: the single activity with the highest elapsed time
* Scalability Testing
** Increase the number of concurrent connections for the identified activity and measure the elapsed time of each connection
** Count the number of concurrent connections possible within approximately the same elapsed time per connection
* Load Testing
** Measure the elapsed time of each connection over a range of concurrent connections, using 1, 25, 50, 75 and 100 as markers
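The measurement steps above can be sketched as a small timing harness. The sketch below is illustrative Python, not the team's actual tooling (the project used the in-house Chapalang Benchmark and PHP scripts); the stand-in task and the concurrency levels passed in the example are placeholders.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed(task):
    """Run one request and return its elapsed time in seconds."""
    start = time.perf_counter()
    task()
    return time.perf_counter() - start

def load_test(task, levels=(1, 25, 50, 75, 100)):
    """For each concurrency marker, fire that many requests at once
    and record the average elapsed time per connection."""
    results = {}
    for n in levels:
        with ThreadPoolExecutor(max_workers=n) as pool:
            times = list(pool.map(lambda _: timed(task), range(n)))
        results[n] = sum(times) / len(times)
    return results

if __name__ == "__main__":
    # Stand-in "request" that sleeps briefly; a real test would
    # hit a page on http://www.chapalang.com instead.
    for n, avg in load_test(lambda: time.sleep(0.01), levels=(1, 5, 10)).items():
        print(f"{n:>3} connections: avg {avg:.4f}s per connection")
```

A concurrency level whose average elapsed time stays near the single-connection baseline indicates the activity still scales at that load.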
  
====Difficulty in simulating accurate multiple concurrent connections====
Among all the steps involved, performance testing was the easiest to conduct because we had already developed Chapalang Benchmark, which measures the elapsed time from the moment the framework receives a command until the output is prepared and sent to the end-user.
<br><br>
However, the first challenge appeared when simulating multiple concurrent connections. We tried several options for the simulation:
* Asynchronous JavaScript (AJAX)
We started off with AJAX because we were most familiar with it and it is one of the easiest to implement. However, when we observed the start_time in Chapalang Benchmark, each connection had a different start_time, even though it overlapped with the start_time and end_time of the previous connection.
<br><br>
At this stage, we cleared up a misunderstanding about asynchronous calls that we had held all along. While AJAX is capable of initiating multiple asynchronous calls, where each call opens a new connection, it does not initiate all the calls concurrently. It still initiates the calls sequentially; it simply does not wait for the previously initiated connection to close before initiating the next one.
<br><br>
As such, we realized that AJAX would not be able to accurately simulate the same-second concurrent connections we needed for our test.
* PHP popen()
After some research and tests, we found popen(), which opens a pipe to a process executed by forking a command. It can create an environment where executions run in parallel, which is suitable for simulating concurrency.
<br><br>
While fairly easy to use, popen() offers only a unidirectional process pipe, so it cannot give us a connection response to confirm the number of concurrent connections registered on the server side. As such, it was not a suitable tool for our test.
* PHP fsockopen()
We went back to researching a new solution, and it appeared that fsockopen() would give us both concurrency and a bidirectional process pipe.
<br><br>
As we developed a simple script to simulate concurrent users, we faced a problem where the Apache web server crashed at an unexpectedly low 96 concurrent connections. After some investigation, we realized that fsockopen() itself opens the same number of connections as the number Apache is expected to receive and create. Essentially, the server had to create double the number of connections, and a check on the background processes on the server side revealed a total of 156 registered Apache processes.
<br><br>
This biased our test: the intention was to create concurrency at the receiving end, but the execution caused inaccuracies in the results. We attempted to solve this by hosting the wrapper file for fsockopen() on our localhost machine, but we really wanted to take the network performance uncertainty of individual users out of the equation and normalize the test solely on our system's performance.
<br><br>
Again, we needed a methodology that could simulate concurrency at the web server without creating additional connections.
* PHP curl_multi_exec()
Finally, we chanced upon the curl_multi_exec() method in the PHP 5 documentation. PHP's cURL extension is built on libcurl, which performs cURL sessions against internet or UNIX socket endpoints, and the curl_multi functions let us execute a stack of sub-connections in parallel.
<br><br>
This method did exactly what we needed: a single process driving multiple connections in parallel. The overall processing time was also several times faster than the fsockopen() approach we attempted earlier.
  
===Personalization Analytics===
The personalized dashboard is a feature adopted at a later stage of the project, so the natural difficulty was the time constraint. We had to go through the standard cycle of research, prototyping, implementation, integration and testing, and every stage was time-consuming and plagued with problems.
<br><br>
More objectively, although we were able to map out an analytical process, the most challenging task was identifying a suitable data-driven semantic algorithm that suits our needs. Due to time constraints, our research sources were largely limited to consultations with professors and online research.
<br><br>
We spoke to an Adjunct Professor who is also a Consulting Researcher at a government research agency, and he recommended that we try out:

====Multivariate Distribution-based Clustering====
<br>
[[Image:6-bitChapalang1.png|500px]]
<br>
The diagram above illustrates a sample of multivariate distribution-based clustering. Clusters can then be defined as objects most likely belonging to the same distribution. A nice property of this approach is that it closely resembles the way artificial data sets are generated: by sampling random objects from a distribution.
<br><br>
While the theoretical foundation of these methods is excellent, they suffer from one key problem, overfitting, unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult.
<br><br>
Distribution-based clustering is a semantically strong method: it not only provides clusters, it also produces complex models for the clusters that can capture the correlation and independence of attributes. However, using these algorithms puts an extra burden on the user: appropriate data models must be chosen to optimize, and for many real data sets there may be no mathematical model available that the algorithm is able to optimize.
<br><br>
In short, we were not able to simply build a system model on this statistical model for the purpose of our analytics engine.

====Latent Semantic Indexing (LSI)====
While researching deeper into semantic analysis, we discovered another model, Latent Semantic Indexing (LSI). LSI is an indexing and retrieval method which uses a mathematical technique called Singular Value Decomposition (SVD) to identify patterns in the relationships between terms and concepts contained in an unstructured collection of text.
<br><br>
LSI is based on the principle that words used in the same contexts tend to have similar meanings, and its key feature is extracting the conceptual content of a body of text by establishing associations. Its key benefit is that it overcomes two of the most problematic constraints of Boolean keyword queries: synonymy (words with similar meanings) and polysemy (words with more than one meaning).
<br><br>
However, LSI was patented in the late 1980s and, to this day, there are no open-source materials that we could make use of in our project.

====Naïve Bayes Classifier (NBC)====
Finally, we went back to the basics of statistics and found that Bayesian statistics supports the Naïve Bayes Classifier (NBC) model.
NBC is a simple probabilistic classifier based on Bayes' Theorem with strong independence assumptions. Its major and naïve assumption is that all input features are independent and each contributes to the object. For example, an orange is "round" and about 4 inches in "diameter". Although diameter is a measurement for rounded objects, so that "diameter" depends on "round", NBC assumes they are independent, which may oversimplify the results.
<br><br>
While NBC appears skewed and biased in most applications, it matches what we need because we have an exhaustive list of product and topical categories from which we want to make recommendations. Hence, the constraint of objectivity and dependence in the results does not create a new problem for our analytics engine.
<br><br>
Correspondingly, we found an open-source script which applies NBC and tested it to work. It is possible to configure NBC to accept a different weightage for each independent characteristic.
<br><br>
By default, each characteristic has equal weightage with a combined maximum of 1. We found this particularly useful because we can adjust the weightage to suit different circumstances. Firstly, the weightage can be adjusted if there is any particular category of products or topics that we want to expose more to system users; this can potentially be a model that suits targeted advertising. Secondly, we can incorporate machine learning which skews the weightage according to a user's actual and future activities. Since we capture user activities in our system, the data can further help automate the skewing of weightage, giving a data-driven model based not solely on historical data but on more recent activity as well.
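The adjustable weightage described above can be illustrated with a small classifier. The following Python sketch is not the open-source script the team used; the categories, keywords, smoothing choice and weights are invented for illustration.

```python
import math
from collections import defaultdict

class WeightedNaiveBayes:
    """Naive Bayes classifier whose per-feature log-likelihoods can be
    scaled by a weight, mimicking the adjustable weightage described above."""

    def __init__(self, weights=None):
        self.weights = weights or {}          # feature -> weight (default 1.0)
        self.priors = {}                      # category -> prior probability
        self.likelihood = defaultdict(dict)   # category -> feature -> P(f|c)

    def fit(self, samples):
        """samples: list of (category, [features]) pairs."""
        cat_counts = defaultdict(int)
        feat_counts = defaultdict(lambda: defaultdict(int))
        vocab = set()
        for cat, feats in samples:
            cat_counts[cat] += 1
            for f in feats:
                feat_counts[cat][f] += 1
                vocab.add(f)
        total = sum(cat_counts.values())
        for cat, n in cat_counts.items():
            self.priors[cat] = n / total
            denom = sum(feat_counts[cat].values()) + len(vocab)
            for f in vocab:  # Laplace smoothing over the vocabulary
                self.likelihood[cat][f] = (feat_counts[cat][f] + 1) / denom

    def classify(self, feats):
        """Return the category with the highest weighted log-score."""
        best, best_score = None, float("-inf")
        for cat, prior in self.priors.items():
            score = math.log(prior)
            for f in feats:
                if f in self.likelihood[cat]:
                    score += self.weights.get(f, 1.0) * math.log(self.likelihood[cat][f])
            if score > best_score:
                best, best_score = cat, score
        return best

# Toy data: recommend a category from keywords (hypothetical examples).
nb = WeightedNaiveBayes()
nb.fit([("electronics", ["phone", "battery"]),
        ("fashion", ["dress", "shoes"]),
        ("electronics", ["phone", "charger"])])
print(nb.classify(["phone"]))  # -> electronics
```

Raising the weight of a feature amplifies its influence on the score, which is the hook for both the targeted-advertising and the machine-learning skewing described above.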
 
  
===Project Metrics:===
+
<!-- ==Project Achievements==
 +
Methods, technologies, processes, teamwork, etc. which were particularly successful – highlight things which worked very well towards completing the project. A bulleted list of one to two sentences each will do. If there are no achievement, remove this section. -->
  
Summary of analysis for the metrics collected. You may refer to another page for the details about the metrics and how it is collected.
+
=<div style="background: #FF0080; background: -webkit-gradient(linear, left top, left bottom, from(#FF0080), to(#F660AB)); padding: 12px; font-weight: bold; text-align: center "><font color="white" size="6" >Project Management</font></div>=
 +
==Schedule==
[[Image:6-bit_schedule.png|center|600px]]
Detailed Schedule: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Schedule Click Here]
===Schedule Changes===
Throughout the project, there were several scope adjustments and changes, and they directly affected the schedule. The following documents some of the noteworthy changes:
<br>
[[Image:60bit_schedulechanges.png|center|500px]]
<br>
The tasks and time allocated for the functions removed in iterations 14-15 were replaced with tasks for the newly added functions.
<br>
  
<div style="text-align: right;">
<font size="2">''[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Technical_Overview#Scope_Prioritization Click Here to see how we prioritize our scope!]
<br>
Visit http://www.chapalang.com to try these features out yourself! =)''
</font>
</div>
==Metrics==
===Schedule Metric===
Schedule Metric: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Schedule_Metric Click Here]
<br>
Metric Value: [[Image:6bit_schedulevalue.png|500px]]
<br><br>
The acceptable range of the schedule metric value is between 90% and 110%, catering to natural inaccuracies in forecasting the time actually required to complete a task. Most of our iterations were on schedule, with the exception of 3.
<br><br>
In iteration 6, we overestimated the amount of work we could handle within our first 2-week iteration (all our previous iterations were weekly), putting us off schedule by 20%. The additional tasks allocated for that iteration were meant to catch up with the scope to be presented at the Acceptance Presentation, but many of us were busy handing over work at our respective internship workplaces and could not meet the original plan. Following the lapse, we reviewed the workload per iteration and spaced out the work for future iterations.
<br><br>
In iteration 9, there was sufficient time to complete the planned tasks. However, the rush to finish before User Test 2 led to lower-quality unit testing by each developer. This resulted in a higher bug count than usual, and the team had to stop new development and spend the time fixing bugs instead. The schedule was missed by 30%, and we learned that each iteration should include buffer time for unexpected events.
<br><br>
In iteration 15, there were fewer tasks than in many previous iterations, yet the schedule was missed by approximately 20%. The team discussed this and attributed the lapse to project fatigue as well as submissions for other academic modules. While this could have been better planned for, it was difficult to forecast the effort each member's other projects would require. Nonetheless, the buffer time in the final iteration was eventually used to make up for the lapse.
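For clarity, a schedule metric in this spirit can be computed as below. The exact formula is on our Schedule Metric page; the sketch assumes the metric is actual effort over planned effort expressed as a percentage, and the iteration figures shown are hypothetical.

```python
def schedule_metric(planned_days, actual_days):
    """Assumed form of the metric: actual effort over planned effort,
    as a percentage. 100% means exactly on schedule."""
    return round(actual_days / planned_days * 100, 1)

def on_schedule(metric, low=90.0, high=110.0):
    """Acceptable band used by the team: 90%-110%."""
    return low <= metric <= high

# Hypothetical iteration figures, not the team's real numbers.
for iteration, (planned, actual) in {6: (14, 16.8), 9: (7, 9.1)}.items():
    m = schedule_metric(planned, actual)
    status = "on schedule" if on_schedule(m) else "activate action plan"
    print(f"Iteration {iteration}: {m}% -> {status}")
```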
===Bug Metric===
Bug Metric: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Bug_Metric Click Here]
<br>
Metric Value: [[Image:6bit_bugvalue.png|500px]]
<br><br>
Our acceptable range of bug points is under 20; there were 2 iterations where we exceeded it, and remedies were applied as per our action plan.
<br><br>
In iteration 11, we conducted User Test 2, which covered a wide range of functions. As a bigger system tends to have more bugs, the user test exposed bugs that were previously unknown to us. Immediately after the user test, we stopped new development and focused on fixing all the bugs before continuing. Steps were taken to allow more time per task so that developers in the team had more time to conduct unit testing before committing their code. Our team tester also drafted more comprehensive test cases for her testing work.
<br><br>
In iteration 15, there was another spike in the bug metric value as the team gradually faced project fatigue. We accepted that the team had been working on the system for the past 7 months and that many other school projects were due during that period. Even with the cause justified, we stopped new development and proceeded with a full day of bug fixing to ensure the system functioned properly.
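The stop-development trigger described above can be sketched as follows. The severity weights here are hypothetical (our actual point scheme is documented on the Bug Metric page), but the 20-point threshold matches the action plan.

```python
# Hypothetical severity weights; the team's actual point scheme is
# documented on the Bug Metric page linked above.
WEIGHTS = {"low": 1, "medium": 5, "high": 10}

def bug_points(severities):
    """Total weighted bug points logged for one iteration."""
    return sum(WEIGHTS[s] for s in severities)

def over_threshold(points, threshold=20):
    """Above the threshold, new development stops and bugs get fixed."""
    return points > threshold

iteration_bugs = ["high", "medium", "medium", "low", "low", "high"]
points = bug_points(iteration_bugs)
print(points, "-> stop new development" if over_threshold(points) else "-> continue")
```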
  
==Quality of product==
+
=<div style="background: #FF0080; background: -webkit-gradient(linear, left top, left bottom, from(#FF0080), to(#F660AB)); padding: 12px; font-weight: bold; text-align: center "><font color="white" size="6" >Quality of Product</font></div>=
  
Provide more details about the quality of your work. For example, you designed a flexible configurable system using XML.config files, uses Strategy Design Pattern to allow plugging in different strategy, implement a regular expression parser to map a flexible formula editor, etc.
+
{| class="wikitable"
 
+
|- style="background:#58ACFA; color:white"  
|align="center"| Stage
|align="center"| Specification
 
|-
|rowspan="3"| Project Management
|| Minutes
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Meeting_Minutes Click Here]
|-
|rowspan="2"| Metrics
|| Schedule metrics: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Schedule_Metric Click Here]
|-
|| Bug metrics: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Bug_Metric Click Here]
|-
|rowspan="4"| Requirements
|| Scope
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Deliverables Click Here]
|-
|| Scope Prioritization
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Technical_Overview#Scope_Prioritization Click Here]
|-
|rowspan="2"| Problem Scenario
|| As-Is: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Deliverables#As-Is Click Here]
|-
|| To-Be: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Deliverables#To-Be Click Here]
|-
|rowspan="3"| Analysis
|| Use case
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_PD_Technical_Diagrams#Use_Case_Diagram Click Here]
|-
|| Business Process Diagram
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Bug_Metric Click Here]
|-
|| Screen Shots
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_PD_User_Interface_Prototype Click Here]
|-
|rowspan="4"| Design
|| Logical Diagram
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_PD_Technical_Diagrams#Logical_Diagram Click Here]
|-
|| Class Diagram
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_PD_Technical_Diagrams#Class_Diagram Click Here]
|-
|| Sequence Diagrams
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_PD_Technical_Diagrams#Sequence_Diagram Click Here]
|-
|| Data Architecture
|| [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Technical_Overview#Data_Architecture Click Here]
 +
|-
 +
|| Testing
 +
|| Test plan
 +
|| [[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management#Testing Click Here]]
 
|-
 
|-
  
|| [http://en.wikipedia.org/wiki/Deployment_diagram Deployment Diagram]
 
|| [[IS480_Midterm_Wiki#Deployment: | instructions]]
 
 
|}
 
|}
  
Not all parts of the deliverables are necessary but the evidence should be convincing of the scope.
+
==Key Performance Indicators (KPI)==
To benchmark ourselves against specific goals, we derived a set of indicators to understand our progress.
<br>
[[Image:6bit_kpi.png]]
<br>
The following figures are accurate as of 21st November 2012.
<br>
505 real members
<br>
106 real transactions
<br>
189 real physical items sold
<br>
$2536.50 real revenue
<br>
9 registered and active sellers
<br>
62 real products
<br>
49 days of operations
<br>
<div style="text-align: right;">
<font size="2">''[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Learning_Outcomes Click Here to see our Learning Outcome!]''</font>
</div>
  
==Testing==
We conducted a total of 4 user tests, each with a different coverage and test methodology.
<br>
[[Image:6bituser-testingoverview.png|500px]]
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management_UT1#User_Testing Click Here to view the details of User Test 1]
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management_UT2#User_Testing Click Here to view the details of User Test 2]
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management_UT3#User_Testing Click Here to view the details of User Test 3]
<br>
[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Project_Management_UT4#User_Testing Click Here to view the details of User Test 4]
<br>
=<div style="background: #FF0080; background: -webkit-gradient(linear, left top, left bottom, from(#FF0080), to(#F660AB)); padding: 12px; font-weight: bold; text-align: center "><font color="white" size="6" >Reflection</font></div>=
  
==Sponsor Comment==
[https://dl.dropbox.com/u/56071797/Sponsor%28DotCom%29%20Evaluation.doc Click Here to Download our Sponsor's Comments!]
==Team Reflection==
<div style="text-align: center">
Learning Outcome: [https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Learning_Outcomes#Learning_Outcome Click Here]
</div>
[[Image:6-bit_teamReflection.png|850px|center]]

==Individual Reflection==
[[Image:6-bit_tianxiangReflection.png|850px|center]]
<br>
[[Image:6-bit_geksengReflection.png|850px|left]]
<br>
[[Image:6-bit_houstonReflection.png|850px|right]]
<br>
[[Image:6-bit_aloysiusReflection.png|850px|center]]
<br>
[[Image:6-bit_kennethReflection.png|850px|left]]
<br>
[[Image:6-bit_huilingReflection.png|850px|right]]
<br>
 

Latest revision as of 15:05, 5 December 2012



Project Progress Summary

Final Presentation Slides


Our website: http://www.chapalang.com

Project Overview

‎Click Here to View our X-Factors!
Problem Scenario: ‎Click Here
Team 6-bit was formed in May 2012. We battled through 16 iterations of planning, designing, constructing and deploying our project, Chapalang!, and successfully completed it while keeping to our planned scope. Throughout those 16 iterations we had our ups and downs. We faced several unexpected technical challenges that lowered the team's morale, and members with different ways of solving problems created communication issues. However, with a common goal in mind, we endured and searched frantically for workarounds and alternatives to overcome these challenges. We also faced project management challenges, where some tasks were not completed on time and disrupted our schedule. These challenges allowed us to iron out our differences and streamline a better working process. After overcoming them, we had improved as individuals and as a team!


6-bit poster.jpg


Click here to view our 1min Video Pitch!

6-bit videopitch.gif















Project Challenges

The team faced several challenges throughout the IS480 journey. The bulk of Chapalang! is a highly functional and complete solution to a business opportunity of our sponsor, so its complexity does not rest on any single sophisticated function.

However, it is a cumulatively complex system formed from several functions, and the team was challenged at different levels by integration issues and by the micro-components of the project. One example of a micro-component is PayPal integration: PayPal's documentation is brief and changes frequently, which caused integration difficulties that were eventually resolved. Another example is our image-cropping tool, where a significant amount of time was spent getting it right, especially since none of us had ever ventured into image manipulation.

Nonetheless, two other independent challenges taught the team many lessons.

Scalability and Load Testing

As part of User Test 4, we conducted a test to better understand the scalability and load handling of our system.
To reiterate, performance measures the speed with which a single request can be executed, while scalability measures the ability to maintain that performance under increasing load. Conducting the test required the following steps.

  • Performance Testing
    • Identify a system process with a series of activities
    • Measure the elapsed time of each activity
    • Identify the bottleneck: the single activity with the highest elapsed time
  • Scalability Testing
    • Increase the number of concurrent connections for the identified activity and measure the elapsed time of each connection
    • Count the number of concurrent connections possible within approximately the same elapsed time per connection
  • Load Testing
    • Measure the elapsed time of each connection over a range of concurrent connections, using 1, 25, 50, 75 and 100 as markers
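The performance-testing step above can be sketched as a simple timing harness. This is only an illustration: the activity names and timings below are invented, and this is not the actual Chapalang Benchmark code.

```php
<?php
// Time each activity in a process and report the bottleneck (the
// activity with the highest elapsed time). Activities are hypothetical
// stand-ins, not the real Chapalang Benchmark instrumentation.
function timeActivity(callable $activity): float {
    $start = microtime(true);          // high-resolution start time
    $activity();
    return microtime(true) - $start;   // elapsed seconds
}

$activities = [
    'load_config' => function () { usleep(1000); },   // ~1 ms
    'query_db'    => function () { usleep(20000); },  // ~20 ms
    'render_page' => function () { usleep(5000); },   // ~5 ms
];

$elapsed = [];
foreach ($activities as $name => $fn) {
    $elapsed[$name] = timeActivity($fn);
}

arsort($elapsed);                       // highest elapsed time first
$bottleneck = array_key_first($elapsed);
echo "Bottleneck: $bottleneck\n";
?>
```

With the invented timings above, the reported bottleneck is the simulated database query, since it dominates the elapsed time.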

Difficulty in accurately simulating multiple concurrent connections

Amongst all the steps involved, performance testing was the easiest to conduct because we had already developed Chapalang Benchmark, which measures the elapsed time from the moment the framework receives a command until the output is prepared and sent to the end user.

However, the first challenge appeared when simulating multiple concurrent connections. We tried several options for the simulation:

  • Asynchronous Javascript (AJAX)

We started with AJAX because we were most familiar with it and it is among the easiest to implement. However, when we observed start_time in Chapalang Benchmark, each connection had a different start_time, even though it overlapped with the start_time and end_time of the previous connection.

At this stage, we cleared up a misunderstanding about asynchronous calls that we had held all along. While AJAX can initiate multiple asynchronous calls, each opening a new connection, it does not initiate them all concurrently: it still initiates the calls sequentially, merely without waiting for the previously initiated connection to close before starting the next.

As such, we realized that AJAX could not accurately simulate the same-second concurrent connections we needed for our test.

  • PHP popen()

After some research and tests, we found popen(), which opens a pipe to a process executed by forking a command. It can create an environment where executions run in parallel, which makes it a candidate for simulating concurrency.

While it is fairly easy to use, it offers only a unidirectional pipe and therefore cannot return a connection response to confirm the number of concurrent connections registered on the server side. As such, it was not a suitable tool for our test.
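The unidirectional nature of popen() is easy to demonstrate in a minimal sketch: a pipe is opened either for reading or for writing, never both.

```php
<?php
// popen() forks a command and returns a ONE-WAY pipe: mode 'r' lets us
// read the command's output, mode 'w' lets us write to its input, but a
// single handle cannot do both. This is why it could not give us a
// connection response to confirm concurrent connections server-side.
$handle = popen('echo hello', 'r');    // read-only pipe to the command
$output = trim(fgets($handle));        // read the command's output
pclose($handle);
echo $output . "\n";
?>
```

This runs the shell's `echo` purely as an example; the limitation it shows is the pipe direction, not the command.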

  • PHP fsockopen()

We went back to research and found that fsockopen() offers both concurrency and a bidirectional pipe.

As we developed a simple script to simulate concurrent users, we hit a problem: the Apache web server crashed at an unexpectedly low 96 concurrent connections. After some investigation, we realized that fsockopen() itself opens the same number of connections as Apache is expected to receive and create. Essentially, the server had to create double the number of connections; a check of the background processes on the server side revealed a total of 156 registered Apache processes.

This biased our test, because the intention was to create concurrency at the receiving end, but the execution itself distorted the results. We attempted to solve it by hosting the wrapper file for fsockopen() on our localhost machine, but we really wanted to take the network-performance uncertainty of individual users out of the equation and normalize the test solely on our system's performance.

Again, we needed a method to simulate concurrency at the web server without creating additional connections of its own.

  • PHP curl_multi_exec()

Finally, we chanced upon the curl_multi_exec() method in the PHP 5 documentation. PHP's cURL extension is built on libcurl, and the curl_multi functions let us add multiple cURL handles to a stack and execute their sub-connections in parallel.

This method did exactly what we needed: a single script driving multiple connections executed in parallel. The overall processing time was also several times faster than our earlier fsockopen() attempt.
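The curl_multi approach can be sketched as follows. The URL and connection count are placeholders, not our actual benchmark endpoint; the point is the single-process, parallel-transfer pattern.

```php
<?php
// Sketch of simulating N concurrent requests with curl_multi: one PHP
// process, multiple transfers driven in parallel by libcurl, with no
// extra server-side connections created by the simulator itself.
// The URL is a placeholder, not our actual benchmark endpoint.
$url = 'http://www.example.com/';
$n = 5;

$multi = curl_multi_init();
$handles = [];
for ($i = 0; $i < $n; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body instead of printing
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 3);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_multi_add_handle($multi, $ch);
    $handles[] = $ch;
}

// Drive all transfers until every sub-connection has completed.
do {
    $status = curl_multi_exec($multi, $running);
    curl_multi_select($multi);          // wait for activity instead of busy-looping
} while ($running > 0 && $status === CURLM_OK);

foreach ($handles as $ch) {
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);
?>
```

Each handle in the stack is a sub-connection; curl_multi_exec() advances all of them together, which is what let us register same-second concurrent connections at the server.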

Personalization Analytics

The personalized dashboard was a feature adopted at a later stage of the project, so the natural difficulty was time constraints. In the process, we went through the standard cycle of research, prototyping, implementation, integration and testing. Every stage was time-consuming and plagued with problems.

More concretely, although we were able to map out an analytical process, the most challenging task was identifying a suitable data-driven semantic algorithm to suit our needs. Due to time constraints, our research sources were largely limited to consultations with professors and online research.

We spoke to an Adjunct Professor who is also a Consulting Researcher at a government research agency, and he recommended that we try

Multivariate Distribution-based Clustering


6-bitChapalang1.png
The diagram above illustrates a sample of Multivariate Distribution-based Clustering. Clusters are defined as the objects most likely to belong to the same distribution. A convenient property of this approach is that it closely resembles the way artificial data sets are generated: by sampling random objects from a distribution.

While the theoretical foundation of these methods is excellent, they suffer from one key problem, overfitting, unless constraints are put on the model complexity. A more complex model can usually explain the data better, which makes choosing the appropriate model complexity inherently difficult.

Distribution-based clustering is a semantically strong method, as it not only provides clusters but also produces complex models of them that can capture the correlation and independence of attributes. However, these algorithms put an extra burden on the user: choosing an appropriate data model to optimize, and for many real data sets there may be no mathematical model available that the algorithm is able to optimize.

In short, we could not simply build a system model for our analytics engine on this statistical model.

Latent Semantic Index (LSI)

While researching deeper into semantic analysis, we discovered another model, Latent Semantic Indexing (LSI). LSI is an indexing and retrieval method that uses a mathematical technique called Singular Value Decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text.

LSI is based on the principle that words used in the same contexts tend to have similar meanings, and its key feature is extracting the conceptual content of a body of text by establishing associations. The key benefit of LSI is that it overcomes two of the most problematic constraints of Boolean keyword queries: synonymy (different words with similar meanings) and polysemy (words with more than one meaning).

However, LSI was patented in the late 1980s, and to this day we could find no open-source materials that we were able to make use of in our project.

Naïve Bayes Classifier (NBC)

Finally, we went back to the basics of statistics and found that Bayesian statistics supports the Naïve Bayes Classifier (NBC). NBC is a simple probabilistic classifier based on Bayes' Theorem with strong independence assumptions. Its major, naïve assumption is that all input features are independent and contribute separately to the object. For example, an orange is "round" and is about 4 inches in "diameter". Although diameter is a measurement for rounded objects, making "diameter" dependent on "round", NBC assumes the two are independent, which may oversimplify the results.

While NBC appears skewed and biased in many applications, it matches what we need, because we have an exhaustive list of product and topical categories from which to make recommendations. Hence, the constraints of objectivity and dependence in the results do not create a new problem for our analytics engine.

Correspondingly, we found an open-source script that applies NBC and tested it to work. It is also possible to configure NBC to accept a different weightage for each independent characteristic.

By default, each characteristic has an equal weightage, with a combined maximum of 1. We found this particularly useful because we can adjust the weightage to suit different circumstances. First, the weightage can be adjusted if there is a particular category of products or topics that we want to expose more often to system users; this can potentially be a model that suits targeted advertising. Second, we can incorporate machine learning that skews the weightage according to a user's actual and subsequent activities. Since we capture user activities in our system, that data can help automate the skewing of weightage, providing a data-driven model based not solely on historical data but also on more recent activity.
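A toy illustration of the weighted scoring idea described above. The categories, feature likelihoods and weights here are invented for illustration; this is not the open-source NBC script we used.

```php
<?php
// Toy weighted Naive Bayes score: prior probability times the product of
// per-feature likelihoods, each raised to a configurable weight. Equal
// weights summing to 1 reproduce the default behaviour; skewing a weight
// boosts that feature's influence (e.g. for targeted advertising).
// All numbers are invented for illustration.
function nbScore(float $prior, array $likelihoods, array $weights): float {
    $score = $prior;
    foreach ($likelihoods as $feature => $p) {
        $score *= pow($p, $weights[$feature]);
    }
    return $score;
}

// P(feature | category = "orange"), as in the round/diameter example.
$likelihoods = ['round' => 0.9, 'diameter_4in' => 0.7];

$equal  = nbScore(0.5, $likelihoods, ['round' => 0.5, 'diameter_4in' => 0.5]);
$skewed = nbScore(0.5, $likelihoods, ['round' => 0.8, 'diameter_4in' => 0.2]);

// Skewing weight toward the stronger feature ("round") raises the score.
var_dump($skewed > $equal);
?>
```

The same mechanism extends to per-category weights learned from captured user activity, which is how the machine-learning skewing described above would plug in.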



Project Management

Schedule

6-bit schedule.png

Detailed Schedule: Click Here

Schedule Changes

Throughout the project, there were several scope adjustments that affected the schedule directly. The following documents some of the noteworthy changes:

60bit schedulechanges.png


The tasks and time allocated to the removed functions, in iterations 14-15, were replaced with tasks for the newly added functions.


Click Here to see how we prioritize our scope!
Visit http://www.chapalang.com to try these features out yourself! =)

Metrics

Schedule Metric

Schedule Metric: Click Here
Metric Value: 6bit schedulevalue.png

The acceptable range for the schedule metric value is between 90% and 110%, catering for the natural inaccuracy between the forecast and the actual time required to complete a task. Most of our iterations were on schedule, with the exception of 3.
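Assuming the metric is the ratio of planned to actual time spent (the exact formula is defined on our Project Management page), the acceptable-range rule can be sketched as:

```php
<?php
// Hypothetical schedule-metric check: planned vs. actual task-days for
// an iteration, flagged when outside the 90%-110% band. This is only an
// illustration of the range rule, not our exact metric formula.
function scheduleMetric(float $plannedDays, float $actualDays): float {
    return 100.0 * $plannedDays / $actualDays;
}

function onSchedule(float $metric): bool {
    return $metric >= 90.0 && $metric <= 110.0;
}

$metric = scheduleMetric(10.0, 12.5);   // took 25% longer than planned
echo $metric . "\n";                    // 80
var_dump(onSchedule($metric));          // false: off schedule, action plan applies
?>
```

An iteration that takes 25% longer than planned scores 80%, outside the band, which is roughly the situation described for iterations 6, 9 and 15 below.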

In iteration 6, we overestimated the amount of work we could handle within our first 2-week iteration, putting us 20% off schedule. Previously, all our iterations had been weekly. The additional tasks allocated for that iteration were meant to catch up with the scope to be presented at the Acceptance Presentation, but many of us were busy handing over work at our respective internship workplaces and could not meet the original plan. After this schedule lapse, we reviewed the workload per iteration and spaced out the work for future iterations.

In iteration 9, there was sufficient time to complete the tasks planned for the iteration. However, the rush to complete them before User Test 2 led to lower-quality unit testing by each developer, which resulted in higher bug counts than usual; the team had to stop new development and spend the time fixing bugs instead. As such, the schedule slipped by 30%, and we learned that each iteration should include buffer time for unexpected events.

In iteration 15, there were fewer tasks than in most previous iterations, yet the schedule slipped by approximately 20%. The team discussed this and attributed the lapse to project fatigue as well as submissions due for other academic modules. While this could have been planned for better, it was difficult to forecast the effort required for each member's other projects. Nonetheless, the buffer time in the final iteration was eventually used to make up for the schedule lapse in iteration 15.

Bug Metric

Bug Metric: Click Here
Metric Value: 6bit bugvalue.png

As our acceptable range of bug points is under 20, there were 2 iterations where we exceeded it, and remedies were applied as per our action plan.

In iteration 11, we conducted User Test 2, which covered a wide range of functions. As a bigger system tends to have more bugs, the user test exposed bugs that were previously unknown to us. Immediately after the user test, we stopped new development and focused on fixing all the bugs before continuing. We also allocated more time per task so that developers could conduct unit testing before committing their code, and our team tester drafted more comprehensive test cases.

In iteration 15, there was another spike in the bug metric value as the team gradually succumbed to project fatigue. We accepted that the team had been working on the system for the past 7 months and that many other school projects were due during that period. Even so, having justified the cause, we stopped new development and spent a full day fixing bugs to ensure the system functioned properly.

Quality of Product

Stage Specification Modules
Project Management Minutes ‎Click Here
Metrics Schedule metrics: ‎Click Here
Bug metrics: ‎Click Here
Requirements Scope Click Here
Scope Prioritization Click Here
Problem Scenario As-Is: Click Here
To-Be: Click Here
Analysis Use case Click Here
Business Process Diagram Click Here
Screen Shots Click Here
Design Logical Diagram Click Here
Class Diagram Click Here
Sequence Diagrams Click Here
Data Architecture Click Here
Testing Test plan Click Here

Key Performance Indicators(KPI)

To benchmark ourselves against specific goals, we derived a set of indicators to understand our progress.
6bit kpi.png
The following figures are accurate as of 21st November 2012.
505 real members
106 real transactions
189 real physical items sold
$2536.50 real revenue
9 registered and active sellers
62 real products
49 days of operations

Testing

We conducted a total of 4 user tests, each with a different coverage and test methodology.
6bituser-testingoverview.png
Click Here to view the details of User Test 1
Click Here to view the details of User Test 2
Click Here to view the details of User Test 3
Click Here to view the details of User Test 4

Reflection

Click Here to Download our Sponsor's Comments!

Team Reflection

Learning Outcome: ‎Click Here

6-bit teamReflection.png

Individual Reflection

6-bit tianxiangReflection.png


6-bit geksengReflection.png


6-bit houstonReflection.png


6-bit aloysiusReflection.png


6-bit kennethReflection.png


6-bit huilingReflection.png