IS480 Team wiki: 2012T1 6-bit Final Wikipage
<div style="text-align: right;">
<font size="2">''[https://wiki.smu.edu.sg/is480/IS480_Team_wiki:_2012T1_6-bit_Technical_Overview#Scope_Prioritization Click Here to see how we prioritize our scope!]''</font>
</div>
Revision as of 20:37, 3 December 2012
- 1 Project Progress Summary
- 2 Project Management
- 3 Quality of Product
- 4 Reflection
Project Progress Summary
Click Here to View our X-Factors!
Problem Scenario: Click Here
Team 6-bit was formed in May 2012. We battled through 16 iterations of planning, designing, constructing and deploying our project, Chapalang!, and successfully completed it while keeping to our planned scope. Throughout the 16 iterations we had our ups and downs. We faced several unexpected technical challenges that brought down the team's morale, and members with different approaches to solving problems created communication problems. However, with a common goal in mind, we endured and searched frantically for workarounds and alternatives to overcome these challenges. We also faced project management challenges, where some tasks were not completed on time and disrupted our schedule. These challenges allowed us to iron out our differences and streamline a better working process. After overcoming them, we improved as individuals as well as a team!
What unexpected events occurred:
- Accidental deletion of live website database
- Website server down
- Breakdown of a member's machine
The team faced several challenges throughout the IS480 journey. The bulk of Chapalang! is a highly functional and complete solution to our sponsor's business opportunity, so its value does not rest on a single sophisticated function.
It is, however, a cumulatively complex system built from many functions, and the team was challenged at different levels by integration issues and by the project's micro-components. One example of a micro-component is PayPal integration: PayPal's documentation was brief and changed frequently, causing integration difficulties that were eventually resolved. Another example is our image-cropping tool, where a significant amount of time was spent getting it right, especially since no one on the team had ever ventured into image manipulation.
Nonetheless, two other independent challenges introduced many learning lessons for the team.
Scalability and Load Testing
As part of our User Test 4, we attempted to gain a better understanding of the scalability and load handling of our system by conducting a test.
To reiterate, performance measures the speed with which a single request can be executed, while scalability measures the system's ability to maintain that performance under increasing load. Conducting the test requires the following steps.
- Performance Testing
- Identify a system process with a series of activities
- Measure the elapsed time of each activity
- Identify the bottleneck: the single activity with the highest elapsed time
- Scalability Testing
- Increase the number of concurrent connections for the identified activity and measure the elapsed time of each connection
- Count the number of concurrent connections possible within approximately the same elapsed time per connection
- Load Testing
- Measure the elapsed time of each connection over a range of concurrent connections, using 1, 25, 50, 75 and 100 as markers
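The timing step above can be sketched in PHP. The helper below is a hypothetical stand-in, not the team's actual Chapalang Benchmark; it only illustrates measuring elapsed time with microtime(), with a usleep() standing in for the real request:

```php
<?php
// Minimal sketch of performance timing: record a start time, run the
// activity, and return the elapsed wall-clock time in seconds.
function timeActivity(callable $activity): float {
    $start = microtime(true);          // wall-clock start, in seconds
    $activity();                       // the activity under measurement
    return microtime(true) - $start;   // elapsed seconds
}

// Stand-in activity: sleep 50 ms instead of issuing a real HTTP request.
$elapsed = timeActivity(function () {
    usleep(50000);
});
printf("Elapsed: %.3f s\n", $elapsed);
```

In a real run the callable would issue the HTTP request for the activity under test, and the per-activity elapsed times would identify the bottleneck.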
Difficulty in simulating accurate multiple concurrent connections
Amongst all the steps involved, performance testing was the easiest to conduct because we had already developed Chapalang Benchmark, which measures the elapsed time from the moment the framework receives the command until the output is prepared and sent to the end-user.
However, the first challenge appeared when simulating multiple concurrent connections. We tried several options for the simulation:
- AJAX
We started off with AJAX because we were most familiar with it and it is one of the easiest options to implement. However, when we observed the start_time in Chapalang Benchmark, each connection had a different start_time, even though it overlapped with the start_time and end_time of the previous connection.
At this stage, we cleared up a misunderstanding about asynchronous calls that we had held all along. While AJAX is capable of initiating multiple asynchronous calls, each of which opens a new connection, it does not initiate all the calls at the same instant. It initiates them sequentially; it simply does not wait for a previously initiated connection to close before initiating the next one.
As such, we realized that AJAX would not accurately simulate the same-second concurrent connections we needed for our test.
- PHP popen()
After some research and tests, we found popen(), which opens a pipe to a process executed by forking the given command. It is possible to create an environment where executions run in parallel, which would be suitable for simulating concurrency.
While it is fairly easy to use, popen() offers only a unidirectional pipe, so it cannot give us a connection response to confirm the number of concurrent connections registered on the server side. As such, it was not a suitable tool for our test.
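A minimal sketch of that one-way limitation (the echo commands are placeholders, not our test driver): popen() must be opened in either read ("r") or write ("w") mode, so a script can start several processes but cannot both send a request and read a confirmation through the same pipe.

```php
<?php
// popen() forks a command and returns a one-way pipe: mode "r" reads the
// command's output, mode "w" writes to its stdin -- never both at once.
// Several handles can be open in parallel, but each is unidirectional.
$handles = [];
for ($i = 0; $i < 3; $i++) {
    $handles[] = popen('echo worker-' . $i, 'r'); // read-only pipe
}
foreach ($handles as $h) {
    echo fgets($h);   // read the single line each command produced
    pclose($h);
}
```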
- PHP fsockopen()
We went back to research a new solution, and it appeared that fsockopen() would help us with concurrency while providing a bidirectional pipe.
As we developed a simple script to simulate concurrent users, we hit a problem: the Apache web server crashed at an unexpectedly low 96 concurrent connections. After some investigation, we realized that the fsockopen() driver itself opens the same number of connections as Apache is expected to receive and create. Essentially, the server had to create double the number of connections, and a check of the server's background processes revealed a total of 156 registered Apache processes.
This biased our test: the intention was to create concurrency at the receiving end, but the execution itself skewed the results. We attempted to work around it by hosting the wrapper file for fsockopen() on our localhost machine, but we really wanted to remove the network-performance uncertainty of each user's connection and normalize the results solely on our system's performance.
Again, we needed a methodology that could simulate concurrency at the web server without creating additional connections of its own.
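For reference, a minimal fsockopen() probe (the host and port below are placeholders): the socket is bidirectional, so the script both writes a raw HTTP request and reads the status line back, and each such probe is itself a full client connection of its own.

```php
<?php
// fsockopen() returns a bidirectional socket: we can write a raw HTTP
// request and read the response on the same handle. Each call is a full
// client connection, so a driver running on the web server itself adds
// its own connections on top of the ones being measured.
$host = 'localhost';                                        // placeholder target
$fp = @fsockopen($host, 80, $errno, $errstr, 2);            // 2 s timeout
if ($fp) {
    fwrite($fp, "HEAD / HTTP/1.0\r\nHost: $host\r\n\r\n");  // write side
    echo fgets($fp);                                        // read side: status line
    fclose($fp);
} else {
    echo "connect failed: $errstr\n";
}
```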
- PHP curl_multi_exec()
Finally, we chanced upon the curl_multi_exec() method in the PHP 5 documentation manual. PHP's cURL extension wraps libcurl, a library for transferring data with URLs; the curl_multi family lets a script register several cURL handles on one stack and execute their transfers in parallel.
This method did exactly what we needed: a single driver script multiplexing multiple connections during execution. The overall processing time was also several times faster than the fsockopen() approach we attempted earlier.
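A minimal sketch of the approach (the URLs are placeholders): all handles are registered on one multi handle and driven by a single loop, so the requests are in flight concurrently without the driver spawning an extra process per worker.

```php
<?php
// Fire several HTTP requests concurrently with curl_multi.
$urls = ['http://localhost/', 'http://localhost/', 'http://localhost/'];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $u) {
    $ch = curl_init($u);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body, don't echo
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);           // give up after 5 s
    curl_multi_add_handle($mh, $ch);                // register on the stack
    $handles[] = $ch;
}

// Drive all transfers in one loop until every handle has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for socket activity, don't busy-loop
    }
} while ($running && $status === CURLM_OK);

// Report per-connection status and elapsed time, then clean up.
foreach ($handles as $ch) {
    printf("HTTP %d in %.3f s\n",
        curl_getinfo($ch, CURLINFO_HTTP_CODE),
        curl_getinfo($ch, CURLINFO_TOTAL_TIME));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```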
Detailed Schedule: Click Here
Throughout the project there were several scope adjustments and changes, and they affected the schedule directly. The following documents some of the noteworthy changes:
The tasks and time allocated to the removed functions in iterations 14-15 were replaced with tasks for the newly added functions.
Quality of Product
- Project Management: Minutes: Click Here
- Metrics: Schedule metrics: Click Here; Bug metrics: Click Here
- Scope Prioritization: Click Here
- Problem Scenario: As-Is: Click Here; To-Be: Click Here
- Analysis: Use case: Click Here; Business Process Diagram: Click Here; Screen Shots: Click Here
- Design: Logical Diagram: Click Here; Class Diagram: Click Here; Sequence Diagrams: Click Here; Data Architecture: Click Here
- Testing: Test plan: Click Here
Key Performance Indicators (KPI)
To benchmark ourselves against specific goals, we derived a set of indicators to track our progress.
The following figures are accurate as of 21st November 2012.
- 505 real members
- 106 real transactions
- 189 real physical items sold
- $2,536.50 real revenue
- 9 registered and active sellers
- 62 real products
- 49 days of operations
A total of 4 User Tests were conducted, each with a different coverage and test methodology.
Click Here to view the details of User Test 1
Click Here to view the details of User Test 2
Click Here to view the details of User Test 3
Click Here to view the details of User Test 4
Learning Outcome: Click Here