Revision as of 03:11, 15 November 2010
Main Page: IS480 Team wiki: 2010T2 Good Mix
Project Progress Summary
This section covers the sudden requirement changes and requests since midterm that we took up. More information on how requirement changes are handled is available here.
|1||User requested customizing symbols by choosing from a list of images||Impact: high, as it affects many other functionalities
Difficulty: high, as no prior research had been done on this.
|Team consulted the sponsor with the following options:
1. Implement the change, but the outcome is not the responsibility of Goodmix
a. There might be major bugs that cannot be solved, forcing a revert
b. Less time to work on existing bugs, but still able to pass UAT
2. Do not implement the change and focus on debugging
The sponsor chose option 1.
|Team split into a coding team (Bernard, Shazlee and George) and a project management team (Naresh and Jess) to work concurrently.
Scenario 1(b) occurred.
|2||New “find coordinates” function requested on 8th November, to be up by 10th November for UAT||Impact: low, because it is a standalone function
Difficulty: low, because similar techniques had been used before
|We went ahead with the request, but the tight deadline was a challenge, so collaboration was critical. Bernard had to finish the coding and UI before passing it to Naresh to update the Test Plan and to Shazlee to update the User Guide.||Request completed and RIBA tested before UAT|
|3||The client failed the spatial error handling section of the UAT conducted on 10 November 2010. If this is not addressed, the UAT as a whole fails.||Impact: low, because it does not affect other code
Difficulty: medium, as previous attempts to provide specific error messages had failed.
|Team was offered 2 options by the sponsor:
1. Fix it. 2. Do not fix it, and write an explanatory statement to be submitted for approval to the client who graded the fail.
|Jess came up with an idea and managed to implement the specific error handling.|
1) Expectations of end users from different departments in the biodiversity center
Even though our source of requirements is our sponsor, we attended all usability study sessions to get direct feedback from the end users. This meant indirectly managing users’ expectations, which can differ and are sometimes contradictory. After each usability study session, we discussed with our sponsor during the next meeting to prioritize the changes requested for the next deliverable. We also made use of our project metrics to help evaluate the requests. This plan worked well for us.
2) Managing many milestones and deliverables
Other than FYP milestones, there are biweekly sponsor meetings and client usability sessions, each expecting a different RIBA deliverable. We took a “just keep going” attitude to deliver what was expected to the best of our ability. We would first meet our sponsor to show him what we had done before he organized the usability study sessions with KOOPrime and National Parks. This is both a challenge and a benefit for GoodMix: with many milestones, we were able to keep our schedule on track regularly.
Scheduling was the most complex task in this project. Due to the nature of our development process, we designed our own method, which we call dynamic scheduling, to mitigate the disadvantages of adopting this process. The method was effective: it helped us clear many milestones and satisfy our stakeholders.
Although we make use of external libraries, we wrote our own methods to access them and our own logic to integrate them with RIBA's functionalities. When a library cannot do what we hope, we modify it or seek alternatives such as using PostGIS functions.
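The access-method idea above can be sketched as follows. This is only an illustrative Python sketch (RIBA itself was built in Flex and PHP), and every class and method name here is invented: features call our own access layer rather than the external library directly, so a library can be modified or swapped out without touching feature code.

```python
# Illustrative sketch of wrapping an external library behind our own
# access method. All names (ExternalGeomLib, GeometryService) are
# invented for this example; they are not RIBA's actual classes.

class ExternalGeomLib:
    """Stand-in for a third-party geometry library."""
    def buffer(self, x, y, radius):
        return {"center": (x, y), "radius": radius}

class GeometryService:
    """Our access layer: features call this, never the library directly,
    so the backend can be replaced (e.g. by a PostGIS-based one)."""
    def __init__(self, backend=None):
        self.backend = backend or ExternalGeomLib()

    def point_buffer(self, x, y, radius):
        # Our own validation logic sits in front of the library call.
        if radius <= 0:
            raise ValueError("buffer radius must be positive")
        return self.backend.buffer(x, y, radius)

service = GeometryService()
zone = service.point_buffer(103.8, 1.35, 500)
```

Because only `GeometryService` knows about `ExternalGeomLib`, replacing the library means changing one class rather than every feature.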
Making RIBA interactive and getting the architecture right took a lot of planning, research, learning, exploration and testing at the backend.
Project Schedule (Plan vs. Actual):
Changes to the project schedule up to midterm were elaborated previously, so this comparison starts at phase 6, where we faced some of our most turbulent phases. Even with many changes to the schedule, we are on track! This is probably because we sacrificed our plan, mentioned at midterm, to freeze coding after phase 6, in order to implement changes and make RIBA better.
Note: The difference between “new” and “for user experience” is that “new” refers to requests made by the sponsor, while the latter refers to the team’s own initiative.
We increased our “working hours” by meeting up on weekdays from 10am to 6pm during the week 8 midterm break. While working hard, we did not forget to relax a little by taking weekends off. This allowed us to accomplish many new requirements on top of the allocated tasks, and even to bring some tasks forward. However, we were quite concerned about the 1-week delay to our last usability session.
This was the most stressful phase, as we were approaching presentation week for our other modules. The sponsor UAT was also delayed by a week because of the phase 6 usability session delay, to give us time to implement feedback from that session.
On a happier note, we were glad to see our hard work pay off in this phase. By pushing ourselves to make all the changes, we saw a significant drop in schedule changes and requests.
Although week 12’s National Parks road show was canceled, the client UAT was still delayed by a week due to the knock-on effect of the phase 6 delay and the sudden request mentioned above. However, we managed to polish up RIBA and made it to the UAT, after which we could finally stop development work.
These are the 3 metrics that Goodmix uses, together with a summary of the metrics collected:
In view of the tight schedule and the high risk of requirement changes, GoodMix created a requirement evaluation metric to help us manage. Impact and difficulty scores are proposed by the person in charge of the function, and the team discusses whether the scores are reasonable. After many months of working on RIBA, it is safe to say that we can judge these measurements reasonably.
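The wiki does not state how the impact and difficulty scores are combined, so the following is only a guessed sketch of such a metric: each score maps to a number, and the product ranks requests for the team to discuss. The 1–3 scale and the product-based ranking are assumptions, not GoodMix's documented rules.

```python
# Hypothetical requirement evaluation metric. The low/medium/high
# scale and the product formula are assumptions for illustration;
# the actual GoodMix scoring rules are not given on this page.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def evaluate(impact, difficulty):
    """Combine the proposed impact and difficulty into one score."""
    return LEVELS[impact] * LEVELS[difficulty]

# Example requests drawn from the change table above.
requests = [
    ("symbol customization", "high", "high"),
    ("find coordinates", "low", "low"),
    ("spatial error handling", "low", "medium"),
]
ranked = sorted(requests, key=lambda r: evaluate(r[1], r[2]), reverse=True)
```

Ranking by a combined score gives the team a starting point for the discussion of whether each proposed score is reasonable.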
Comparison of the midterm and final risks:
(to be filled up)
We ranked the functions by technical complexity and provide explanations for the top 3:
|1|| Symbol Customization
This is the most technical component for us: an earlier attempt to customize the layers’ symbols with a color picker had failed, which is why we changed it to randomized color symbols, and no prior research specific to this function had been done. Furthermore, this feature affects most of the functionalities, since they were built on top of the marker population algorithm. Achieving a dynamic icon list, where administrators just need to upload icons to the symbol folder, was one of the most satisfying feelings.
|2|| Layer Control Manager
This function is considered a basic feature, but it was one of the most difficult to build, particularly in the early stages. Had we not delivered it in time, it would have been a critical bottleneck for the rest of RIBA's development. It is also the feature that morphed the most throughout the project, from UI changes to logic changes.
|3|| Spatial Search
Spatial search is both difficult and special at the same time. Through our usability sessions, this was the function our end users seemed most awed by. It is also another function that transformed a lot through the phases: from single-point buffer to multiple-point buffer to polygon buffers. We met many hiccups during its development, and as it is a geospatial concept search, a lot of careful consideration and revision went into its UI.
|11||Map Export to Images (Snapshot)|
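As a rough sketch of how a buffer-style spatial search could be expressed with PostGIS, the query-building step might look like the following. The table name, column name and SRID are placeholders invented for this example; RIBA's actual schema is not documented on this page.

```python
# Hypothetical query builder for a point-buffer spatial search using
# PostGIS. "species_sightings", "geom" and SRID 4326 are illustrative
# placeholders, not RIBA's real schema.

def buffer_search_sql(table, lon, lat, radius_m):
    """Build a parameterized PostGIS query selecting rows whose
    geometry lies within radius_m metres of the given point."""
    sql = (
        f"SELECT * FROM {table} "
        "WHERE ST_DWithin("
        "geom::geography, "
        "ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, "
        "%s)"
    )
    return sql, (lon, lat, radius_m)

sql, params = buffer_search_sql("species_sightings", 103.8, 1.35, 500)
```

`ST_DWithin` over `geography` values keeps the radius in metres; the multi-point and polygon buffers mentioned above would substitute a different geometry constructor for `ST_MakePoint`.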
Quality of Product
|Project Management||1) Minutes||Client, Sponsor, Supervisor & Internal|
|Analysis||Use Case & Description||Use Case Materials|
|Design||1) System Architecture 2) UI Design Changes 3) Database Schema||Media:DataSchema.pdf of NParks_DB|
|Testing||UAT||Test Plan & UAT Details|
|Media:User Guide.pdf for National Parks, Deployment Guide for KOOPrime||On Test Server|
We've built RIBA to be dynamic not just in usage but also in deployment and ease of maintenance. For example, clients requested icon customisation for different data layers; instead of asking them to add code, all the administrator has to do is drop the icons into the icon folder.
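A minimal sketch of this dynamic icon list idea follows. RIBA's real implementation is in Flex/PHP and its folder layout is not documented here, so the folder name and file extensions below are assumptions: the application simply lists whatever image files are present in the symbol folder at load time.

```python
# Sketch of the "drop icons into a folder" idea: the icon list is
# built by scanning the symbol folder at runtime, so an administrator
# adds a symbol just by adding a file. The extensions and folder
# layout are illustrative, not RIBA's actual setup.

from pathlib import Path

ICON_EXTENSIONS = {".png", ".gif", ".jpg"}

def available_icons(symbol_folder):
    """Return the sorted icon file names found in the symbol folder."""
    folder = Path(symbol_folder)
    return sorted(p.name for p in folder.iterdir()
                  if p.suffix.lower() in ICON_EXTENSIONS)
```

Because the list is rebuilt from the folder contents on each load, no code change or redeployment is needed to add or remove a symbol.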
On the coding side, external libraries are clearly separated from our code, which is modularized into different classes. The same applies to our PHP, down to seemingly trivial details such as a configuration file, so that a password change only needs to be made once. All the methods we built are commented in detail.
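The configuration-file point can be sketched like this. It is a guess at the pattern rather than RIBA's actual PHP config, and the file section and key names are invented: credentials live in one place, and every database access reads from it, so a password change is a one-line edit.

```python
# Illustrative single-source-of-truth configuration, mirroring the
# PHP configuration-file idea described above. Section and key names
# are invented for this sketch.

import configparser

CONFIG_TEXT = """
[database]
host = localhost
name = nparks_db
user = riba
password = change-me-once
"""

def load_db_config(text=CONFIG_TEXT):
    """Parse the config once; all connection code reads from here."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return dict(parser["database"])

settings = load_db_config()
```

Any code that connects to the database calls `load_db_config()` instead of hard-coding credentials, so rotating the password touches one file.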
We've also designed the database architecture, which our administrators must use for RIBA to function and to maintain the geospatial layers.
Deployment was done on a test server instead of the client server. We tried our best to get RIBA deployed on our client’s server, but they could not confirm this possibility by week 11. However, as RIBA is a Flash-enabled application, a successful deployment to the test server means it is also possible on the client server.
For this FYP, GoodMix tried to model the project so that it is as dynamic and as close to the eventual live project as possible. During the demonstration at the upcoming presentation, we will remotely access a URL containing the IP address of the test server. A step-by-step deployment guide was created for our client to build, deploy and maintain RIBA, and it comes together with an installation pack containing all the code and installers.
Navigate to this page for deployment materials: Deployment Details
Bug reporting improved significantly after midterm feedback that we might be under-reporting bugs. Having become more alert, members took turns debugging to test and report RIBA bugs to the bug tracking document immediately.
On top of bug tracking, we also conducted UAT. We passed every section of the UAT except one from an end user on the spatial error handling: he felt the error messages were not helpful enough for users. Our team has rectified this and sent him the end product for approval. Once he approves, we will achieve a 100% UAT pass :)
Benjamin Gan Reflection
As project manager, in terms of schedule, I am lucky to have our developers helping out in managing the deliverables, particularly on the project development side. On top of this, the FYP, at about 6 months, is probably the longest project SMU undergraduates take on, so at times the team got tired or less motivated. However, I truly experienced this team’s spirit of “keep moving on”, which enabled us to deliver what we set out to. To me, this was a journey of stamina, management and motivation.
Initially, only Jess and George in our team had been exposed to Flex coding, while the rest (Bernard, Naresh and Shazlee) had no background knowledge of it. The initial learning curve was therefore extremely steep for me, but with the help of reference books and examples from the web, I slowly came to terms with Flex coding. For the rest of the project’s coding, it was basically a matter of reading what each API can do and applying programming logic to it. Besides Flex coding, I also had to understand PHP programming and how to use the functions of our PostgreSQL database in order to achieve some of the spatial functions (spatial query, area calculation, etc.) in our project. All in all, through this project I have been exposed to several programming languages and to the interaction between each layer of our architecture.
After a grueling 6 months, the FYP is finally coming to an end. It has been a long and tough journey, especially having to juggle other modules, external classes and work. Personally, I feel that the main challenge in this project was the new programming language, which made it hard to search the Web for information. That aside, handling the client’s requests proved to be a challenge as well; with the constant stream of changes requested, it was hard to keep up. Something I loved about this project was coming up with some of the algorithms to make it work. I felt a sense of completion seeing parts of the project work, such as the randomization of marker colors, which unfortunately was removed after one of the usability sessions with the end user.
I personally think that IS480 is a good educational channel for me to experience a real project life cycle, from gathering requirements through project management to completion. In an educational environment, I had the chance to learn from my mistakes and correct them, so that I am better prepared when I enter the business world. Apart from the technical aspects of our application, my takeaways from this project are mostly the soft skills I learnt: managing stakeholders’ opinions, addressing internal conflicts, and conveying our ideas more effectively. I have also learnt how to advise stakeholders by presenting the different outcomes of various implementations together with our recommendations, how to balance tough decisions, and how to strive for a win-win situation. I would like to thank our project sponsor and supervisor for giving me keen insights into project development and management.
Disclaimer: All images and content on this page are done by Good Mix and should not be published without their permission.