IS480 Team wiki: 2016T1 IPMAN Internal Testing

Revision as of 13:25, 11 August 2016 by Kester.yeo.2014 (talk | contribs)



Define Objectives and Scope

In every sprint, we test every functionality that has been implemented and completed to date. The objective of testing in each sprint is to identify bugs and irregularities in the application that the developers did not discover during the development phase. To ensure consistent performance across platforms, we test the application on the following operating systems and browsers:

  • Operating Systems
    • Windows 10
    • Mac OS X El Capitan (10.11.3)
  • Browsers
    • Google Chrome (Version 52.0.2743.116 (64-bit))
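The operating systems and browsers above form a small compatibility matrix. A minimal sketch of enumerating every combination a tester must cover (the names below are illustrative shorthand, not the team's actual test-plan identifiers):

```python
from itertools import product

# Platforms under test, taken from the list above (the Chrome version
# string is abbreviated here for readability).
operating_systems = ["Windows 10", "Mac OS X El Capitan (10.11.3)"]
browsers = ["Google Chrome 52"]

# Enumerate every OS/browser pairing that a full test pass must cover.
test_matrix = list(product(operating_systems, browsers))

for os_name, browser in test_matrix:
    print(f"{browser} on {os_name}")
```

Adding a new browser or OS to either list automatically extends the matrix, which keeps the coverage checklist in step with the platforms listed above.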

Testing Approach

The team takes pride in testing and quality assurance, so we engage in comprehensive manual testing in every iteration to ensure that the functionalities implemented in the application work as intended. The developers conduct individual testing before committing their code to our shared repository on GitHub. We test the application manually at this level because tests can be specially adjusted to cater to changes in the application, on both the front end and the back end. Furthermore, manual testing brings in the human factor, allowing us to better discover problems that might surface during real usage due to natural human behaviour.


Once the developers have fixed any bugs found, the corrected code is integrated with the other functionalities. The integrated code is then deployed to the staging server, where the lead quality assurance runs a final check against the set of test cases created earlier. This helps to ensure that the deployed application works with no major incidents.
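The final check above amounts to walking the test-case list and flagging any failures before the deployment is signed off. A minimal sketch of that bookkeeping (the test-case names and results are hypothetical; in practice the lead QA records outcomes by hand):

```python
# Hypothetical outcomes of a manual final check on the staging build:
# each test case maps to whether it passed.
test_cases = {
    "User can log in": True,
    "User can upload a document": True,
    "Dashboard renders summary": False,  # example of a failed case
}

# Collect every test case that did not pass.
failures = [name for name, passed in test_cases.items() if not passed]

if failures:
    print(f"{len(failures)} test case(s) failed:")
    for name in failures:
        print(f"  - {name}")
else:
    print("All test cases passed; deployment verified.")
```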


The team's lead quality assurance then performs regression testing on the staging server, running tests against all test cases created thus far. This helps to ensure that existing functionalities in the application are not affected by the integration. Once bugs have been identified, the lead quality assurance updates the bug-tracking Excel sheet and notifies the relevant developers of the issues and their corresponding priority levels.
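Each entry in the bug-tracking sheet pairs an issue with an assignee and a priority, and developers are notified starting from the highest-priority items. A sketch of that record shape (the field names, priority scale, and sample bugs are assumptions, not the team's actual spreadsheet columns):

```python
from dataclasses import dataclass

# Assumed shape of one row in the bug-tracking sheet.
@dataclass
class BugReport:
    summary: str
    assignee: str
    priority: int  # 1 = highest priority

# Hypothetical entries logged after a regression pass.
bugs = [
    BugReport("Upload fails for files over 10 MB", "dev_a", 2),
    BugReport("Login page crashes on submit", "dev_b", 1),
]

# Notify developers starting from the highest-priority issue.
for bug in sorted(bugs, key=lambda b: b.priority):
    print(f"[P{bug.priority}] {bug.assignee}: {bug.summary}")
```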



The team’s list of test cases can be found on our private repository here.

Testing Schedule

As the Scrum methodology advocates an agile approach, approximately 2-3 days in each sprint are allocated for comprehensive testing. This allows integration and testing of the application to occur simultaneously, enabling a more efficient and time-effective testing process.

