Software testing is an essential technique for assessing the quality of a software product or service, and test cases and scenarios play a pivotal role in it. A good design strategy and technique improve the quality of the testing process, which in turn improves the quality of the product or service and ensures its effectiveness. Software testing is the process of analyzing a software item to uncover the differences between its existing and required behavior (bugs). Testing helps to evaluate the features of the software and to ensure it is free of bugs. It is an activity carried out in coordination with the development cycle and before deployment.
This paper provides information about test case design activities, test analysis, quality risks, testing techniques, and the phases of test development. It also explains the factors that need to be considered when choosing the right testing technique, and provides a checklist of test cases based on our rich experience of testing mobile apps.
Test design is an art with interesting techniques associated with it, which make it effective and useful for testing. Many people, when creating a test plan or writing test cases, struggle to determine what should and should not be tested in a given cycle or project. When listing the parameters that will not be tested, make sure that any requirement that is not testable by the testers is explicitly recorded there.
Once test planning is completed, the test analysis and design phase of the software testing life cycle needs to be carried out. As the first step, you need to review the test basis. The test basis comprises all the requirements and design specifications, such as the network architecture and system architecture. All documents that help in testing must be reviewed when performing test analysis. Based on this review, evaluate the testability of the test basis and test objects. Identify and prioritize test conditions, test requirements, or test objectives and the required test data, based on an analysis of the test items. Once the test conditions have been ascertained, you can start prioritizing high-level test cases.
Writing high-level test cases is a kind of pseudo test case design: the test cases do not contain any concrete test data. For example, a high-level test case for a login screen states that the user navigates to the login screen and enters a valid username and password, without specifying the actual username and password values. In the analysis and design phase, you focus on high-level test cases, which are logical test cases. In this phase, you also have to set up the test environment, which depends on the application domain (healthcare, social, banking, etc.). Another important aspect is identifying the infrastructure and testing tools. In the case of mobile application testing, you need to select the device models, OS versions, and screen resolutions. Once you have identified the infrastructure and tools, create bi-directional traceability between test cases and the test basis.
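As an illustration, a high-level (logical) test case can be sketched as a structure with steps and an expected result but no bound data; all identifiers and values below are hypothetical:

```python
# Sketch of a high-level (logical) test case: steps and an expected
# result, but no concrete test data bound yet. Names are hypothetical.
high_level_case = {
    "id": "TC-LOGIN-001",
    "steps": [
        "Navigate to the login screen",
        "Enter a valid username",
        "Enter a valid password",
        "Submit the form",
    ],
    "expected": "User lands on the home screen",
    "test_data": None,  # deliberately unbound at the design stage
}

# During low-level test design, the same case is bound to concrete data:
low_level_case = {**high_level_case,
                  "test_data": {"username": "jane.doe",
                                "password": "S3cret!"}}
```

The same logical case can thus be reused with many different data bindings during low-level design.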
Test development takes place in the following phases:
Once the quality risk analysis is completed, you need to work on a high-level test design and eventually move on to a low-level test design. The inputs to risk analysis can be the functional and system specifications; its outputs are a high-level test plan and a risk analysis document. All the outputs from risk analysis serve as inputs to the high-level test design. The outputs of high-level test design are the high-level test design document and the logical test cases; these in turn serve as inputs to the low-level test design. Low-level test design is characterized by many concrete test cases, test suites, and test design documents.
Quality risk analysis. In quality risk analysis, identify all risks in the system using risk analysis techniques, such as informal risk analysis across categories like functionality, data quality, error handling and recovery, performance, localization, and usability. All risks must be recorded and covered in the high-level and low-level test designs. You can also use formal risk analysis techniques such as ISO/IEC 9126, which defines six major categories of system quality and their sub-characteristics.
High-level test design. In high-level test design, create test suites for all the quality risk categories identified during risk analysis, making sure to cover every risk category with at least one test suite. You must be able to trace every risk to a test suite.
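A lightweight way to enforce the rule that every risk traces to at least one suite is a simple traceability map; the risk categories and suite names below are illustrative assumptions, not prescribed names:

```python
# Hypothetical traceability matrix: risk category -> covering test suites.
risk_to_suites = {
    "functionality":  ["smoke_suite", "regression_suite"],
    "performance":    ["load_suite"],
    "error_handling": ["negative_suite"],
    "usability":      [],  # not yet covered - flagged below
}

# Every risk category must trace to at least one suite; report the gaps.
uncovered = [risk for risk, suites in risk_to_suites.items() if not suites]
print("Risks without a test suite:", uncovered)
```

Running a check like this at each milestone keeps the risk analysis and the test suites aligned as both evolve.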
Risk is something that can result in undesirable consequences. The level of risk is determined by its likelihood and impact. With experience, some risks can be mitigated, but not all. A quality or product risk is the possibility that the system will fail to satisfy the customers, users, or other stakeholders. Behind every quality risk lies a set of possible bugs.
Risk-based testing identifies and assesses risks, uses them to guide the test process, and thereby reduces quality risk throughout the project. More knowledge about risk helps to answer key testing questions. Ideally, risk-based testing is part of a larger risk management approach. Once you identify a risk, assign it a level of risk; separate risk items only when necessary to distinguish between different levels of risk. You need to think about both technical and business risk: the impact of technical risk on the system, and of business risk on the users. Testing, follow-up, and re-alignment of the risk analysis are mandatory at key project milestones.
There are three main types of test design techniques.
The specification-based technique is also called the black-box technique. In this technique, you create tests primarily by analyzing the test basis, tracing bugs in order to learn how the system behaves. One of the basic specification-based techniques is equivalence partitioning. In equivalence partitioning, you divide the inputs, outputs, behaviors, and environments into classes, and define at least one test case in each partition, or use boundary values in partitions that are ranges. Tests can be designed to cover all valid and invalid partitions.
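As a sketch of the idea, consider a hypothetical age validator (the function and partition boundaries below are assumptions for illustration). Equivalence partitioning picks one representative value from each class, on the premise that every member of a class is handled by the same code path:

```python
def classify_age(age):
    """Hypothetical function under test: classifies an age input."""
    if not isinstance(age, int) or age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    if age <= 65:
        return "adult"
    return "senior"

# One representative value per equivalence partition - testing more
# values from the same class is assumed to reveal no additional bugs.
representatives = {
    "invalid": -5,   # invalid partition: negative input
    "minor":   10,   # valid partition: 0-17
    "adult":   30,   # valid partition: 18-65
    "senior":  80,   # valid partition: 66 and above
}

for expected, value in representatives.items():
    assert classify_age(value) == expected, (value, expected)
```

Four test cases thus cover all four partitions, including the invalid one.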
The second specification-based technique is boundary value analysis, a refinement of equivalence partitioning that selects the edges or end-points of each partition for testing. Equivalence partitioning looks for bugs in the code that handles each equivalence class; boundary values are members of equivalence classes that additionally expose bugs in the definition of the edges. The boundary value technique can be applied whenever the elements of an equivalence partition are ordered. In non-functional testing, you can use non-functional boundaries.
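Continuing with a hypothetical age validator that treats 18–65 as the "adult" range (an assumed rule, used only for illustration), boundary value analysis exercises the edges of each partition and their immediate neighbours:

```python
def classify_age(age):
    """Hypothetical validator: 0-17 minor, 18-65 adult, 66+ senior."""
    if not isinstance(age, int) or age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    if age <= 65:
        return "adult"
    return "senior"

# Boundary value analysis: edges of each ordered partition plus the
# values just outside them, where off-by-one defects typically hide.
boundary_cases = [
    (-1, "invalid"),  # just below the valid range
    (0,  "minor"),    # lowest valid value
    (17, "minor"),    # just below the lower "adult" boundary
    (18, "adult"),    # lower "adult" boundary
    (65, "adult"),    # upper "adult" boundary
    (66, "senior"),   # just above the upper boundary
]

for value, expected in boundary_cases:
    assert classify_age(value) == expected, (value, expected)
```

A `<=` accidentally written as `<` at either edge would be caught by these cases but missed by mid-range representatives.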
Use case testing is typically used when you are about to enter the UAT stage, at the end of system testing. In this testing, you use the use cases or business scenarios for end-to-end system testing. A use case has preconditions and postconditions that must be met, and includes main flows, alternative flows, and sometimes exceptional flows. Use case testing uncovers defects in process flows during real-world use of the system.
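A use-case test can be framed directly as precondition, main flow, alternative flow, and postcondition checks; the login model below is a hypothetical sketch, not a real system:

```python
class Session:
    """Hypothetical system under test for a login use case."""
    VALID = {"jane.doe": "S3cret!"}  # assumed credential store

    def __init__(self):
        self.logged_in = False

    def login(self, username, password):
        self.logged_in = self.VALID.get(username) == password
        return self.logged_in

session = Session()

# Precondition: the user starts out logged off.
assert not session.logged_in

# Main flow: valid credentials log the user in.
assert session.login("jane.doe", "S3cret!")

# Postcondition: the session reflects the logged-in state.
assert session.logged_in

# Alternative flow: a wrong password leaves the user logged off.
other = Session()
assert not other.login("jane.doe", "wrong")
assert not other.logged_in
```

Each flow of the use case (main, alternative, exceptional) becomes one such scripted path through the system.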
The structure-based technique is also called the white-box technique. Its key concepts include code coverage, statement and decision coverage, and control-flow test design. Structure-based tests are derived from how the system works internally: they determine and achieve a level of coverage of control flows based on code analysis, of data flows based on code and data analysis, and of interfaces, classes, call flows, and the like, based on APIs, system design, etc.
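The difference between statement and decision coverage can be seen on a small hypothetical function with two independent conditions (the pricing rule below is invented for illustration):

```python
def compute_discount(is_member, total):
    """Hypothetical pricing rule with two decisions."""
    discount = 0
    if is_member:        # decision 1
        discount += 5
    if total > 100:      # decision 2
        discount += 10
    return discount

# A single test (True, 150) executes every statement - 100% statement
# coverage - but exercises only the True outcome of each decision.
# Decision coverage additionally requires the False outcomes:
decision_tests = [(True, 150), (False, 50)]

assert compute_discount(True, 150) == 15   # both decisions True
assert compute_discount(False, 50) == 0    # both decisions False
```

Two tests here achieve full decision coverage, whereas one test would already suffice for full statement coverage; this is why decision coverage is the stronger criterion.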
Different levels of code coverage include:
The experience-based technique relies on the tester's skill, perception, and experience with similar applications and technologies. In this technique, tests are often created during test execution; that is, the test strategy is dynamic. Examples include error guessing, bug hunting, breaking applications based on checklists, and exploratory testing. Accumulated testing experience deepens the tester's understanding of scenarios, which in turn improves experience-based testing across different applications.
Choosing the right test technique depends on the following factors:
Creating an exhaustive checklist of test cases covering all possible scenarios, and tracking it throughout the project life cycle, helps you achieve optimal software quality. Here is a sample checklist of test cases that could be considered while testing a mobile app.
The process of test design is a high priority. A poorly designed test will lead to improper testing of an application and thereby yield wrong and misleading results. This, in turn, will lead to a failure to identify defects, and as a consequence an application containing errors may be released.
There are various design techniques, and the challenge lies in selecting the right set of test design techniques for a particular application. The different testing techniques each have their own unique benefits. Any particular technique should be adopted only after careful deliberation, with maximum emphasis on the type of application under test.