Tuesday, December 16, 2008

Types of testing

What are the different types of testing that one normally comes across? If there are others besides these, please add them in the comments.

• Black box testing - This is a testing method that is not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
• White box testing - This is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions. Since the tests are derived from the code itself, this is typically handled by someone who has knowledge of coding.
Black box and white box testing are the two most well-known types of testing.
In addition, testing is carried out at different stages, such as unit, integration and system testing.
• Unit testing - the most 'micro' scale of testing; it tests particular functions or code modules. This testing happens at the earliest stage, and can be done either by the programmer or by testers (later stages of testing are typically not done by programmers). It is not always easily done unless the application has a well-designed architecture with tight code, and may require developing test driver modules or test harnesses. It can also be used to denote something as basic as testing each field to see whether the field-level validations are okay (a small sketch appears after this list).
• Incremental integration testing - continuous testing of an application as and when new functionality is added to it. This requires that various aspects of the application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; this testing is done by programmers or by testers.
• Integration testing - This form of testing implies the testing of the combined parts of an application to determine if they function together correctly. When we say combined parts, this can mean code modules, individual applications, client and server applications on a network, etc. Integration testing can reveal whether parts that seem to be well built by themselves work properly when they are all fitted together. Integration testing should be done by testers.
• Functional testing - Black-box type testing geared to the functional requirements of an application; functional testing should be done by testers. It is geared to validating the workflows in the project.
• System testing - System testing is a black-box type of testing that is based on the overall requirements specifications; the testing covers all combined parts of a system and is meant to validate the marketing requirements for the project.
• End-to-end testing - As the name suggests, end-to-end testing is similar to system testing. It operates at the 'macro' end of the test scale, at the big-picture level; end-to-end testing involves testing the complete application environment in a situation that simulates actual real-world use (for example, interacting with a database, using network communications, or interacting with other dependencies in the system such as hardware, applications, or other systems where appropriate).
• Sanity testing - Sanity testing, as it sounds, is typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. This sort of testing can also happen on a regular basis to ensure that regular builds are worth testing. For example, if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. Sanity testing is not supposed to be comprehensive.
• Regression testing - Regression testing plays an important part in the bug life cycle. It involves re-testing after fixes or modifications to the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle, but one should not try to minimise the need for regression testing. Automated testing tools can be especially useful for this type of testing.
• Acceptance testing - Acceptance testing, as the name suggests, is the final testing based on the specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. This type of testing can make or break whether a project is accepted.
• Load testing - Again, as the name suggests, load testing means testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails. It is meant to ensure that a system under heavy load does not suddenly collapse, and it can help in infrastructure planning (a rough sketch appears after this list).
• Stress testing - Stress testing is a term often used interchangeably with 'load' and 'performance' testing. Stress testing is typically used to describe conducting tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
• Performance testing - Performance testing is a term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. Performance testing is also used to determine the time periods involved for certain operations to take place, such as launching of the application, opening of files, etc.
• Usability testing - Usability testing is becoming more critical as the focus on usability increases. It means testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. This is ideally done with the involvement of usability specialists.
• Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. Given that the installer is the first thing users see, how the installer works and whether its steps are clear to users are some of the things measured through installer testing. In addition, the install / uninstall / repair processes should work smoothly.
• Recovery testing - One does not like to anticipate such problems, but given that crashes or other failures can occur, recovery testing measures how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• Security testing - Security testing is becoming more important, with the increase in hacking and the security measures needed to prevent data loss. It determines how well the system protects against unauthorized internal or external access, willful damage, etc., and may require sophisticated testing techniques.
• Compatibility testing - Compatibility testing determines how well the software performs in a particular hardware/software/operating system/network/etc environment.
• Exploratory testing - This type of testing is often employed where a creative, informal software test is needed that is not based on formal test plans or test cases; testers may be learning the software as they test it. It is common when the software being developed is of a new type.
• Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
• User acceptance testing - determining if software is satisfactory to an end-user or customer. Similar to the acceptance test described above.
• Comparison testing - Comparison testing means comparing the software's weaknesses and strengths with those of competing products; it is very important for evaluating your market and for determining which features you need to develop.
• Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
• Beta testing - Also called pre-release testing, this is testing done when development and testing are essentially complete and the final bugs and problems need to be found before release. Typically done by end-users or others, not by programmers or testers. The advantage is that you can test with real users, as well as get verification of software compatibility on a wide range of devices.
• Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources (a toy illustration follows the list).
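To make the unit-testing entry above a little more concrete, here is a minimal sketch using Python's built-in unittest module. The validate_email() function and its rule are hypothetical stand-ins for a field-level validation, not taken from any specific project.

```python
import re
import unittest

def validate_email(value):
    """Return True if the value looks like a plausible email address (illustrative rule only)."""
    return bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", value))

class TestEmailFieldValidation(unittest.TestCase):
    # Each test exercises one small, isolated behaviour of the unit under test.
    def test_accepts_well_formed_address(self):
        self.assertTrue(validate_email("user@example.com"))

    def test_rejects_missing_domain(self):
        self.assertFalse(validate_email("user@"))

    def test_rejects_missing_at_sign(self):
        self.assertFalse(validate_email("user.example.com"))

if __name__ == "__main__":
    unittest.main()
```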

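For load testing, a hand-rolled script can give a rough idea of what gets measured; the URL and the user/request counts below are assumptions, and real projects would normally reach for a dedicated tool such as JMeter or Locust rather than a script like this.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"  # assumed target; replace with the system under test

def timed_request(_):
    # Time one request end to end; the target must be running or this will raise.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def run_load(concurrent_users=20, requests_per_user=10):
    # Fire many requests in parallel and summarise how response time behaves under load.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(timed_request, range(concurrent_users * requests_per_user)))
    durations.sort()
    print(f"requests: {len(durations)}")
    print(f"median response time: {durations[len(durations) // 2]:.3f}s")
    print(f"95th percentile:      {durations[int(len(durations) * 0.95)]:.3f}s")

if __name__ == "__main__":
    run_load()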

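Finally, the idea behind mutation testing can be shown with a toy example: deliberately change the code, re-run the same checks, and see whether they fail. Real mutation testing is performed by tools (for example mutmut for Python or PIT for Java); everything below is hypothetical and hand-rolled purely for illustration.

```python
def is_adult(age):
    return age >= 18          # original code under test

def is_adult_mutant(age):
    return age > 18           # deliberately introduced 'bug' (>= changed to >)

def run_tests(func):
    """Return True if every check passes for the given implementation."""
    checks = [
        (17, False),
        (18, True),   # the boundary case is what 'kills' this mutant
        (30, True),
    ]
    return all(func(age) == expected for age, expected in checks)

if __name__ == "__main__":
    assert run_tests(is_adult), "tests should pass on the original code"
    if run_tests(is_adult_mutant):
        print("Mutant survived: the test data is too weak to detect this change.")
    else:
        print("Mutant killed: the test data detects the injected bug.")
```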