

Thursday, September 30, 2010

Verification Strategies - Reviews - Technical Reviews and Requirement Review

Technical reviews confirm that the product conforms to its specifications; adheres to regulations, standards, guidelines, and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification.
The main objectives of technical reviews are as follows:
- Ensure that the software conforms to the organization's standards.
- Ensure that any changes in the development procedures are implemented according to the organization's pre-defined standards.

In technical reviews, the following software products are reviewed:
- Software requirements specification.
- Software design description.
- Software test documentation.
- Software user documentation.
- Installation procedure.
- Release notes.
The participants of the review play the roles of decision-maker, review leader, recorder, and technical staff.

Requirement Review : A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include the system requirements review and the software requirements review. Product management leads the requirement review, and members from every affected department participate in it.

Input Criteria: The software requirements specification is the essential document for the review. A checklist can be used to guide the review.
Exit Criteria: The filled-in, completed checklist with the reviewers' comments and suggestions, plus re-verification that these have been incorporated into the documents.
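As a concrete sketch of these exit criteria, the checklist state can be tracked programmatically. This is purely illustrative; the item texts and field names are assumptions, not part of any review standard:

```python
# Minimal sketch of a requirement-review checklist (illustrative names).
# Each item records the reviewer's verdict and any comment; exit criteria
# are met only when every item has a verdict and every failure has been
# re-verified after rework.

def review_complete(checklist):
    """Return True when every checklist item has been answered and,
    if it failed, has been re-verified after the fix was incorporated."""
    for item in checklist:
        if item["verdict"] is None:
            return False
        if item["verdict"] == "fail" and not item.get("reverified", False):
            return False
    return True

checklist = [
    {"question": "Are all inputs and outputs specified?", "verdict": "pass"},
    {"question": "Are error conditions defined?", "verdict": "fail",
     "comment": "Timeout handling missing", "reverified": True},
]
print(review_complete(checklist))  # True: all items answered, failure re-verified
```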


Wednesday, September 29, 2010

What is Verification and different strategies of Verification - Reviews and Management Reviews

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Verification process helps in detecting defects early and preventing their leakage downstream. Thus, the higher cost of later detection and re-work is eliminated.

The different strategies in Verification are:

Review


A process or meeting during which a work product or set of work products is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. The main goal of reviews is to find defects; reviews are a good complement to testing and help assure quality. One purpose of SQA reviews is to assure the quality of deliverables before the project moves to the next stage. Once a deliverable has been reviewed, revised as required, and approved, it can be used as a basis for the next stage in the life cycle.
Types of reviews include management reviews, technical reviews, inspections, walkthroughs, and audits.

- Management Reviews :
These reviews are performed by those directly responsible for the system in order to monitor progress, determine the status of plans and schedules, and confirm requirements and their system allocation. The main objectives of management reviews are:
* Validate from a management perspective that the project is making progress according to the project plan.
* Ensure that the deliverables are ready for management attention.
* Resolve issues that require management attention.
* Identify any project bottlenecks.
* Keep the project under control.
Decisions made during such reviews include corrective actions, i.e. changes in the allocation of resources or changes to the scope of the project.
In management reviews, audit reports, contingency plans, installation plans, risk management plans, and software quality assurance plans are reviewed.
The participants of the review play the roles of decision-maker, review leader, recorder, management staff, and technical staff.


Tuesday, September 28, 2010

Different testing activities in Programming/Construction and Operations and Maintenance Phase

The main testing points in this phase are:
- Check the code for consistency with design
The areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

- Perform the testing process in an organized and systematic manner with test runs dated, annotated and saved.
A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes are made to the program, all tests involving the erroneous segment must be re-run and recorded.

- Ask colleagues for assistance
Some independent party, other than the programmer of the specific part of the code should analyze the development product at each phase. The programmer should explain the product to the party who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

- Use available tools
The programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

- Apply stress to the program
Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.

- Test one at a time
Pieces of code, individual modules, and small collections of modules should be exercised separately before they are integrated into the total program one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation (the insertion of code into the program solely to measure various program characteristics) can be useful here.
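Instrumentation can be as simple as counters inserted alongside the program logic. A minimal, illustrative sketch:

```python
# Sketch of instrumentation: code inserted solely to measure a program
# characteristic (here, how often each branch of a function is exercised).

from collections import Counter

branch_hits = Counter()

def classify(n):
    if n < 0:
        branch_hits["negative"] += 1   # instrumentation, not program logic
        return "negative"
    branch_hits["non_negative"] += 1   # instrumentation, not program logic
    return "non-negative"

for value in (-2, -1, 0, 3):
    classify(value)
print(dict(branch_hits))  # {'negative': 2, 'non_negative': 2}
```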

- Measure testing coverage/ When should testing stop?
If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny. The metrics used to measure testing thoroughness include statement testing, branch testing and path testing. Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
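The following sketch hints at why statement testing is the weakest of the three metrics: a single test of `absolute(-5)` executes every statement, yet the path where `n >= 0` is never taken. It uses Python's `sys.settrace` hook purely for illustration:

```python
import sys

def absolute(n):
    result = n
    if n < 0:
        result = -n
    return result

executed = set()   # line offsets within absolute() that were executed

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "absolute":
        executed.add(frame.f_lineno - absolute.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
absolute(-5)        # one test case...
sys.settrace(None)
print(len(executed))  # 4: every statement ran, yet the n >= 0 path is untested
```

Full statement coverage here still misses a branch, which is why branch and path testing are stricter (and costlier) thoroughness metrics.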

Testing Activities in Operations and Maintenance Phase


Corrections, modifications, and extensions are bound to occur even for small programs, and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, test plan, and test results for the original program should exist. They must be modified to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.


Monday, September 27, 2010

Different Testing activities in Design phase

The design document aids programming, communication, error analysis, and test data generation. The requirements statement and the design document together should give the problem and the organization of the solution, i.e. what the program will do and how it will be done.
The design document should contain:
- Principal data structures.
- Functions, algorithms, heuristics or special techniques used for processing.
- The program organization, how it will be modularized and categorized into external and internal interfaces.
- Any additional information.

The testing activities should consist of:
- Analysis of design to check its completeness and consistency
The total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling, and data structures should especially be checked for inconsistencies.

- Analysis of design to check whether it satisfies the requirements
Check whether the requirements and design documents use the same form, format, and units for input and output, and also that all the functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

- Generation of test data based on the design
The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.

- Re-examination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer or developer.


Software Localization - some details in terms of how the process work - Part 4

The previous post in this series (Part 3 of localization testing) talked about when to start the process of internationalizing the software, including getting the list of strings. In this post, I will talk more about the processes involved when you are internationalizing the code, particularly in terms of ensuring maximum efficiency and cost reduction.
Most companies do not have the requisite in-house talent for localizing software, in terms of people who know the various languages into which the software needs to be localized. For many, hiring and retaining people with these skills is a task they would rather outsource, and a number of companies handle such translation and localization work. However, before we get into this, there are more details that need to be explained.
Consider the case when your software needs to be internationalized. What is the process? In the previous posts, we talked about when to start the process and how it is technically done, but not about what happens once the required strings are available for translation.
Well, during the translation process, the following is required:
- Line up resources in each desired language who can translate the various strings and return them for incorporation
- For accuracy, these translations should go through a formal review process: first the strings are generated for translation, then they are translated, and then another language expert reviews the translations
- Once the review is done, these are then incorporated and built into the software
- The software is now ready for internationalization testing, and there are 2 levels of testing required
- The first is functional testing where it is confirmed that the software works fine in the various desired languages. A language expert is not needed; in most cases, the tester takes the English language version and tests the other language keeping the English one in mind (except when the test cases specify a different need in the language version)
- Next, a language expert is needed to verify that the translations appear properly in the various sections of the application, the translations fit the context, and that there are no grammatical mistakes, etc. It is desirable to have a native language speaker do this level of testing.


Sunday, September 26, 2010

Testing activities in Requirements Analysis phase

The following test activities should be performed during this stage :
- Invest in analysis at the beginning of the project.
Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and the test data generation. The requirements statement should record the following information and decisions :
+ Program function: what the program must do.
+ The form, format, data types, and units for input.
+ The form, format, data types, and units for output.
+ How exceptions, errors, and deviations are to be handled.
+ For scientific computations, the numerical method or at least the required accuracy of the solution.
+ The hardware and software environment required or assumed.
Deciding the above issues is one of the activities related to testing that should be performed during this stage.

- Start developing the test set at the requirements analysis phase
Data should be generated which can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner, and for each class a representative element should be included in the test data. In addition, the following should also be included in the data set: boundary values and any non-extreme input values that would require special handling. The output domain should be treated similarly. Invalid input requires the same analysis as valid input.
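The partitioning described above can be sketched in code. This is an illustrative example assuming a made-up requirement (a percentage score valid only in the range 0..100); the function names are not from any specification:

```python
# Sketch of building a test set from the input domain during requirements
# analysis: one representative per equivalence class, plus boundary values
# and invalid inputs just outside the valid range.

def partition_test_data(low, high):
    """Each entry pairs a test value with its expected classification."""
    return [
        ((low + high) // 2, "valid"),              # representative of valid class
        (low, "valid"), (high, "valid"),           # boundary values
        (low - 1, "invalid"), (high + 1, "invalid"),  # just outside the range
    ]

def is_valid_score(n, low=0, high=100):
    return low <= n <= high

for value, expected in partition_test_data(0, 100):
    actual = "valid" if is_valid_score(value) else "invalid"
    assert actual == expected, (value, expected, actual)
print("all representatives behave as required")
```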

- The correctness, consistency and completeness of the requirements should also be analyzed.
Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.


Friday, September 24, 2010

Categories of Heuristics of Software Testing

There is a set of characteristics that lead to testable software.
- Operability: The better it works, the more efficiently it can be tested. The system should have few or no bugs that block the execution of tests, and the product should evolve in functional stages.

- Observability: What we see is what we test. A distinct output should be generated for each input; the current and past system states and variables should be visible during testing; all factors affecting the output should be visible; incorrect output should be easily identified; source code should be easily accessible; and internal errors should be automatically detected and reported.

- Controllability: The better we control the software, the more the testing process can be automated and optimized. Check that all outputs can be generated and code can be executed through some combination of input. Check that the software and hardware states can be controlled directly by the test engineer. Check that inputs and outputs formats are consistent and structured. Check that the test can be conveniently specified, automated and reproduced.

- Decomposability: By controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing. The software system should be built from independent modules which can be tested independently.

- Simplicity: The less there is to test, the more quickly we can test it. The points to consider in this regard are functional, structural, and code simplicity.

- Stability: The fewer the changes, the fewer the disruptions to testing. Changes to the software should be infrequent, controlled, and should not invalidate existing tests. The software should be able to recover well from failures.

- Understandability: The more information we have, the smarter we test. The testers should understand well the design, changes to the design, and the dependencies between internal, external, and shared components.

- Suitability: The more we know about the intended use of the software, the better we can organize our testing to find important bugs.

The above heuristics can be used by a software engineer to develop a software configuration which is convenient to test and verify.


Thursday, September 23, 2010

Two heuristics of Software testing : Visibility and Control

Software testability is how easily, completely and conveniently a computer program can be tested. Software engineers design a computer product, system or program keeping in mind the product testability. Good programmers are willing to do things that will help the testing process and a checklist of possible design points, features and so on can be useful in negotiating with them.
Visibility has already been discussed.

Control refers to our ability to provide inputs and reach states in the software under test. The features to improve controllability are:
- Test Points: These allow data to be inspected, inserted, or modified at points in the software. They are especially useful for data-flow applications. In addition, a pipes-and-filters architecture provides many opportunities for test points.

- Custom User Interface Controls: Custom UI controls often raise serious testability problems with GUI test drivers. Ensuring testability usually requires adding methods to report necessary information, customizing test tools to make use of these methods, getting a tool expert to advise developers on testability and build the required support, and asking third-party control vendors about support by test tools.

- Test Interfaces: Interfaces may be provided specifically for testing (e.g. Excel and Xconq). Existing interfaces may be able to support significant testing (e.g. InstallShield, AutoCAD, Tivoli).

- Fault Injection: Error seeding, i.e. instrumenting low-level input/output code to simulate errors, makes it much easier to test error handling. It can be applied at both the system and the application level.
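A small sketch of application-level fault injection: low-level I/O is wrapped so it can be forced to fail on demand, making the error-handling path testable. The class and function names here are illustrative assumptions:

```python
import io

# Sketch of fault injection: wrap a file-like object so reads can be
# forced to fail, exercising error-handling code deterministically.

class FaultyReader:
    """Wraps a file-like object; raises IOError after `fail_after` reads."""
    def __init__(self, inner, fail_after):
        self.inner = inner
        self.remaining = fail_after

    def read(self, size=-1):
        if self.remaining == 0:
            raise IOError("injected fault: simulated disk error")
        self.remaining -= 1
        return self.inner.read(size)

def load_config(reader):
    try:
        return reader.read()
    except IOError:
        return "<defaults>"   # the error path we want to exercise

ok = load_config(FaultyReader(io.StringIO("key=value"), fail_after=1))
bad = load_config(FaultyReader(io.StringIO("key=value"), fail_after=0))
print(ok, bad)  # key=value <defaults>
```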

- Installation and Setup: Testers should be notified when installation has completed successfully. They should be able to verify the installation, programmatically create sample records, and run multiple clients, daemons, or servers on a single machine.


Wednesday, September 22, 2010

Two heuristics of Software testing : Visibility and Control

Software testability is how easily, completely and conveniently a computer program can be tested. Software engineers design a computer product, system or program keeping in mind the product testability. Good programmers are willing to do things that will help the testing process and a checklist of possible design points, features and so on can be useful in negotiating with them.
Visibility will be discussed in this section.

The two main heuristics of software testing are :
Visibility:
Visibility is our ability to observe the states and outputs of the software under test. The features to improve the visibility are :
- Access to code: Developers must provide testers full access (source code, infrastructure, etc.). The code, change records, and design documents should be provided to the testing team, which should read and understand the code.
- Event Logging: The events to log include user events, system milestones, error handling and complete transactions. The logs may be stored in files, ring buffers in memory and/or serial ports.
- Error detection mechanisms: Data integrity checking and system level error detection are useful error detection mechanisms. In addition, assertions and probes with the following features are really helpful:
+ Code is added to detect internal errors.
+ Assertions abort on error.
+ Probes log errors.
+ Design by contract theory: this requires assertions to be defined for functions. Preconditions apply to inputs, and violations implicate calling functions; post-conditions apply to outputs, and violations implicate called functions.
- Resource Monitoring: Memory usage should be monitored to find memory leaks. States of running methods, threads or processes should be watched. In addition, the configuration values should be dumped. Resource monitoring is of particular concern in applications where the load on the application in real time is estimated to be considerable.
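The design-by-contract idea above can be sketched with plain assertions. This is an illustrative example (Newton's method is chosen only as a convenient function to guard): a precondition failure implicates the caller, a postcondition failure implicates the function itself.

```python
# Sketch of design-by-contract assertions around an illustrative function.

def sqrt_newton(x):
    # Precondition: a violation here implicates the CALLER.
    assert x >= 0, "precondition violated: caller passed a negative value"
    guess = x or 1.0
    for _ in range(50):          # Newton iteration for the square root
        guess = (guess + x / guess) / 2
    # Postcondition: a violation here implicates the CALLED function.
    assert abs(guess * guess - x) < 1e-6, "postcondition violated: bad result"
    return guess

print(round(sqrt_newton(9.0), 6))  # 3.0
```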


Tuesday, September 21, 2010

Software Localization - some details in terms of how the process work - Part 3

In the previous post (Part 2 of software localization), I was describing how the process of localization works, as well as the schedule in which it needs to be done. In this post, I will provide more details of what needs to be done.
In this post, I will go back to the beginning of a cycle, where the planning for a product development cycle needs to be done. Consider a product cycle for a product release that spans a period of 2 years. When the product cycle is planned, two of the key milestones in the schedule deal with the following 2 points:
1. The point in the schedule where the UI components in the application are frozen and all strings finalized
2. The release dates for the various language versions. There are 2 possible options, one where all the language versions of the product are released at the same time, and the other where the English and a couple of the more important languages are released earlier, and the other languages are released later.
The critical part of this planning is the time period required between these 2 milestones, since as said earlier, the actual work of generating the various locale translations, the testing, all this needs to happen in this time period. Make this too aggressive, and you will find the team stretching to meet the timeframes for the localization process.


Types of Software Systems : Database Management Systems, Data Acquisition Systems, Data Presentation, Decision and Planning Systems, Pattern and Image Processing Systems

- Database Management Systems:
The database management system handles the management of databases. It is basically a collection of programs that enable the storage, modification, and extraction of information from a database. DBMSs range from small systems that run on PCs to mainframe systems.
- Data Acquisition:
Data acquisition systems take in real-time data and store it for future use. A simple example is ATC (Air Traffic Control) software, which takes in real-time data on the position and speed of a flight and stores it in compressed form for later use.
- Data Presentation:
Data presentation software stores data and displays it to the user when required. An example is a content management system: a website is available in English as well as in other languages, the user selects the language he wishes to see, and the system displays the same website in the chosen language.
- Decision and Planning Systems:
These systems use artificial intelligence techniques to provide decision making solutions to the user.
- Pattern and Image Processing Systems:
These systems are used for scanning, storing, modifying, and displaying graphic images. Their use is increasing as research is conducted in visual modeling and as they enter our daily lives. Such systems are used for security purposes, for example checking a photograph or the thumb impression of a visitor.


Monday, September 20, 2010

What is the possible test approach for simulation system ?

A simulation system's primary responsibility is to replicate the behavior of the real system as accurately as possible. Therefore, a good place to start creating a test plan would be to understand the behavior of the real system.

- Subjective Testing:
It mainly depends on an expert's opinion. An expert is a person who is proficient and experienced in the system under test. Conducting the test involves test runs of the simulation by the expert, who then evaluates and validates the results based on some criteria. The advantage of this approach is that it can test conditions which cannot be tested objectively. The disadvantage is that the evaluation of the system is based on the expert's opinion, which may differ from expert to expert.
- Objective Testing:
It is mainly used in the systems where the data can be recorded while the simulation is running. This testing technique relies on the application of statistical and automated methods to the data collected.
Statistical methods are used to provide an insight into the accuracy of the simulation. These methods include hypothesis testing, data plots, principal component analysis, and cluster analysis.
Automated testing requires a knowledge base of valid outcomes for various runs of simulation. The knowledge base is created by domain experts of the simulation system being tested. The data collected in various test runs is compared against this knowledge base to automatically validate the system under test. An advantage of this kind of testing is that the system can continually be regression tested as it is being developed.
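A minimal sketch of such automated validation against a knowledge base of valid outcomes. The scenario names, values, and tolerances here are illustrative assumptions, not from any real system:

```python
# Sketch of objective, automated simulation testing: each run's outcome is
# compared against a knowledge base of valid results built by domain experts.

knowledge_base = {
    # (scenario, load) -> (expected outcome, allowed tolerance)
    ("steady_load", 100): (42.0, 0.5),
    ("peak_load", 500): (180.0, 2.0),
}

def validate_run(scenario, load, observed):
    """True when the observed outcome is within tolerance of the expected one."""
    expected, tolerance = knowledge_base[(scenario, load)]
    return abs(observed - expected) <= tolerance

print(validate_run("steady_load", 100, 42.3))  # True: within tolerance
print(validate_run("peak_load", 500, 190.0))   # False: outside tolerance
```

Because the comparison is mechanical, the whole suite can be re-run after every change, which is exactly the regression-testing advantage mentioned above.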


Sunday, September 19, 2010

Types of Simulation Systems: Dynamic, Discrete, Continuous and Social Simulation Systems

- Dynamic Simulation Systems: These have a model that accommodates changes in data over time. This means that input data affecting the results is entered into the simulation during its entire lifetime rather than just at the beginning. A simulation system used to predict the growth of the economy, which may need to incorporate changes in economic data, is a good example of a dynamic simulation system.

- Discrete Simulation Systems: These systems use models that have discrete entities with multiple attributes. Each of these entities can be in any state at any given time, represented by the values of its attributes. The state of the system is the set of the states of all its entities. This state changes one discrete step at a time as events happen in the system. Therefore, the actual design of the simulation involves making choices about which entities to model. Examples include simulated battlefield scenarios, highway traffic control systems, etc.

- Continuous Simulation Systems: If instead of using a model with discrete entities we use data with continuous values, we end up with a continuous simulation.

- Social Simulation Systems: Social simulation is not a technique by itself but uses the various types of simulation described above. However, because those techniques are applied in a specialized way for social simulation, it deserves a mention of its own. The field involves using simulation to learn about and predict various social phenomena such as voting patterns, migration patterns, and economic decisions made by the general population.


Saturday, September 18, 2010

Types of Simulation Systems: Deterministic, Stochastic, Static Simulation Systems

Simulation is widely used in many fields. Some of the applications are :
- Models of planes and cars that are tested in wind tunnels to determine the aerodynamic properties.
- It is used in computer games, e.g. SimCity, car games, etc., which simulate roads, people talking, playing games, and so on.
- War tactics that are simulated using simulated battlefields.
- Most of the embedded systems are developed by simulation software before they ever make it to the chip fabrication labs.
- Stochastic simulation models are often used to model applications such as weather forecasting systems.
- Social simulation is used to model socio-economic situations.
- It is extensively used in the field of operations research.

Simulation systems can be characterized in numerous ways depending on the characterization criteria applied. Some of them are:
- Deterministic Simulation Systems: These systems have completely predictable outcomes. Given a certain input, we can predict the exact outcome. Another feature of these systems is idempotency, which means that the results for any given input are always the same. Examples include population prediction models, atmospheric science, etc.
- Stochastic Simulation Systems: These systems have models with random variables. This means that the exact outcome is not predictable for any given input resulting in potentially very different outcomes for the same input.
- Static Simulation Systems: These systems use statistical models in which time does not play any role. These models include various probabilistic scenarios which are used to calculate the results of any given input. Examples of such systems include financial portfolio valuation models.
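The deterministic/stochastic distinction matters for testing: a stochastic model becomes reproducible (effectively deterministic for test purposes) when its random source is seeded. A small illustrative sketch, with a made-up random-walk "queue" model:

```python
import random

# Sketch: a stochastic simulation driven by a seeded random source,
# so repeated runs with the same seed give the same outcome.

def simulate_queue_length(seed, steps=1000):
    rng = random.Random(seed)   # private, seeded random source
    length = 0
    for _ in range(steps):
        length += 1 if rng.random() < 0.5 else -1  # random arrival/departure
        length = max(length, 0)
    return length

a = simulate_queue_length(seed=42)
b = simulate_queue_length(seed=42)
print(a == b)  # True: same seed, same outcome, so results are testable
```

With an unseeded source, the same input could yield very different outcomes from run to run, which is exactly what makes stochastic systems harder to validate.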


Friday, September 17, 2010

Types of Software Systems : Diagnostic Software Systems, Sensor and Signal Processing Systems, Simulation Systems

The type of software system refers to the processing that will be performed by that system.
Diagnostic Software Systems:
These systems help in diagnosing computer hardware components. When a new device is plugged into your computer, a diagnostic software system does some work; the "New Hardware Found" dialog is a result of this system.

Sensor and Signal Processing Systems:
Like message processing systems, these systems help in sending and receiving messages, but they are more complex because they make use of mathematics for signal processing. In a signal processing system, the computer receives input in the form of signals and then transforms the signals into output the user can understand.

Simulation Systems:
Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose of understanding the behavior of the system or evaluating various strategies for the operation of the system. A simulation is a software package that re-creates or simulates, albeit in a simplified manner, a complex phenomenon, environment, experience providing the user an opportunity for some new level of understanding.
Simulation systems are easier, cheaper, and safer to use than real systems, and are often the only feasible way to evaluate the real system. For example, learning to fly a fighter plane using a simulator is much safer and less expensive than learning on a real fighter plane. System simulation mimics the operation of a real system, such as the operations in a bank or the running of an assembly line in a factory.
Simulation early in the design cycle is important because the cost of mistakes increases dramatically later in the product life cycle. Also, simulation software can analyze the operation of a real system without the involvement of an expert, i.e. it can be used even by a non-expert such as a manager.


Thursday, September 16, 2010

Types of Software Systems : Batch Systems, Event Control Systems, Process Control Systems, Advanced Mathematical Models, Message Processing Systems

The type of software system refers to the processing that will be performed by that system.
Batch systems:
These are sets of programs that perform activities which do not require any input from the user. For example, when you type something in a word document, you press the keys you require and the corresponding characters get printed on the monitor; this is performed by batch systems. These systems contain one or more Application Programming Interfaces (APIs) which perform various tasks.

Event Control Systems:
These systems process real-time data to provide the user with results for the command that was given. For example, when something is typed in a word document and Ctrl+S is pressed, this tells the computer to save the document. These real-time command communications with the computer are provided by the event controls that are pre-defined in the system.

Process Control Systems:
Here, two or more different systems communicate to provide the end user a specific utility. When two systems communicate, coordination and data transfer become vital. Process control systems are the ones that receive data from a different system and instruct the system which sent the data to perform specific tasks based on the reply sent by the system which received the data.

Advanced Mathematical Models:
Systems which make use of heavy mathematics fall into the category of mathematical models. Usually, all computer software makes use of mathematics in some way or another. Examples of advanced mathematical models are simulation systems, which use graphics and control the positioning of software on the monitor, and decision- and strategy-making software.

Message Processing Systems:
A simple example of this type of system is the SMS management system used by a mobile operator, which handles incoming and outgoing messages. Another noteworthy system is the one used by paging companies.


Software Localization - some details in terms of how the process work - Part 2

In the previous post on this topic (Some details of software localization), I described the localization process at a high level, with not too many execution details. In this post, I will continue at the same high level, still continuing to explain the process of localization and its relation to the product schedule.
When you consider a product schedule, there are many milestones that you need to meet, one of which is a state where you freeze all UI-related content, specifically strings (these could be text on dialogs, error messages, or popups that appear when the mouse hovers over certain parts of the screen). The milestone where the strings need to be frozen can be a single milestone in the second half of the schedule, or it could be part of periodic milestones where individual features are frozen in terms of their UI.
Once the strings are frozen, they can be pulled out (as described in the first post in this series), and then sent to the appropriate language vendors for translation. Once the translation happens and these translations are sent back, the translations can be incorporated in the product code (in a resource file that contains all the strings for different languages). The net result is, when the product is launched in a specific language, the application picks up the relevant language strings and shows them.
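As a minimal sketch of the last step, the runtime lookup from a per-language resource table might look like the following (all names here, such as RESOURCES and translate, are illustrative and not from any real product):

```python
# Minimal sketch of runtime string lookup from a per-language resource table.
# When the product launches in a given language, the application picks the
# relevant translated string; unknown languages fall back to English.

RESOURCES = {
    "en": {"SAVE_PROMPT": "Save changes?", "EXIT": "Exit"},
    "de": {"SAVE_PROMPT": "Änderungen speichern?", "EXIT": "Beenden"},
}

def translate(key, lang, fallback="en"):
    """Return the translated string for key, falling back to English."""
    table = RESOURCES.get(lang, {})
    return table.get(key, RESOURCES[fallback].get(key, key))
```

Real products typically load such tables from compiled resource files rather than an in-memory dictionary, but the lookup logic is essentially this.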


Tuesday, September 14, 2010

Risk Based Testing and the strategy behind risk based testing

Risk analysis is applicable on the level of system, subsystem and individual function or module. Testing is used in software development to reduce risks associated with a system. Risk-based testing (RBT) is a type of software testing that prioritizes the features and functions to be tested based on the risk they represent.
Risk-based testing is a skill. It’s not easy to know the ways that a product might fail, determine how important the failures would be if they occurred, and then develop and execute tests to discover whether the product indeed fails in those ways.
The main input into risk-based testing is the business requirements supplied by the customer of a software application or system, which outline all of the features that must be present and explain how they should work, how each process should function and what the software should do.

Test managers prioritize tests to fit in with the project’s schedule and the test resources available. A risk-based approach to testing takes a much deeper look at the real underlying needs of the project and what really matters to the end customer.
Risk-based testing is about carefully analyzing each requirement and each test to ensure that the most important areas of the system, and at the same time those areas which are more likely to experience a failure, receive the most attention from the test team. When risk-based testing is deployed, every requirement must be rated for likelihood of failure and impact of failure.
By analyzing the risk of a failure occurring with a specific component or feature, and also the impact if that component or feature failed in a real-life situation, project resources can be more efficiently allocated to focus on testing what really matters in the limited time available.
A risk based testing (RBT) approach can help save time and reduce costs on your testing project. Risk based testing enables the test manager to make an informed choice when allocating test resources on a project.
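The rating step above can be sketched as a simple likelihood-times-impact score used to order the test effort. The requirement names and the 1-5 rating scale below are hypothetical:

```python
# Illustrative risk-based prioritization: rate each requirement for
# likelihood of failure and impact of failure (1-5 scales here), then
# sort so the riskiest areas receive test attention first.

requirements = [
    {"name": "login",         "likelihood": 4, "impact": 5},
    {"name": "report_export", "likelihood": 2, "impact": 2},
    {"name": "payment",       "likelihood": 3, "impact": 5},
]

def risk_score(req):
    """Risk exposure = likelihood of failure x impact of failure."""
    return req["likelihood"] * req["impact"]

prioritized = sorted(requirements, key=risk_score, reverse=True)
```

With limited time, the test team works down this list from the top, so the highest-exposure features are always covered.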


Sunday, September 12, 2010

Data Driven Testing - Automation Frameworks

Data driven testing is a very important aspect of test automation. In data-driven testing, the scripts read data from an external storage site like a file or database, rather than use values hard-coded in the script. A data-driven test includes the operations like retrieving input data from storage, entering data in an application form, verifying the results and continuing with the next set of input data. It significantly increases test coverage and also helps reduce the need to create more tests with different variables.

The data-driven testing approach can be used with unit, functional and load testing. Data-driven testing separates the test data from the test itself. This makes both the test and the data more flexible and reusable, and certainly much easier to maintain.

Data driven scripts are those application-specific scripts captured or manually coded in the automation tool’s proprietary language and then modified to accommodate variable data. Variables will be used for key application input fields and program selections allowing the script to drive the application with external data supplied by the calling routine or the shell that invoked the test script.

Data-driven testing is built around the need to test how an application deals with a range of inputs. An important use of data driven tests is ensuring that applications are tested for boundary conditions and invalid input. A data-driven test alleviates the pains of testing with large sets of data by separating test input from the test itself.
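The loop described above — retrieve a row of input data, exercise the application, verify the result, continue with the next row — can be sketched as follows. The function under test and the data rows are invented for illustration; in practice the rows would come from a CSV file or database:

```python
# Sketch of a data-driven test: inputs and expected results live in
# external rows (an inline list here stands in for a CSV file), and one
# generic routine is run against every row, including boundary values.

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

test_data = [
    # (price, percent, expected) -- boundary conditions included
    (100.0, 10, 90.0),
    (100.0, 0, 100.0),
    (100.0, 100, 0.0),
]

def run_data_driven(rows):
    """Run every row and collect any mismatches."""
    failures = []
    for price, percent, expected in rows:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append((price, percent, expected, actual))
    return failures
```

Adding a new test then means adding a data row, not writing a new script — which is where the coverage and maintenance benefits come from.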


Saturday, September 11, 2010

Keyword-driven Testing - Anatomy of a Successful Automated Testing Framework

Keyword-driven test automation, also commonly known as table-driven test automation, is typically an application-independent automation framework.
Keyword-driven testing is a technique that separates much of the programming work from the actual test steps so that the test steps can be developed earlier and can often be maintained with only minor updates, even when the application or testing needs change significantly. In keyword-driven testing, keywords are actions, which are nothing but the tasks to be executed in a test.
In a keyword-driven test, the functionality of the application under test is documented in a table as well as in step-by-step instructions for each test.

Base requirements of Keyword Driven Testing are:
- Test development and automation must be fully separated.
- Test cases must have a clear and differentiated scope.
- The tests must be written at the right level of abstraction.

Test creation in keyword-driven testing is divided into two stages:
- Planning stage, which includes determining the objects and operations to test and determining the customized keywords.
- Implementation stage, which includes building the object repository and developing keywords in function libraries.

Concepts of Keyword-driven testing are:
- Keywords such as click, enter and select.
- Business templates such as login and enter transaction.
- Action Words, or short "Actions".
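The idea can be sketched as a test table of (keyword, arguments) rows plus a dispatcher that maps each keyword to an implementation function. The application object and the keywords below are invented for illustration:

```python
# Sketch of keyword-driven testing: the test is a data table; the
# "programming work" lives in the keyword implementations, kept separate.

class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def login(self, user):
        self.logged_in = True
        self.fields["user"] = user

    def enter(self, field, value):
        self.fields[field] = value

def run_test(app, table):
    """Dispatch each (keyword, args) row to its implementation."""
    keywords = {
        "login": lambda args: app.login(*args),
        "enter": lambda args: app.enter(*args),
    }
    for keyword, args in table:
        keywords[keyword](args)

# A test case is now just data -- editable without programming skill.
test_table = [
    ("login", ["alice"]),
    ("enter", ["amount", "100"]),
]
app = FakeApp()
run_test(app, test_table)
```

Because the table is plain data, non-technical staff can write and maintain test cases, which is the re-usability and maintainability benefit listed below.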

Benefits of keyword driven testing are:
- It is appropriate for non-technical staff.
- Re-usability.
- Maintainability.
- Data-driving capability.
- Suited to test-first programming.

Keyword-driven testing should improve test management with easy test case writing, editing and sharing, with no programming required. Designing test cases earlier in the development process helps you find and fix defects sooner, reduces the testing effort and is cost effective.


Friday, September 10, 2010

What is an Automated Testing Framework ?

The organization needs a testing framework that provides an end-to-end approach for implementing testing activities. It is necessary to provide a flexible, scalable and cost-effective ‘on-demand’ testing approach for all functional and non-functional testing needs and to set up the overall testing process. It increases the long-term success of a project. Also, testing activities are easier to manage.

The automated tools require the test engineer to understand a scripting language (VBScript, JavaScript, etc.) to write their automated test cases. These tools usually have the ability to create scripts using record and playback, but this does not always produce the most efficient scripting code, and such code is not as reusable or maintainable.
An automated testing framework is a collection of libraries and utilities designed to ease unattended application testing in the hands of developers and end users of a specific piece of software. It is a:
- set of abstract concepts,
- processes,
- procedures and,
- environment in which automated tests will be designed, created and implemented.

An automated testing framework will achieve the fastest time to value by implementing automation on a foundation of proven processes, technology and knowledge, reducing total automation costs and increasing the focus on quality.


Wednesday, September 8, 2010

Automated Regression Testing and its features

Automated regression testing refers to the process by which computer software is regression tested in an automated manner using testing scripts. These scripts are run against the software code to validate the changes that have been made to the code.
Test scripts are the instructions which include the requirements for inputs and outputs of a test case. Each test case is entered into a test script to create a full test harness of an application. Test scripts can be automated by writing code that will execute the instructions within a test case. The simplest way to set up automated regression testing is to construct a suite of test cases, each of which consists of a test input file and a "correct answer" output file.
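The "test input file and correct-answer output file" suite described above can be sketched like this. The program under test and the cases are hypothetical stand-ins; in practice each pair would be files on disk:

```python
# Sketch of a golden-file regression suite: run the program under test
# on each stored input and compare against the stored "correct answer".

def program_under_test(text):
    """Hypothetical stand-in for the real program: uppercase its input."""
    return text.upper()

suite = [
    # (input, expected_output) -- in practice these are file pairs
    ("hello", "HELLO"),
    ("regression", "REGRESSION"),
]

def run_regression(cases):
    """Return every case whose actual output differs from the expected one."""
    return [(inp, exp, program_under_test(inp))
            for inp, exp in cases
            if program_under_test(inp) != exp]
```

An empty result means no regression; any entry in the list pinpoints an input whose behavior changed.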

Automated regression testing is important because :
- It provides portability when the code is moved to another machine.
- A confidence is achieved that changing the program has not introduced any bug.
- Automating regression testing allows users to manage the ever-increasing number of test cases and suites, while ensuring both reliability and affordability.
- Testing efforts are reduced to a greater extent.
- It could be tested in parallel with product development.
- Test coverage is increased.


Tuesday, September 7, 2010

Selecting a test strategy for regression testing

Regression testing is selective retesting of the system, executed with the objective of ensuring that bug fixes work and that those fixes have not caused any unintended effects in the system.
The selection of test cases for regression testing depends on:
- Knowledge of the bug fixes and how they affect the system.
- The areas of frequent defects.
- The areas which have undergone many or recent code changes.
- The areas which are highly visible to the users.
- The core features of the product which are mandatory requirements of the customer.
Selection of test cases for regression testing depends more on the criticality of the bug fixes than on the criticality of the defect itself.
Do not focus on test cases that are bound to fail or those which have little or no relevance to the bug fixes. Select more positive than negative test cases for the final regression test cycle, as negative test cases may create some confusion and unexpected heat at that stage. It is also recommended that the regular test cycles before regression testing have the right mix of both positive and negative test cases. Negative test cases are those which are newly introduced with the intent to break the system.

A good approach is to plan and act for regression testing from the beginning of the project, before the test cycles. One idea is to classify the test cases into various priorities based on importance and customer usage.
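One way to act on such a priority classification is sketched below: top-priority cases always run, second-priority cases run only when their area was touched by the change. The priority scheme and data are hypothetical:

```python
# Sketch of priority-based regression selection: P1 cases always run;
# P2 cases run only when the change touched their area; P3 are skipped.

test_cases = [
    {"id": "TC1", "priority": 1, "area": "checkout"},
    {"id": "TC2", "priority": 2, "area": "search"},
    {"id": "TC3", "priority": 2, "area": "checkout"},
    {"id": "TC4", "priority": 3, "area": "help"},
]

def select_regression(cases, changed_areas):
    """Pick the regression set for a change affecting the given areas."""
    return [tc["id"] for tc in cases
            if tc["priority"] == 1
            or (tc["priority"] == 2 and tc["area"] in changed_areas)]
```

Classifying cases up front like this is what makes the selection step mechanical when a bug fix arrives late in the cycle.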

For an Effective Regression Testing :
- Create a regression test plan identifying focus areas, strategy, test entry and exit criteria. It can also outline Testing Prerequisites, Responsibilities, etc.
- Create test cases.
- Defect tracking.


Monday, September 6, 2010

Overview of Regression Testing and its objectives.

Regression testing is an important part of the software development life cycle. Regression means going back. If any kind of modification is made to software, testing needs to be done to ensure that it works as specified and that it has not negatively impacted any functionality that it offered previously.
Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.

The objectives of regression testing are :
- To identify uncovered and unexpected defects.
- To ensure that changes or additions in the code are safe & are not liable to break the existing functionality of the application.
- To ensure and track the quality of its output.
- To ensure that changes to the software application have not introduced any new bugs.

Regression testing is necessary when there is a change made to an existing tested software. Each change implies more regression testing needs to be done to ensure that the system meets the project goals.
Regression testing can become cost effective if the test cases are automated; the test cases may then be executed using scripts after each change is introduced in the system. Also, teams do not execute all the test cases during regression testing. They test only what they decide is relevant.

In short, regression testing means rerunning tests of things that used to work to make sure that a change didn't break something else. The set of tests used is called the Regression Test Set. It's enormously helpful when you change an application, change the environment, and during integration of pieces.
Regression testing is a simple concept, but it needs to be done just right to work in the real world.


Sunday, September 5, 2010

Software Localization - some details in terms of how the process work - Part 1

What is software localization ? Localization means releasing software that works the same way in different countries. A person who is not experienced at this might wonder how the same software can work almost identically in different countries. After all, if you look at the text that shows up in a piece of software, the text is different in different languages, and it must be a lot of effort to get this done. Well, it is a lot of effort to get software that works properly in different countries, but not as much as you might expect. Consider a website that showcases news or has articles (which means the site is almost entirely text based). Such sites need the entire content rewritten into different languages, and the effort can be considerable; for sites that depend on getting news out quickly, the amount of time involved can be considerable too.
However, if you consider a piece of software, there are 2 main elements. One element is the text that a user views (whether text on dialogs, or error messages) - this needs to be different in different languages. On the other hand, a huge amount of the internals of a software product is the code, and this code does not need to be translated (which is a huge saving in effort), since this code is not visible to the users.
In this post, I will give a very high level summary of how the localization of software can be done, and then break this up in future posts. Inside the software, in any part of the code where there is an output of text that the user can see, there is a special section of code that identifies this as UI content. When this is done all over the software, all the text, error messages, information given to users, etc., has a small identifier that marks that this section of code is different.
Next, a script is run that gathers all these sections of code that have an identifier, and presto, you get a large set of phrases. These are then sent off for translation into different languages, and when translated, are put back language by language into the software. So, when a user launches the German version of the software, the code pulls out all the German translations, and shows those to the users instead of the English originals. Thus, you find that your software has become localized.
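The gathering script can be sketched in a few lines. The tr(...) marker below is invented for illustration (real systems use markers like gettext's _() or resource IDs):

```python
# Sketch of string extraction: user-visible strings are wrapped in a
# marker call, and a script scans the source for those markers to
# collect the phrase list that goes out for translation.

import re

SOURCE = '''
print(tr("File saved"))
log("internal debug")
show_error(tr("Disk full"))
'''

def extract_marked_strings(source):
    """Collect every string literal wrapped in the tr(...) marker."""
    return re.findall(r'tr\("([^"]*)"\)', source)
```

Note that the unmarked "internal debug" string is skipped — exactly the code-versus-UI split that saves translation effort.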
This is a simplified version of the entire process, and I will add more details in future posts.


Friday, September 3, 2010

Mutation Testing : how it is performed, benefits, operators and tools.

Mutation Testing is a powerful method for finding errors in software programs. Mutation testing involves deliberately altering a program’s code, then re-running a suite of valid unit tests against the mutated program. A good unit test will detect the change in the program and fail accordingly. Mutation testing is expensive to run, especially on very large applications. Mutation Testing is complicated and time-consuming to perform without an automated tool.

How Mutation testing is performed?


- Create a mutant software which is different from the original software by one mutation.
- Each of the mutant software has one fault.
- Test cases are applied to the original software and the mutant software.
- Results are evaluated. If the mutant software and the original software produce the same result, the applied test case is inadequate (the mutant survives). The test case is adequate if it detects the fault in the mutant software.
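The steps above can be sketched in miniature. The single-operator mutation and the trivial program under test are invented for illustration:

```python
# Sketch of one mutation-testing cycle: create a mutant differing by one
# mutation, run the same test against original and mutant, and call the
# test adequate ("the mutant is killed") only when the results differ.

ORIGINAL = "def add(a, b):\n    return a + b\n"

def make_mutant(source):
    """Apply one mutation: replace the first '+' operator with '-'."""
    return source.replace("+", "-", 1)

def run_test(source):
    """Compile the given source and exercise it with one test input."""
    namespace = {}
    exec(source, namespace)
    return namespace["add"](2, 3)

original_result = run_test(ORIGINAL)
mutant_result = run_test(make_mutant(ORIGINAL))
killed = original_result != mutant_result  # True: the test detects the fault
```

A test input like add(0, 0) would let this mutant survive, signalling that the test suite needs strengthening — which is exactly the error-detection level mutation testing adds.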

Benefits of Mutation Testing


- Introduces a new level of error detection.
- Uncover errors in code that were previously thought impossible to detect automatically.
- The customer will receive a more reliable and bug free software.

On what factors Mutation testing depends ?


- It depends heavily on the types of faults that the mutation operators are designed to represent.
- Mutation operators means certain aspects of the programming techniques, the slightest change in which may cause the program to function incorrectly.

Mutation Operators and Tools


Some mutation operators for languages like Java, C++ etc. are :
- Changing the access modifiers, like public to private etc.
- Static modifier change.
- Argument order change.
- Super keyword deletion.

Jester, Pester, Nester and Insure++ are some of the tools available for mutation testing.


Thursday, September 2, 2010

Overview of Ad hoc testing and what are its features.

Ad hoc testing is an expression largely used in the information technology industry. It is a kind of quality control testing that works on randomization and is not a fixed technique. This allows for maximum customization and can deliver more reliable results, which is why the term is so popular.

In ad-hoc testing, tests are carried out without planning or prior documentation. There is no formal test plan. Ad-hoc testing helps in deciding the scope and duration of the various other kinds of testing, and it also helps testers learn the application prior to starting any other testing.
This testing is a part of exploratory testing. The best part of this testing is discovery. Another use for ad hoc testing is to determine the priorities for your other testing activities. Ad hoc testing has been criticised because it isn't structured, but this can also be a strength: important defects can be found rapidly.

Ad-hoc testing can be done throughout the software development life cycle. The relationships between the subsystems can be exposed, as ad hoc testing can find holes in your test strategy. In this way, it serves as a tool for checking the completeness of your testing. Finding new tests in this way can also be a sign that you should perform root cause analysis.


Wednesday, September 1, 2010

What is Recovery Testing and what are its features.

Recovery testing tells how well an application is able to recover from a crash or hardware failure. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs.
- Recovery is the ability to restart operation after the integrity of the application is lost.
- The time taken to recover depends upon the number of restart points, the volume of the application, the training and skill of the people conducting recovery activities, and the tools available for recovery.
- Recovery testing ensures that the operations can be continued after a disaster.
- Recovery testing verifies recovery process and effectiveness of recovery process.
- In recovery testing, adequate back up data is preserved and kept in secure location.
- Recovery procedures are documented.
- Recovery personnel have been assigned and trained.
- Recovery tools have been developed and are available.

To use recovery testing, procedures, methods, tools and techniques are assessed to evaluate their adequacy. Recovery testing can be done by introducing a failure into the system and checking whether the system is able to recover. A simulated disaster is usually performed on one aspect of the application system. When there are many failures, recovery testing should be carried out for one segment and then for the next.

Recovery testing is used when the continuity of the system is needed in order for the system to perform or function properly. The user estimates the losses and the time span to carry out recovery testing. Recovery testing is done by system analysts, testing professionals and management personnel.

