Saturday, December 31, 2011

What are different aspects of web testing?

The term “web testing” defines itself well: it is a kind of software testing aimed at web applications, and its entire focus is upon them. A complete round of testing is required before a web application goes live, because this helps address the application's issues early.

The following issues are generally addressed in a typical web testing:
- Security of web applications.
- Basic functionality of the site under testing.
- Accessibility for users with disabilities as well as fully able users.
- Readiness for handling the expected traffic.
- Readiness for handling the expected number of users.
- Ability to survive a massive spike in user traffic.

A class of web testing tools called “web application performance tools,” or WAPTs as they are abbreviated, is used to test web-related interfaces and applications. These tools are used extensively for load testing, stress testing, and performance testing of web applications, web servers, web sites, and other web-related interfaces.

A web application performance tool simulates virtual users that repeatedly request either specified or recorded URLs. It allows the tester to specify how many times the virtual users repeat these requests. This makes it easy for the tool to check for performance bottlenecks in the web application or web site under test.
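As a rough illustration, the virtual-user mechanism described above can be sketched in Python. This is a minimal sketch and not a real WAPT: the `fetch` function, the URL list, and the timing bookkeeping are all hypothetical stand-ins (a real tool would issue actual HTTP requests and record server response times).

```python
import threading
import time

def fetch(url):
    """Hypothetical stand-in for an HTTP request; sleeps to mimic latency."""
    time.sleep(0.001)
    return 200

def virtual_user(urls, repeats, results):
    """One virtual user replays the recorded URL list a fixed number of times."""
    timings = []
    for _ in range(repeats):
        for url in urls:
            start = time.perf_counter()
            status = fetch(url)
            timings.append((url, status, time.perf_counter() - start))
    results.append(timings)

def run_load(urls, n_users, repeats):
    """Launch n_users virtual users concurrently and collect their timings."""
    results = []
    threads = [threading.Thread(target=virtual_user, args=(urls, repeats, results))
               for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

results = run_load(["/home", "/login"], n_users=5, repeats=3)
total_requests = sum(len(r) for r in results)
print(total_requests)  # 30 requests: 5 users x 3 repeats x 2 URLs
```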

Though web application performance tools face various challenges during testing, they should be able to check the following aspects:

- Compatibility of the operating system with the web application or web site.
- Compatibility of the browser with the web application or web site and the web server.
- Compatibility with Windows applications wherever required during back-end testing.

Web application performance tools also let the tester specify how the virtual users are distributed across the testing process and the testing environment.
There are three types of user load:
- Increasing user load:
In this type the number of virtual users is increased step by step, for example from 0 to 100. This is called a RAMP load.
- Constant user load:
In this type the number of users is kept constant, i.e., only a specific number of users use the application at any time.
- Periodic user load:
In this type the number of users is increased and decreased from time to time.
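The three load shapes above can be sketched as simple functions of the test step. This is purely illustrative; the function names, step sizes, and user counts are arbitrary assumptions, not part of any particular tool.

```python
import math

def ramp(step, max_users=100, step_size=10):
    """RAMP load: user count climbs step by step from 0 toward max_users."""
    return min(step * step_size, max_users)

def constant(step, users=50):
    """Constant load: a fixed number of users throughout the run."""
    return users

def periodic(step, base=50, amplitude=30, period=10):
    """Periodic load: user count rises and falls around a base level."""
    return base + round(amplitude * math.sin(2 * math.pi * step / period))

profile = [ramp(s) for s in range(12)]
print(profile)  # [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 100]
```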

Web security testing is another aspect of web testing; it determines whether the application's requirements are still met when the web application or web site is subjected to malicious input. The user interface of web applications can be tested using frameworks that provide a toolbox for testing web applications. Nowadays some open source web testing tools are also available. A few of them are:
- HTTP test tool: a scriptable protocol test tool used to test HTTP-based products.
- Apache JMeter: a tool written in Java for performance measurement and load testing.

There are also some Windows-based web testing tools available these days. The tester should first develop a web testing checklist before carrying out the testing. The checklist should include the following:

- Usability testing: tests how users use the pages and other controls.
- Functionality testing: tests links in web pages, database connections, and forms used in web pages.
- Interface testing: tests the web server and application server interfaces.
- Compatibility testing: tests compatibility with browsers, operating systems, mobile devices, and printers.
- Security testing: tests the security of the web server and applications.
- Performance testing: includes web load testing and web stress testing.
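As a small example of the functionality-testing item above (testing links in web pages), a link collector can be sketched with Python's standard `html.parser`. The sample page is hypothetical; a real check would then request each collected URL and verify the response status.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets so each link can later be requested and verified."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical page under test.
page = '<html><body><a href="/home">Home</a><a href="/contact">Contact</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/home', '/contact']
```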


What is application programming interface testing?

An application programming interface, or API as it is known by its abbreviated form, can be defined as a source-code-based specification. Typically an API is intended to be used as an interface by software components in order to communicate with each other. An API includes specifications and requirements for the required data structures, routines, variables, and object classes.

The requirements and specifications of an application programming interface can take many forms, such as POSIX, which is an international standard; the Microsoft Windows API, which is vendor documentation; or the Standard Template Library (STL) in C++, which is a programming-language library.

An application programming interface and an application binary interface stand in contrast to each other. The basic difference is that the application programming interface is a source-level interface whereas the application binary interface is a binary interface. The best example is given by POSIX and the Linux Standard Base: POSIX is an application programming interface, whereas the Linux Standard Base is an application binary interface.

An application programming interface has many features:
- It can be language dependent, meaning the API is only available using the elements and syntax of a particular programming language. This feature makes the API easier and more convenient to use.

- An application programming interface can also be language independent. This feature helps in calling the API from several programming languages, and it is the most desirable feature for a service-oriented API. Such an API is not bound to a specific system or process; it may be provided as a web service or as remote procedure calls.

- Sometimes the term “application programming interface” is used to refer to a complete programming interface, to a set of APIs provided by an organization, or even to a single function. The scope of the term is therefore usually determined by its usage.

- In some cases an API may describe the way in which a task is performed. In procedural programming languages like C, an action is usually mediated by a function call; hence, here the API usually describes all the functions and routines provided.

- At other times the application programming interface can be interpreted as the collection of header files included by a C program, together with their human-readable descriptions.

- Various program development environments provide the documentation associated with an application programming interface in digital format. An example is Perl, which comes with the perldoc tool.

When it comes to object-oriented languages, an application programming interface provides a description of the definitions and behaviors associated with a set of classes.

- An application programming interface generally prescribes the methods by which one can interact with and handle objects derived from the classes.
- An application programming interface is closely related to a software library.
- A library is the actual implementation of the rules that are set by the application programming interface.
- Like other components, APIs also need to be tested; for this, API testing is employed.
- API testing is somewhat different from other types of testing, since a GUI is rarely involved. For API testing, one needs to set up the testing environment, invoke the API with its respective parameters, and then analyze the result.
- The main problems arise while setting up the environment, since no GUI is involved. For calls that don't return anything, one needs some mechanism to check the API's behavior.

There are three main challenges faced by API testing:
- Parameter combination
- Parameter selection
- Call sequence
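The parameter-combination challenge listed above can be illustrated with `itertools.product`, which enumerates every combination of parameter values so that each one can be fed to the API and the failures recorded. The `create_account` API and its business rules below are entirely hypothetical, invented for this sketch.

```python
import itertools

def create_account(currency, tier):
    """Hypothetical API under test: rejects invalid parameter combinations."""
    if currency not in ("USD", "EUR"):
        raise ValueError("unsupported currency")
    if tier not in ("basic", "premium"):
        raise ValueError("unknown tier")
    if currency == "EUR" and tier == "premium":
        raise ValueError("premium not offered in EUR")  # a seeded business rule
    return {"currency": currency, "tier": tier}

currencies = ["USD", "EUR", "GBP"]
tiers = ["basic", "premium"]

failures = []
# Exercise every parameter combination and record which ones fail.
for currency, tier in itertools.product(currencies, tiers):
    try:
        create_account(currency, tier)
    except ValueError as exc:
        failures.append((currency, tier, str(exc)))

print(len(failures))  # 3 failing combinations: GBP x 2 tiers, plus (EUR, premium)
```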


Friday, December 30, 2011

What are the different fault injection methods?

In the context of software engineering, fault injection is a technique meant for improving test coverage. This is usually done by introducing faults into the program source code in order to exercise code paths; in particular, it is done to test the error handling paths that might otherwise be left unexercised and thus untested. Fault injection is often used in combination with stress testing.

Fault injection technique is considered to be one of the most important parts of developing robust software. To add to your knowledge, robustness testing or syntax testing or fuzz testing is also a kind of fault injection methodology which is commonly used to detect the potential vulnerabilities of the communication interfaces such as application programming interfaces, command line parameters and protocols.

The injected fault goes through a well-defined cycle before it becomes an observable failure. After the fault is injected, the program is executed. Upon execution, the injected fault may cause an error, that is, an invalid state within the boundaries of the system. This error may in turn cause further errors within the system; each error propagates toward the system boundaries, where it becomes observable. Error states observed at the boundaries of the system are known as failures. This whole mechanism is known as the “fault-error-failure cycle”.

This mechanism is a key concept in the context of dependability. Fault injection was first used to introduce faults at the level of the hardware system. This type of fault injection is known as HWIFI, or hardware-implemented fault injection, and it tends to simulate failures within a hardware system. Soon afterwards it was found that faults could also be introduced into software systems, and that doing so could help in assessing a software system more thoroughly.

These fault injection techniques are collectively known as SWIFI or software implemented fault injection. Techniques for software implemented fault injection can be classified into two major categories namely:

Compile time injection:
This can be defined as an injection technique in which the source code is modified so that simulated faults can be injected into the system. One popular method is “code mutation,” which changes existing code so as to induce faults that are very similar to those added unintentionally by programmers; testing against such mutated code is known as “mutation testing.” A modification of the code mutation technique, known as code-insertion fault injection, adds code rather than modifying the existing code. This is done with simple functions that take an existing value and perturb it, using some logic, into another value. Such functions are called perturbation functions.
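A minimal sketch of a perturbation function and a code-insertion injection point might look like this in Python. The `compute_total` target and the perturbation logic are hypothetical; real tools rewrite the source or bytecode rather than taking an `inject` flag.

```python
import random

def perturb_int(value, magnitude=5, rng=None):
    """Perturbation function: returns the value shifted by a small random amount."""
    rng = rng or random.Random()
    delta = rng.randint(1, magnitude)  # never zero, so a fault is always injected
    return value + rng.choice((-1, 1)) * delta

def compute_total(prices, inject=False, rng=None):
    """Hypothetical code under test, with an inserted fault-injection call."""
    total = sum(prices)
    if inject:
        total = perturb_int(total, rng=rng)  # the injected perturbation
    return total

prices = [10, 20, 30]
assert compute_total(prices) == 60          # without injection, behavior is normal
faulty = compute_total(prices, inject=True, rng=random.Random(42))
print(faulty != 60)  # True: the perturbation always changes the result
```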

Run time injection:
This technique makes use of a software trigger to inject a fault into the system while it is executing. Using this technique, faults can be injected in a variety of ways, such as those listed below:

- Time based triggers
An interrupt is generated when the timer reaches a specified time. The interrupt handler associated with the timer will inject the fault.

- Interrupt based triggers
Software trap mechanisms and hardware exceptions are effectively used to generate a fault in the code of the system at a particular point. This gives instant access to a specific memory location.

- Corruption of memory space
- Syscall interposition techniques
- Network level fault injection
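A time-based trigger, the first item in the list above, can be sketched with Python's `threading.Timer`: when the timer fires, a flag flips and the next call into the instrumented code path raises an injected fault. The `read_sensor` target is hypothetical.

```python
import threading

class TimeTriggeredFault:
    """Runtime fault injection: after the timer fires, a shared flag makes
    the instrumented code path fail, mimicking a time-based trigger."""
    def __init__(self, delay):
        self.active = False
        self._timer = threading.Timer(delay, self._fire)

    def _fire(self):
        self.active = True

    def start(self):
        self._timer.start()

    def wait(self):
        self._timer.join()  # Timer is a Thread, so join() waits for it to fire

fault = TimeTriggeredFault(delay=0.05)

def read_sensor():
    """Hypothetical code under test, instrumented to honor the fault flag."""
    if fault.active:
        raise IOError("injected sensor failure")
    return 21.5

fault.start()
assert read_sensor() == 21.5   # before the trigger fires, reads succeed
fault.wait()                   # wait until the timer has fired
try:
    read_sensor()
    outcome = "no fault"
except IOError:
    outcome = "fault observed"
print(outcome)  # fault observed
```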


Wednesday, December 28, 2011

What are different characteristics of resilience testing?

What does resilience mean? It is important to know the meaning of resilience first, because many people confuse recovery, reliability, and resilience. They think these are all the same, but that is not so.

- Resilience means the ability to recover from a change.
- It’s slightly different from recovery and reliability.
- Every software application or system has to have some degree of resilience in it in order to be more secure and recoverable and reliable.
- Resilience is a non functional requirement of a software system or application.
- Resilience testing falls under the category of non functional testing.

Many non-functional tests are often used interchangeably because their scopes overlap.
One thing to note is that software performance is a broad term and includes many specific requirements like scalability, reliability, compatibility, security, and resilience.

Non functional testing contains the following testing techniques:
- Compliance testing
- Baseline testing
- Documentation testing
- Compatibility testing
- Load testing
- Localization testing
- Endurance testing
- Internationalization testing
- Recovery testing
- Performance testing
- Security testing
- Volume testing
- Usability testing
- Stress testing
- Scalability testing
- Resilience testing

Software developers with disaster recovery plans or techniques are said to be actively and effectively engaged in reducing the risk of a software system crash, failure, or data loss. But the irony is that these disaster recovery plans can breed complacency.

This happens because many software developers or testers have a false sense of security based on the mere existence of their disaster recovery plans. To ensure the safety of the software system, developers need to test their data recovery strategies. Some developers or testers feel this doesn't apply to their programs because they conducted resilience testing when the system was first put in place.

But one should always keep this in mind that the testing environment, the testing strategies and the range of cost effective solutions and tools available are always changing. It is required to keep pace with all these changes.

- The resilience testing strategies need to be tested and reviewed frequently in order to keep up with these changes.
- Some software developers and testers worry about the time and cost of the test cases that would give a better grade of testing, and so fail to put their good intentions into practice; hence the software system remains lacking in resilience.
- This does not necessarily mean that each and every available test case should be implemented for testing the system.

- There should be test plan for carrying out the resilience testing.
- A structured methodology always ensures that the amount of time consumed is minimum and the effectiveness of the testing is maximum.
- Resilience testing is somewhat similar to stability testing, failover testing, or recovery testing.
- Resilience testing is aimed at determining the behavior of the software system or application in the case of unreliable events, catastrophic problems and system failures, crashes and data losses.
- Resiliency is one of the core attributes of a good and reliable software system or application.
- Any software or hardware malfunctioning or failures are likely to have a considerable impact on the software system or application.

A software system needs to be resilient against the following:
- Changes in requirements and specifications of the system.
- Hardware and software faults.
- Changes in data sources.

Resilience needs to be incorporated in the following stages of software development:
- Software design
- Hardware specification
- Configuration
- Documentation
- Testing


Tuesday, December 27, 2011

What are different characteristics of Scalability Testing?

Scalability can essentially be defined as the ability of a software application, network, process, or program to handle an increasing workload gracefully and to carry out its assigned tasks effectively. Throughput is the classic measure of this ability.

- Scalability as such is very difficult to define without practical examples.
- Therefore, scalability is defined based on some dimensions.
- Scalability is very much needed in communication areas like in a network, in software applications, in handling huge databases and it is also a very important aspect in routers and networking.
- Software applications and systems having the property of scalability are called scalable software systems or applications.
- They improve throughput to a surprising extent after the addition of new hardware. Such systems are commonly known as scalable systems.
- Similarly, if a design, network, protocol, program, or algorithm works efficiently when applied to larger problems, where the input data is large or the system has many nodes, it is said to scale efficiently.

If the program fails as the quantity of input data increases, the program is said not to scale. Scalability is greatly needed in the field of information technology. Scalability can be measured along several dimensions, and scalability testing deals with testing those dimensions.
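One way to probe whether a program scales is to measure its throughput (work completed per unit time) at increasing input sizes and watch whether it stays roughly flat. The workload below is a toy stand-in chosen only for illustration.

```python
import time

def process(items):
    """Toy workload: time taken should grow roughly linearly with input size."""
    total = 0
    for x in items:
        total += x * x
    return total

def throughput(n_items):
    """Items handled per second at a given input size."""
    items = list(range(n_items))
    start = time.perf_counter()
    process(items)
    elapsed = time.perf_counter() - start
    return n_items / elapsed

# A scalable workload keeps throughput roughly flat as input grows;
# collapsing throughput at larger sizes is a sign the program does not scale.
for n in (10_000, 100_000):
    print(f"{n} items: {throughput(n):,.0f} items/sec")
```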

The kinds of scalability testing have been discussed in detail below:

- Functional scalability testing:
In this testing new functionalities which are added to the software application or the program to enhance and improve its overall working are tested.

- Geographic scalability testing:
This testing tests the ability of the software system or application to maintain its performance, throughput, and usefulness irrespective of how its working nodes are distributed geographically.

- Administrative scalability testing:
This testing deals with the increment of working nodes in software, so that a single difficult task is divided among smaller units making it much easier to accomplish.

- Load scalability testing:
This testing can be defined as testing the ability of a distributed program to expand and contract so as to accommodate heavier or lighter workloads accordingly.

There are several examples available for scalability today. Few have been listed below:

- The routing table of a routing protocol, which grows with the size of the network.
- A DBMS (database management system) is scalable in the sense that more and more data can be loaded into it by adding the required new devices.
- Online transaction processing system can also be stated as scalable as one can upgrade it and more transactions can be done easily at one time.
- Domain name system is a distributed system and works effectively even when the hosting is on the level of World Wide Web. It is scalable.

Scaling is done basically in two ways. These two ways have been discussed below:

- Scaling out, or scaling horizontally: This method involves the addition of several nodes or workstations to an already distributed software application. It has led to the development of technologies, namely batch-processing management and remote maintenance, which were not available before.

- Scaling up, or scaling vertically: This can be defined as the addition of hardware or software resources, such as CPUs or memory devices, to a single node of the system. This method of scaling has driven tremendous improvement in virtualization technology.


Monday, December 26, 2011

What are different characteristics of baseline testing?

Most of us don’t know what a baseline actually is, so let us first clear up what it means.

Generally a baseline is defined as a line that forms the base for any construction, measurement, comparison, or calculation. In the context of engineering and science it refers to a point of reference. Like load tests, stress tests, and resilience tests, baseline tests form a very essential and important part of performance testing.

It’s a very crucial part of a good software system or application.

Baseline testing is well known for improving the overall performance of the software system. It has been found that baseline testing identifies nearly 85 percent of a software system's issues.

Baseline testing also helps a great deal in solving most of the problems that are discovered. A majority of the issues are solved through baseline testing.

The software tester should have a clear idea of how the baseline testing will be performed and implemented, and should know why it is being done.

Some questions to ask:
- What is the baseline testing referring to in the software system or application?
- How is the baseline testing to be carried out, and what is the test plan?
- How are the baseline tests to be executed?
- How does baseline testing differ from performance testing?

Advantages of Baseline Testing
- The main advantage here is that a large amount of time is saved by baseline testing.
- Actually it saves the time overhead.
- If we look at the whole scenario, performance testing is the most time-consuming and complex process. Often testers or developers cannot spare enough time to carry it out efficiently, so some testers reduce the number of baseline tests. But in fact, baseline testing is what saves most of the time.
- Reducing the number of baseline tests causes potential errors and bugs which in turn consume more time later in getting identified and corrected.
- Baseline testing is where most of the time can be saved.
- A performance testing plan must compulsorily include baseline testing and load testing. If more time is available, endurance testing, volume testing, and stress testing should also be included.

The testers or the developers should have a clear and good understanding of the testing that is to be performed.
- Baseline testing is usually performed for each script with 1, 2, 5, 10, 20, 30, and at most 50 users to establish the baselines.
- This is done typically for determining the response times. Generally baseline tests are carried out for each individual script that is a part of the load testing and any identified problem is immediately isolated.
- To get good and useful results, the baseline tests should be executed for at least 20-30 minutes.
- In baseline testing the test data is not lost when a test run fails and the data can be prepared for the next test quickly.
- Baseline testing reduces the time consumption of load testing also.
- Baseline testing should be properly monitored. Improper monitoring requires repetition of tests which is again wastage of time.
- Expecting meaningful results without proper monitoring is pointless. Time and again, baseline testing has proved itself a very important part of performance testing.
- Baseline testing shows the improvement of the software system or application when the problems and errors are fixed.
- Baseline testing should be done carefully, without taking shortcuts. By the time you begin load testing, your software system will already be performing well.
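The baseline procedure sketched in the notes above, running one script at small, stepped user counts and recording response times, might be prototyped like this. The `transaction` function is a hypothetical stand-in for one test script, and concurrency is simulated sequentially for simplicity.

```python
import statistics
import time

def transaction():
    """Hypothetical transaction under test; a real baseline drives one script."""
    time.sleep(0.002)

def baseline(user_counts=(1, 2, 5, 10), iterations=20):
    """Record the median response time at each small user count."""
    results = {}
    for users in user_counts:
        timings = []
        for _ in range(iterations):
            start = time.perf_counter()
            for _ in range(users):  # sequential stand-in for concurrent users
                transaction()
            timings.append((time.perf_counter() - start) / users)
        results[users] = statistics.median(timings)
    return results

for users, median_s in baseline().items():
    print(f"{users} users: median response {median_s * 1000:.1f} ms")
```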


Sunday, December 25, 2011

What are different characteristics of endurance testing?

The term endurance denotes the stamina, durability, and resilience of something: its ability to perform actively and exert itself over a long period of time.

- Endurance also stands for the ability of the thing to resist, recover from and withstand.
- It also gives the degree of immunity of a thing towards trauma, fatigue or wounds.
- For software systems or applications, endurance testing has been developed to check the endurance of the software system or applications.
- Endurance testing is also known as soak testing.
- It is carried out to determine if the concerned software system or application can sustain the expected continuous load.
- The memory utilization is monitored during the endurance testing.
- This is done so as to check the potential memory leaks.
- The endurance is checked during the endurance testing but quite often, performance degradation is overlooked.
- The good response times and throughput are ensured throughout the life of the software system or application i.e., from the beginning of the usage of the software system or application to the last time of usage.

The endurance testing is aimed at testing the sustainability of a software system or application. Its goal is to check how the system behaves or responds under the prolonged significant pressure or load for an extended and significant period of time.

- It aims to test how the software system or application behaves under the sustained use.
- Endurance testing checks for the problems that may occur as a result of prolonged execution of the software program.
- It tests the application or the system under heavy loads of data for a desired period of time which is typically more than the normal usage time.
- Endurance testing is usually carried out to identify the problems that the software system or applications face under the prolonged execution.
- It also aims at identifying buffer overflows and memory leaks, which are otherwise not identifiable without carrying out endurance testing.
- The testing for the behavior of the program under significant load over a significant period of time is done with normal ramp down and ramp up time.

This can be illustrated with the following example:

- Always some memory is allocated to the objects of the program.
- In some cases it happens that the allocated memory is not de-allocated and thus remains occupied.
- This leads to a situation of over consumption of memory in which a chunk of memory is again and again taken away by the program whenever it is executed.

- Eventually the software program reaches a point when the leftover memory is not sufficient for the program to execute.
- At this point, the program crashes and is reported to have a breakdown. Such a situation is called a memory leak.
- Endurance testing is a part of performance testing.
- The other types of testing under performance testing include load testing, volume testing, and stress testing.
- The basic aim of the endurance is to check and correct the performance related problems being faced by the software system or application.
- Those problems appear only after the software system or application has been running for a long time under some significant load or stress.
- Some endurance test cases are available for free and have been checked into a repository.
- It is recommended to include a report argument so that you can visualize memory usage during the test and share it.
- The endurance tests create a real world model which employs a normal and high load pattern of usage for the software system or application to determine its potential and problems.
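The memory-leak scenario illustrated above can be reproduced and observed with Python's standard `tracemalloc` module, which is one simple way to monitor memory utilization during an endurance-style run. The request handler and its "forgotten" reference are deliberately contrived.

```python
import tracemalloc

leaked = []  # objects that are allocated but never released

def handle_request(payload):
    """Hypothetical handler with a leak: every call retains memory forever."""
    record = {"payload": payload, "copies": [payload] * 100}
    leaked.append(record)  # the forgotten reference: memory is never freed
    return len(record["copies"])

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for i in range(1000):  # endurance-style sustained load
    handle_request(f"request-{i}")
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = after - before
print(growth > 0)  # True: memory grows steadily under sustained load
```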


Saturday, December 24, 2011

What are different aspects of localization?

Localization is just the opposite of internationalization. The process of localization involves adapting an internationalized software system to a particular locale or region, with its specific standards and languages.

- Localization testing also forms a part of testing and typically focuses up on localization and internationalization aspects of the software products or applications.
- To localize a software system or application one needs to have the knowledge about the sets of characters which are employed in the development of today’s software product and applications.
- It also involves the basic understanding of the risks associated with the employed sets of characters.
- Localization testing is carried out to determine how well the build of the software product has been translated into a particular target language.
- For localization testing there should be functional support for that particular locale which has already been validated, because the test is founded on the results of globalization validation.
- The product must be globalized to a high extent; if it is not, it will not support the given language, and there is little point in testing that language first. Even so, the tester still has to check that the application being delivered is in working condition.

Process of localizing a software product
- The process of localizing a software product or application involves the full translation of the application's graphical user interface and the adaptation of its graphics for the locale.

- It also includes translating the application into the particular native language. Localizing a business can similarly turn into a big task, because the main intention there is to implement correct business processes and practices for the locale.

- There are so many differences in how a locale conducts business. The user interface and content files are the two basic things which are mainly edited during the process of localization.

- A checklist is referred side by side during localization so as to keep a track of the process. The localization testing checklist includes the following:
1. Rules for sorting
2. Conversions in upper case
3. Conversions in lower case
4. Rules to check spelling mistakes
5. Printers
6. Operating systems
7. Size of papers
8. Text filters
9. Keyboards
10. Mouse
11. Date formats
12. Hot keys
13. Available memory
14. Measurements
15. Rulers for measurement
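As a tiny illustration of one checklist item above (date formats), locale-dependent formatting can be sketched as below. The per-locale format table is a hand-written assumption for this sketch; real localization work would draw on CLDR data, for example via the Babel library.

```python
from datetime import date

# Hypothetical per-locale settings, hard-coded only for illustration.
LOCALE_FORMATS = {
    "en_US": "%m/%d/%Y",  # month first
    "en_GB": "%d/%m/%Y",  # day first
    "de_DE": "%d.%m.%Y",  # day first, dot-separated
}

def format_date(d, locale):
    """Format a date according to the (assumed) conventions of a locale."""
    return d.strftime(LOCALE_FORMATS[locale])

d = date(2011, 12, 24)
print(format_date(d, "en_US"))  # 12/24/2011
print(format_date(d, "de_DE"))  # 24.12.2011
```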

The localization process can be initiated with a team consisting of only a few translators, desktop publishers (DTPs), engineers, and linguists.

But the localization process is carried out only when certain defined conditions are met, that is, when there is a combination of the following:

1. Independent contractors.
2. In-house resources.
3. Full-scope services of a localization firm.

Since the localization process mainly involves translating all strings into the target language and customizing the GUI, or graphical user interface, the result is appropriate for the targeted market.

Software products offered to the international market often face a lot of domestic or in-house competition, which results in the blending of the localized product into a particular native language.

After the translation of the language and the updating of the graphical user interface, localization testing is needed to ensure that the software product is working well and without any problem and it also ensures that the software product is well migrated to the international market.


Friday, December 23, 2011

What are different characteristics of security testing?

Security testing, as its name suggests, can be defined as a process to determine whether a software or information system or application is capable of protecting data and keeping it secure.
It also determines that the software or the information system keeps the functionality of the system intact and as intended.

Security testing needs to cover up six important concepts. They have been discussed below in detail:
1. Confidentiality
- It can be defined as a measure of security which seeks to protect against the disclosure of information or data to third parties or any unauthorized parties or individuals.
- This is not the only way of ensuring security of the information.

2. Integrity
- This is a security measure intended to inform the receiver whether the information or data being provided is correct and has not been altered.
- Most often, the same underlying techniques are used for both the confidentiality and integrity aspects of security.
- There is a basic difference between integrity and confidentiality: for integrity, additional information is provided along with the data.
- This additional information usually forms the basis not only for encoding the whole communication but also for an algorithmic check.
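The idea of additional information forming the basis of an algorithmic check can be sketched with an HMAC; the shared key and messages below are purely illustrative:

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # hypothetical key, for illustration only

def sign(message: bytes) -> str:
    """Produce the additional integrity information (an HMAC tag)."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Re-run the algorithmic check on the receiver's side."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = b"transfer 100 units to account 42"
tag = sign(msg)
assert verify(msg, tag)                                        # untampered message passes
assert not verify(b"transfer 900 units to account 42", tag)    # altered message fails
```

Note that the same tag also supports confidentiality-related workflows, but here it only proves the data was not modified in transit.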

3. Authentication
- This security measure involves confirming the identity of a particular person.
- It ensures that a packaged product contains exactly what its packaging and labeling claim.
- The process of authentication is also used to trace the origins of a software system, application or artifact.
- Authentication plays a big role in determining whether a computer software system or application can be trusted.

4. Authorization
- The process of authorization is an important security measure.
- It can be defined as a process for determining whether a person who has requested some service is allowed and eligible to receive that service or to carry out some operation.
- Authorization assumes that the identity of the requester has already been verified.
- The best example of an authorization security measure is access control.
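Access control as an authorization measure can be sketched with a simple role-to-permission table; the roles and operations here are invented for illustration:

```python
# Hypothetical role-to-permission table; the names are illustrative only.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, operation: str) -> bool:
    """Authorization: decide whether an (already authenticated) user's
    role entitles them to carry out the requested operation."""
    return operation in PERMISSIONS.get(role, set())

assert is_authorized("editor", "write")       # eligible for the service
assert not is_authorized("viewer", "delete")  # request is refused
```

Authorization test cases typically enumerate exactly such role/operation pairs and assert that each request is granted or refused as specified.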

5. Availability
- The availability security measure assures that communication services and information will always be ready for use whenever they are needed.
- It ensures that the required information is always available to authorized people when they need it.

6. Non-repudiation
- It basically falls under the category of digital security measures.
- The non-repudiation security measure confirms that data, information and messages are transferred and received by the people or parties claiming to have sent them.
- Non-repudiation offers a way to guarantee that the party who sent a message cannot later deny sending it, and that the recipient cannot deny having received it if any issue is raised.

Security testing as a term has a number of different meanings and cannot be explained in just one way. A security taxonomy provides a better way to understand all these concepts:

- Discovery
- Vulnerability scan
- Vulnerability assessment
- Security assessment
- Penetration test
- Security audit
- Security review


Thursday, December 22, 2011

What are the benefits of finding faults early?

There are several benefits of finding faults in a software system, program or application early, during the development phase. Researchers from the Software Engineering Institute (SEI) carried out a study to find the benefits of detecting faults early while a software project is in its development phase.

Below are listed some of the observations of the research:

- Around 70 percent of the total errors and bugs creep into the software system during the early stages of the software development cycle. Of these faults, only 3.5 percent are found early.
- Around 80 percent of the errors and faults are discovered in the later stages of the software development cycle.
- The cost of fixing faults early was recorded at around 25x.
- The cost of fixing faults later in the development stages was recorded at around 16x or sometimes even higher.

According to the findings of the research, one can save a large percentage on rework and software repair by finding and fixing faults early, during the initial stages of the software development cycle. Since the quality of software is elusive, it is difficult to define and practically impossible to measure.

The quality of a software system is affected by some essential factors which cannot be measured. Some such kinds of factors are:

- Reliability
- Efficiency
- Portability
- Usability
- Testability
- Modifiability
- Understandability

Also, it is not enough to define the quality of software based on the above factors only. Faults may occur in the software system itself or in the surrounding testing environment, which includes the test cases and documentation. Sometimes the reported anomalies themselves can also be faults.

An anomaly is formally defined as a condition which deviates from expectations based on requirements, design, documentation, specifications and standards. Errors are made which cause failures, which in turn lead to reported anomalies. These reported anomalies are analyzed, and the errors or bugs causing the failures are found and fixed. Sometimes the whole source is reworked. Rework is the process of revising all or part of the software or application; it is one of the methodologies for correcting reported anomalies.
There are basically 2 types of rework:

- Avoidable Rework
It can be defined as rework which would have been avoided had the previous source code of the software program been error free, consistent and complete. It represents the effort spent fixing problems of the software system that could have been found, fixed and avoided earlier.

- Unavoidable Rework
It can be defined as rework which could not have been avoided, since the software developers and programmers could not foresee or predict the changes while programming the software system or application. Examples of such changes are changing environmental constraints and changing user requirements.

In some cases, a software system benefits from allowing avoidable rework. This is because it is sometimes more cost effective to check for errors and modify the system afterwards than to put significant effort into fixing errors correctly up front. Avoidable rework is sometimes accepted because a fault may be introduced in a particular phase where it is not profitable to look for it in that same phase.


Wednesday, December 21, 2011

What are different aspects of Internationalization?

For the promotion of a software system or application, it should be available in all the different regions of the world. Otherwise, the software artifact or product remains confined to one region and is not exposed to the whole world.

Nowadays, due to globalization, a software application developed in one region is used all over the world, under different regional standards and languages. So there is a need to program the software or application in such a way that it can be used easily and performs well when modified for use in different regions and languages.

In other words, we can say that there’s a need for internationalizing the product. Internationalization can be defined as a process of coding and designing a product in such a way that it can perform well and properly on any platform after modification for use in different regional standards and languages.

During the process of internationalization, the functionalities of the software product are kept safe and ensured. All the messages are externalized so that they can be adapted to different languages, locales and standards.

Internationalization testing is well known as I18N testing. It is so called because there are 18 letters between I and N in the word internationalization.

Internationalization is usually tested through pseudo-localization. This technique works on the principle of simulating localized products: it involves many of the steps that a localization center performs when localizing a software product.

The pseudo-localization process involves three basic steps, discussed below:

- Message files are pseudo-translated by inserting a specific prefix and suffix into every message. Localizable non-message files are also modified, but they are not translated. Files in other formats must also be modified in some way.
- The pseudo-translated message files and other pseudo-translated files are installed in a specified locale, and that locale is specified in the software product. For Java resources, the files are named with a suffix corresponding to the required locale; then the same procedure is followed for installing the files.
- The software product is run under the earlier specified locale. The GUI displays the prefixes and suffixes that were added instead of the default English text. All the files run in their modified form, including the localizable non-message files.
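The first step above, pseudo-translating message files by prefix and suffix insertion, might be sketched like this; the "xx_XX" locale marker and message keys are invented for illustration:

```python
# A minimal sketch of pseudo-localization step 1: wrap every message in a
# locale-specific prefix and suffix so untranslated strings are easy to
# spot in the GUI. The marker "[xx_XX ...]" is an invented convention.

def pseudo_translate(messages: dict, prefix: str = "[xx_XX ", suffix: str = "]") -> dict:
    """Return a pseudo-translated copy of a message catalogue."""
    return {key: f"{prefix}{text}{suffix}" for key, text in messages.items()}

catalogue = {"app.title": "My Application", "btn.ok": "OK"}
pseudo = pseudo_translate(catalogue)

# A string that appears in the running GUI without the prefix/suffix was
# hard-coded in the source and missed by internationalization.
assert pseudo["btn.ok"] == "[xx_XX OK]"
```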

The main advantage of this process is that one can exercise a software product, including all its features, without knowing the other languages. One can also translate the full files. Both internationalization and localization are means for adapting a software product to the different languages of the world.
- Internationalization seeks to make software adaptable to any language without modifying its internal structure.
- Internationalization and localization together are known as globalization.

An internationalized software product is fit for use in a defined range of locales. Several languages coexist in an internationalized system. A software system is considered fully internationalized only when the user has the choice to select the user interface language. The following are the four pillars of the internationalization process:
- Computer encoded text
- Language
- Scripts
- Numeral systems.


Tuesday, December 20, 2011

What are different characteristics of load testing?

Load testing can be defined as the process of putting demands on a software system or application or a device and measuring the response of the software system or application or that device.

- Usually load testing is carried out to determine the behavior of the software system, application or device under both kinds of conditions namely the normal load condition and anticipated peak load condition.

- Load testing helps to determine the maximum degree of the operating capacity of the software system or application along with any bottlenecks and to check which element or error and bug is causing degradation of the software system or the application.

- Sometimes the load levied on the software system or application is increased beyond the normal usage limits, in order to test the response and behavior of the software system at unusually high and peak loads. This kind of testing is known as stress testing.

- The load is usually kept so high that the expected result is nothing more than loads of errors and bugs.

- In load testing, no clear boundary is known to exist at which an operation ceases to be a load test and becomes a stress test.

To date, it is not very clear what the specific goals of load testing are.

The term load testing is often used synonymously with reliability testing, software performance testing and volume testing.

- Load testing is classified under the category of non-functional testing. The term carries a wide number of meanings in the field of software testing.
- Load testing can be said to refer to the practice of modeling the expected usage of a software system, program or application by simulating a multi-user situation in which many users access the program at the same time.

- Load testing holds more relevance for multi-user software systems or applications, which are often built using a client-server model, i.e., using web servers.
- This does not imply that only software systems built using a client-server model can undergo load testing; other types of software systems or applications are also eligible for load testing.

- The most accurate and effective load testing simulates the actual usage environment, as opposed to testing the software system or application using theoretical or analytical modeling.
- Load testing allows us to measure the quality-of-service (QoS) performance of a website based on actual customer behavior.
- Most of the tools and frameworks used to carry out load testing follow the classical load testing paradigm.
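A minimal sketch of the multi-user simulation idea, assuming a stand-in request handler rather than a real web server:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Stand-in for the system under test; a real load test would call
    an HTTP endpoint here instead of sleeping."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated processing time
    return time.perf_counter() - start

def run_load(users: int, requests_per_user: int) -> list:
    """Simulate `users` virtual users issuing requests concurrently."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(handle_request)
                   for _ in range(users * requests_per_user)]
        return [f.result() for f in futures]

latencies = run_load(users=10, requests_per_user=5)
print(f"requests: {len(latencies)}, "
      f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")
```

Raising `users` until the mean latency degrades is exactly the transition from a load test toward a stress test described above.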


Monday, December 19, 2011

What are the characteristics of system integration testing?

In the context of software development and engineering, system integration can be defined as bringing together all the software and hardware components of a system and ensuring that the subsystems and assemblages cooperate and work together as one software system or application.

- System integration is the process of linking different computing systems, subsystems and software applications together, functionally and physically, so as to make the whole software system a well-coordinated one.

- This whole process is carried out by the system integrator, who brings together the discrete parts of the software system or application by employing a variety of techniques.

Some of the techniques have been mentioned below:
1. Computer networking
2. Enterprise application integration
3. Business process management
4. Manual programming


- A system itself is an aggregation or collection of subsystems cooperating so that the system is able to deliver high-speed, high-quality performance.
- System integration also considers already existing and disparate subsystems or assemblages.
- The process of system integration involves joining the subsystems together through their interactive interfaces.
- System integration is all about knowing how to glue two subsystem interfaces together without affecting their functionality.
- System integration is also about making the system more valuable and giving it the capabilities it needs to achieve its aim and desired behavior.

The testing of this whole process is called system integration testing, or SIT for short. It can be defined as a process that verifies the coexistence of a software system with other software systems.

- In system integration testing it is always assumed that the individual subsystems have already been tested through integration testing and have passed the tests at all previous levels.
- System integration testing seeks to test the specified interactions of these sub systems with the other sub systems.
- Usually a pre-system-integration test is carried out before the major system integration testing.
- Special test cases are designed for system integration testing.
- The data-driven method is the most commonly used method for system integration testing.
- The specialty of this method is that it can be carried out with a minimum of software testing tools.
- It imports and exports data and then examines the behavior and function of each data field within every layer of the integrated software system.

There are 3 main routes for data flow in system integration testing:
- Data state within the integration layer
The integration layer here means a middleware or a web service. This layer acts as a medium for the transfer of data and involves the following steps:
(a) cross checking of data properties,
(b) execution of unit tests, and
(c) investigation of middleware and server logs.

- Data state within the database layer: This combination involves checking of data, checking of data properties, checking data validations and constraints, checking stored procedures, and investigating server logs for the purpose of troubleshooting.

This is all done according to the specifications of the documentation.

- Data state within the application layer:
This combination involves marking of fields, creation of data map, and checking of data properties.

Apart from the three combinations mentioned above, several other combinations can be carried out based on the time available for system integration testing. System integration testing is performed at the time of integrating two systems; for example, it is carried out when integrating a bank accounting system with an inventory management system.
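The database-layer checks described above might be sketched as follows, using an in-memory SQLite table as a stand-in for the real database layer; the `orders` schema and `import_order` path are invented for illustration:

```python
import sqlite3

# Data-driven SIT sketch: data is pushed through a (hypothetical)
# integration path and its properties are then verified against the
# database layer's constraints.

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    "  id INTEGER PRIMARY KEY,"
    "  amount REAL NOT NULL CHECK (amount > 0)"
    ")"
)

def import_order(order_id: int, amount: float) -> None:
    """Stand-in for the integration layer handing data to the database."""
    conn.execute("INSERT INTO orders (id, amount) VALUES (?, ?)",
                 (order_id, amount))

import_order(1, 19.99)

# Data-field check: the value survived the layer crossing unchanged.
row = conn.execute("SELECT amount FROM orders WHERE id = 1").fetchone()
assert row[0] == 19.99

# Constraint check: invalid data is rejected at the database layer.
try:
    import_order(2, -5.0)
    raise AssertionError("constraint should have rejected a negative amount")
except sqlite3.IntegrityError:
    pass
```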


Sunday, December 18, 2011

What are different characteristics of system testing?

System testing is a term heard often. But what does it actually mean? As the name suggests, one can make out that it has got something to do with the testing of systems. Formally, it can be defined as the testing of both components of the system, i.e., software and hardware.

- System testing is carried out on a finished, complete and integrated system to check the system's compliance with the specified conditions and requirements.
- System testing is categorized as black box testing and therefore does not require any knowledge of the internal structure and design of the source code.
- According to the rules of system testing, only integrated components that have successfully passed integration testing can be given as input to system testing.
- The software system that has been incorporated successfully with the appropriate hardware system can also be taken as input to the system testing.
- The system testing aims at detecting all the discrepancies, defects and constraints.
- The software system integrated with any other software or hardware system that has successfully passed system integration testing can also be considered as an input for system testing.
- System testing deals with the inconsistencies and flaws present in the system software, which is made up of integrated software and hardware components.
- System testing, like other testing methodologies, is a limited kind of testing.
- System testing is concerned with the detecting of defects within the assemblages i.e., inter- assemblages as well as within the software system as a whole entity.
- Unlike integration testing and unit testing, the system testing is carried out on the whole software system as one unit.
- System testing mainly deals with basic and important contexts namely functional requirement specification (FRS) and system requirement specification (SRS).
- System testing is not only about testing the design of the software system but also its behavior and the features expected by the customers.
- System testing also tests the software system up to the limits and also beyond the limits and conditions specified for the software and hardware components.
- System testing is performed to explore the functionality of a software system.
- System testing is carried out after the system has been finished and completed, before it is deployed.

There are various testing techniques that together make up a complete system testing methodology. Few have been listed below:
- Stress testing
- Load testing
- Error handling testing
- Compatibility testing
- Performance testing
- Usability testing
- Graphical user interface testing
- Security testing
- Volume testing
- Scalability testing
- Sanity testing
- Exploratory testing
- Smoke testing
- Regression testing
- Ad hoc testing
- Installation testing
- Recovery testing
- Reliability testing
- Fail over testing
- Maintenance testing
- Accessibility testing

While carrying out the system testing it is very important to follow the systematic procedures.
- Only specifically designed test cases should be used for testing.
- Examiners test the system by trying to break it, e.g., by giving it incorrect data.
- Unit testing and integration testing form the base of the system testing.
- System testing forms a crucial step of the process of quality management.
- The system is tested to determine whether it meets all the functional requirements; system testing also helps in the verification and validation of the application architecture and business requirements.

Conditions to be met before system testing is carried out:
- All the units must have successfully passed the unit testing.
- All the modules or units must have been integrated and successfully passed the integration test.
- The surrounding environment should resemble the production environment.

Steps that should be followed during the system testing:
- A system test plan should be created.
- Test cases should be created.
- Scripts should be created to build the environment.
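The pre-test conditions listed above can be expressed as executable checks; the check functions here are hypothetical placeholders for real report and configuration queries:

```python
# A minimal sketch of automating the system-testing preconditions.
# Each check would, in practice, query a test report or compare the
# test environment's configuration against production.

def units_passed_unit_testing() -> bool:
    return True   # placeholder: would read the unit-test report

def modules_passed_integration_testing() -> bool:
    return True   # placeholder: would read the integration-test report

def environment_resembles_production() -> bool:
    return True   # placeholder: would diff environment configuration

preconditions = {
    "unit tests passed": units_passed_unit_testing,
    "integration tests passed": modules_passed_integration_testing,
    "environment matches production": environment_resembles_production,
}

failed = [name for name, check in preconditions.items() if not check()]
assert not failed, f"system testing cannot start: {failed}"
print("all preconditions met; system testing may begin")
```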


Saturday, December 17, 2011

What are different characteristics of unit testing?

Unit testing, as the name itself suggests, is a testing technique for examining individual units or modules of a program to make sure they have been programmed properly, do not contain any flaws, and are fit for integration.

A unit or module is the smallest part of a software system or program and cannot be divided further. So units need to be flawless in order to make up a good software system, and they should be able to work in collaboration with each other. The concept of a unit differs between kinds of programming: in object-oriented programming a unit is taken to be an interface or a data type (e.g., a class or structure), whereas in procedure-oriented programming a unit refers to an individual function.

Unit testing is carried out with the aid of unit test cases, which are developed by professional programmers who are expert white box testers. Unit testing is done during the development of the program, i.e., at the most basic level. This level is the unit testing level.

Each and every test case is designed differently. Before testing a unit it should be isolated properly. Several methods are used for isolation. Few have been listed below:
- Mock objects
- Test harnesses
- Method stubs
- Fakes
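As an illustration of isolation via mock objects, here is a sketch in which a hypothetical payment gateway dependency is replaced by a mock so the unit can be tested alone; the `charge_customer` function and gateway API are invented for this example:

```python
from unittest.mock import Mock

def charge_customer(gateway, customer_id: str, amount: float) -> bool:
    """Unit under test: depends on an external payment gateway."""
    if amount <= 0:
        return False
    return gateway.charge(customer_id, amount) == "ok"

# Isolation via a mock object: no real gateway is contacted.
gateway = Mock()
gateway.charge.return_value = "ok"

assert charge_customer(gateway, "cust-1", 10.0)
gateway.charge.assert_called_once_with("cust-1", 10.0)

# The unit's contract also covers invalid input: the dependency
# must not even be touched for a non-positive amount.
gateway.reset_mock()
assert not charge_customer(gateway, "cust-1", 0.0)
gateway.charge.assert_not_called()
```

The mock both stands in for the dependency and records how the unit used it, which is exactly the "contract" role of a unit test described below.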

Software developers carry out unit tests to determine whether each unit is behaving as desired or not. Unit tests can be run either manually or through build automation. There are several benefits of unit testing.

- Unit testing aims at examining the individual parts or units or modules and to determine that they are working properly.
- A unit test serves as a contract or a condition that a unit must accomplish or satisfy to get listed as a proper unit.
- One of the basic advantages of unit testing is that it helps in detecting flaws early during the development phase. This is very much needed as it reduces the number of potential bugs and future hard work that would have been done to correct those resulting errors.
- Unit testing allows the tester to modify the code so that it is still able to integrate with the other units of the program. But, later it is required to develop new test cases so that the changes in the unit can be tested for errors.
- Unit testing makes it easier for the software developer to carry out other types and levels of testing.

Nowadays, ready-made test cases are available. Using such test cases saves the software developer's time, but they are not as effective in testing. Above all, the best thing about unit testing is that it maintains the accuracy of the software system.

- The hardships of integration testing are reduced by a large amount.
- Every unit has detailed documentation, so it becomes much easier to understand the function of the unit.
- Documentations are needed to develop effective test cases.
- Test cases bring out the weak points as well as strong points of the units.
- Each test case is uniquely designed to test a different aspect of the unit like class and its behavior.
- Sometimes while unit testing, objects within an object are encountered, and it becomes important to check those objects too. This can cause the unit test to fail, so it is important to isolate the unit before testing it, using the techniques mentioned above.

No kind of testing can be expected to find each and every error. In addition, unit testing is limited to testing only the functionality of a unit and nothing more than that.


Friday, December 16, 2011

What are different types of white box testing? Part 2

White-box testing, also known as clear box testing, transparent box testing, glass box testing or structural testing, can be defined as a method for testing software applications or programs. White box testing includes techniques that are used to test the program's algorithmic structure and workings, as opposed to its functionality or the results of its black box tests.

White-box testing can be defined as a methodology to verify whether the source code of the software system works as expected. White box testing is a synonym for structural testing.

Unit testing and integration testing were already discussed in the previous post.

Types of White Box Testing



- Function level testing:
This type of white box testing is carried out to check the flow of control of the program. Adequate test cases are designed to check control flow and coverage. During function-level white box testing, simple input values can be used.
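A minimal sketch of function-level testing with simple input values, one test case per branch of the control flow; the `classify_age` function is invented for illustration:

```python
def classify_age(age: int) -> str:
    """A small function with three control-flow branches."""
    if age < 0:
        return "invalid"     # branch 1
    elif age < 18:
        return "minor"       # branch 2
    else:
        return "adult"       # branch 3

# One simple input value per branch gives full branch coverage.
assert classify_age(-1) == "invalid"
assert classify_age(10) == "minor"
assert classify_age(30) == "adult"
```

Designing the inputs from the code's branches, rather than from its specification, is what makes this white box rather than black box testing.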

- Acceptance level testing:
This type of white box testing is performed to determine whether all the specifications of a software system have been fulfilled or not. It involves various other kinds of tests, such as physical tests, chemical tests and performance tests.

- Regression level testing:
This type of white box testing can also be called retesting. It is done after all modifications have been made to the software and hardware units. Regression-level white box testing ensures that the modifications have not altered the working of the software and have not given rise to more bugs and errors.

- Beta level testing:
Beta testing is the phase of software testing in which a selected audience tries out the finished software application or product. It is also called pre-release testing.


Thursday, December 15, 2011

What are different types of white box testing? Part 1

White-box testing, also known as clear box testing, transparent box testing, glass box testing or structural testing, can be defined as a method for testing software applications or programs. White box testing includes techniques that are used to test the program's algorithmic structure and workings, as opposed to its functionality or the results of its black box tests.

White-box testing can be defined as a methodology to verify whether the source code of the software system works as expected. White box testing is a synonym for structural testing.

There are certain levels only at which white box testing can be applied. The levels have been given below in the list:
- Unit level
- Integration level
- System level
- Acceptance level
- Regression level
- Beta level

Unit level testing:

This type of white box testing is used for testing the individual units or modules of a software system; sometimes it also tests a group of modules. A unit is the smallest part of a program and cannot be divided further into smaller parts. Units form the basic structure of a software system. Unit-level white box testing is performed to check whether or not a unit is working as expected, so that later it can be integrated with the other units of the system. It is important to test units at this level because after integration it becomes difficult to find errors. Only the software engineer who wrote the code knows where the potential bugs can be found; others cannot easily track them, so such flaws remain known only to the writer. Unit-level white box testing can find up to 65 percent of the total flaws.

- Integration level testing:
In this type of white box testing, the software components and the hardware components are integrated and the program is executed. This is done mainly to determine whether the software units and hardware units work together in harmony. It includes designing test cases which check the interfaces between the two components.


Wednesday, December 14, 2011

What are different characteristics of recovery testing?

Recovery testing makes clear what it is through its name. We all know what recovery means: to recover is to return to the normal state after some failure or illness. This qualitative aspect is also present in today's software systems and applications.

- The recovery of a software system or application is defined as its ability to recover from hardware failures, crashes and similar problems that are quite frequent with computers.
- Before the release of any software, it needs to be tested for its recovery factor. This is done by recovery testing.
- Recovery testing can be defined as the testing of a software system or application to determine its ability to recover from fatal system crashes and hardware problems.

One should always keep in mind that recovery testing is not to be confused with reliability testing, since reliability testing aims at discovering the points at which the software system or application tends to fail.

- In a typical recovery test, the system is forced to fail, crash or hang in order to check how the recovery mechanism of the software system or application responds and how strong it is.
- The software system or application is forced to fail in a variety of ways.
- Every attempt is made to discover the failure factors of the software system or application.

Objectives of Recovery Testing
- Apart from the recovery factor, the recovery testing also aims at determining the speed of recovery of the software system.
- It aims to check how fast the software system or application is able to recover from a failure or crash.
- It also aims to check how well the system recovers.
- It checks the quality of the recovered software system or application. There is a defined type and extent to which the software should be recovered.
- The type and extent are mentioned in the requirements and specifications section of the documentation.
- Recovery testing is all about testing the recovering ability of the software system or application i.e., how well it recovers from the catastrophic problems, hardware failures and system crashes etc.

The following examples will further clarify the concept of recovery testing:

1. Keep the browser running and assign it multiple sessions, then restart your system. After the system has booted up, check whether the browser is able to recover all of the sessions that were running before the restart. If the browser can recover them, it is said to have good recovery ability.

2. Suddenly restart your computer while an application is running. After the boot-up, check whether the data which was being worked upon by the application is still intact and valid. If the data is still valid, intact and safe, the application has a great deal of recovery capability.

3. Set an application such as a file downloader to receiving or downloading mode, then unplug the connecting cable. After a few minutes, plug the cable back in and let the application resume its operation, and check whether the application is still able to receive the data from the point where it left off. If it is not able to resume receiving the data, it is said to have a bad recovery factor.
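The third example can be sketched in code; the in-memory source and the `ConnectionError` stand in for a real network stream and an unplugged cable:

```python
# A minimal sketch of a resumable download: because the downloader
# remembers its offset, it can continue after a simulated failure.

class ResumableDownload:
    def __init__(self, source: bytes):
        self.source = source
        self.received = bytearray()

    def receive(self, chunk_size: int, fail_after: int = None) -> None:
        """Pull chunks from the source; optionally simulate a dropped cable
        after `fail_after` chunks."""
        pulled = 0
        while len(self.received) < len(self.source):
            if fail_after is not None and pulled >= fail_after:
                raise ConnectionError("cable unplugged")
            start = len(self.received)  # the resume point
            self.received += self.source[start:start + chunk_size]
            pulled += 1

data = b"0123456789" * 10
dl = ResumableDownload(data)

try:
    dl.receive(chunk_size=10, fail_after=3)  # interrupted mid-transfer
except ConnectionError:
    pass

dl.receive(chunk_size=10)                    # plug the cable back in
assert bytes(dl.received) == data            # resumed from where it left off
```

A recovery test would assert exactly this: after the failure, the transfer completes from the saved offset rather than restarting from zero.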

Recovery testing tests the ability of application software to restart the operations that were running just before the loss of the integrity of the applications. The main objective of recovery testing is to ensure that the applications continue to run even after the failure of the system.

Recovery testing ensures the following:
- Data is stored in a preserved location.
- Previous recovery records are maintained.
- A recovery tool is developed and available at all times.


Tuesday, December 13, 2011

What are different characteristics of performance testing?

Performance testing means a lot more than simply measuring the speed of a software system or application. It covers a wide range of software engineering concepts and functionality.

In performance testing, a software system is not merely tested against its functionality, specifications and requirements; it is also tested against the measurable performance characteristics of the final system or application.

- Performance testing is both quantitative and qualitative kind of testing.
- In the field of software engineering, performance testing is typically done to determine the effectiveness and speed of a software system, hardware system, computer or device etc.

- Being a quantitative process, performance testing involves lab measurements such as the response time and the MIPS (millions of instructions per second) at which a software system performs.

- It also involves tests of the qualitative attributes of a system such as scalability, reliability and interoperability.

- Performance testing and stress testing are often performed in conjunction.

- It’s a general kind of testing done to determine the behavior of a system, whether hardware or software, in terms of stability and responsiveness when the system is given a significant workload.

- It is also carried out to measure, validate, verify and investigate the qualitative attributes of the system like resilience and resource usage.

- Performance testing is a subcategory of performance engineering.
- Performance engineering aims to incorporate performance into the architecture and design of a software or hardware system.
- It is basically done before the actual coding of the program.
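The quantitative side described above, measuring response time, can be sketched briefly. In this illustrative snippet, `work()` is a placeholder for whatever operation is under test; a real harness would call the system under test and typically report more percentiles.

```python
import statistics
import time

def work():
    # Placeholder workload; a real test would exercise the system under test.
    return sum(i * i for i in range(10_000))

# Collect repeated response-time samples.
samples = []
for _ in range(50):
    start = time.perf_counter()
    work()
    samples.append(time.perf_counter() - start)

mean_ms = statistics.mean(samples) * 1000
p95_ms = sorted(samples)[int(len(samples) * 0.95)] * 1000  # rough 95th percentile
print(f"mean={mean_ms:.3f} ms  p95={p95_ms:.3f} ms")
```

Reporting a high percentile alongside the mean matters because occasional slow responses are exactly what users notice.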

Performance testing consists of many subcategories. A few are discussed in detail below:

1. Stress testing:
This testing is done to determine the limits of the capacity of the software application. Basically it checks the robustness of the application under heavy loads, i.e., loads above the stated maximum limit.

2. Load testing:
This is the simplest of these tests. It is usually done to check the behavior of the application, software or program under different amounts of load. The load can be several users using the same application, or the difficulty level or length of the task. A time is set for task completion, and the response times are recorded simultaneously. This test can also be used to test databases and network servers.

3. Spike testing:
This testing is carried out by suddenly spiking the load on the application (for example, a sharp increase in the number of users) and observing in each case whether the application is able to take the load or fails.

4. Endurance testing:
As the name suggests, this test determines whether the application can sustain a specific load for a certain time. It also checks for memory leaks, which can lead to application damage, and watches for performance degradation. Throughput is checked at the beginning, at the end and at several points during the test, to see whether the application continues to behave properly under sustained use or crashes.

5. Isolation testing:
This test is basically done to check for the faulty part of the program or the application software.

6. Configuration testing:
This testing tests the configuration of the software application. It also checks the effects of configuration changes on the application and its performance.
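The load-testing idea above, several simulated users exercising the same operation while response times are recorded, can be sketched with a thread pool. This is only an illustration: the `time.sleep` stands in for the real task, and a real load test would drive the actual application or server.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Simulated user task; returns its own response time."""
    start = time.perf_counter()
    time.sleep(0.01)            # stand-in for the real operation under test
    return time.perf_counter() - start

# Run the same task under increasing amounts of load and record timings.
for users in (1, 5, 20):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(handle_request, range(users)))
    print(f"{users:2d} users: max response {max(times) * 1000:.1f} ms")
```

Watching how the worst-case response time grows with the number of concurrent users is the core observation of a load test.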

Before carrying out performance testing some performance goals must be set since performance testing helps in many ways like:

- It tells us whether the application meets the performance criteria or not.
- It can compare the performance of two software applications.
- It can find faulty parts of the program.


Monday, December 12, 2011

What are different characteristics of documentation testing?

Though the term documentation is used in various different ways, it usually refers to the process of providing evidence.
- Documentation is very important and useful.
- It also refers to the process of documenting the knowledge about a software system.
- But in the context of software engineering, documentation refers to the process of writing the software documentation.
- Individuals or the professionals who carry out the process of writing up of the software documentation are called documentalists.

Every piece of documentation has to follow a standard composition. According to this composition, it should include the following:

- Written information and instructions for performing any technical task or project,
- Data and media in any format, and details about reproduction,
- Other related content.

Today documentation about any software system or application is available in many formats like user guides, user manuals, online help, white papers and quick reference guides. Nowadays documentation in hard copy is rarely seen.

- The documentation for software systems or applications is distributed to the public via software products, online applications and websites.
- Certain principles are followed while preparing the documentation for the software or the hardware product.
- Documentalists always compulsorily follow the ISO standards.
- These standards are not available for the general public.
- Apart from principles, certain guidelines are also to be followed regarding the documentation.

The procedure for preparing a perfect documentation involves the following steps:
- Document drafting
- Formatting
- Submitting
- Reviewing
- Approving
- Distributing
- Repositing
- Tracking

Production of documentation involves contributions from corporate communicators and technical writers, since technical writers have expert knowledge of the software and are also good at writing content. They are able to design the information architecture, and to cooperate easily with SMEs (subject matter experts, who are none other than software developers, engineers, and other people such as clients and customers) to prepare the kind of documentation the users need.

In the field of computer science, there exist the following types of documentation:

- RFP or request for proposal
- SOW or statement of work or scope of work
- Requirements
- System design and functional specifications
- Software design and functional specification
- Change management
- Error tracking
- Enhancement tracking
- UTA or user test and acceptance

Nowadays many kinds of software applications are used to create documentation, but the SDF (software documentation folder) is the one most used by engineers to create documentation for a software system or application.

While development of the software system or application is in progress, the software engineers keep a written record detailing the build of the application. This essentially includes an interface section and a requirements section, in order to provide more details about the communication interface of the software system or application.

- Usually a notes section is provided giving the details about the proof of the concept, tracking errors and enhancements.
- Apart from this a testing section is also included to give the details about how the software system or application was tested.
- The documentation conforms to the requirements and specifications stated by the client or the customer.
- The final documentation is a detailed description of the build and design of the software.
- Apart from this, it lays down the instructions for installing and uninstalling the software.
- Documentation testing is thought of as one of the most cost-effective kinds of testing.
- Any discrepancy in the documentation can prove very costly.
- The documentation is tested in a variety of ways, according to the degree of complexity of the software system.


Sunday, December 11, 2011

What are different characteristics of compatibility testing?

First, let us clear up the concept of compatibility.

Compatibility of a software system, application or hardware component can be defined as its ability to work efficiently with all versions, newer or older, of the relevant CPU architectures and operating systems.

- Compatibility is one of the most important properties of any software system or application and hardware system.
- Not everyone, everywhere, uses the same CPU architectures and operating systems.
- Therefore, it becomes necessary to make software and hardware compatible with all sorts of available systems.
- Otherwise, the software or hardware will remain confined to a single CPU architecture and operating system.
- There will be no benefit to the developers who built that software or hardware.

In other words, the software or hardware product would not gain promotion and hence would not be widely accepted.
Keeping all these issues in view, care is taken to provide maximum compatibility in the software or hardware system. Before its release to the public, the software or hardware product needs to undergo testing to determine its compatibility.
Such testing is called compatibility testing.

- Compatibility testing is categorized as non-functional testing.
- Compatibility testing can be defined as the testing that is conducted on the software application or the hardware component to determine the concerned product’s compatibility with the computing technological environment.

A proper computing technological environment contains all of the below mentioned aspects:

- Bandwidth handling capacity: the environment should be able to handle bandwidth of the networking software and hardware.
- It should have the computing capacity of a hardware platform such as HP 9000 or IBM 360.
- It should be compatible with all kinds of peripherals. In other words, it should have compatibility of peripherals. Peripherals include DVD drive, printers, monitors, speakers and so on.
- It should be compatible with all operating systems, including UNIX, MVS, Windows and so on.
- It should support all types of databases, such as Oracle, DB2 and Sybase.
- It should be well compatible with other software systems such as messaging tools, networking systems and web servers.
- It should be browser compatible, supporting all browsers available to date, such as Netscape, Internet Explorer, Firefox, Google Chrome and Safari.

The above aspects together make up a proper and efficient computing environment to carry out compatibility testing for the software and hardware products.

Compatibility testing comprises many smaller tests, such as peripheral compatibility testing and browser compatibility testing. Browser compatibility testing is also known as user experience testing.
It involves the checking of web and network applications on all the available different browsers.

It is done to ensure the following:
- The application under test should respond in exactly the same way in all the different browsers, exhibiting the same features and functionality in each.
- The visual experience must be the same for users irrespective of which browser they use to reach the web application.
- The application should be backwards compatible, i.e., it should work with older versions of the browsers as well.
- The application should be carrier compatible, i.e., data transmission should be the same no matter which carrier is being used. Some well-known carriers include Orange, Sprint, Verizon, Airtel and O2.
- Apart from software compatibility, the application should be hardware compatible.
- The applications should be compiler compatible i.e., there should be no difference in compilation by different compilers. All the compilers should compile the source code correctly.
- The applications should be able to run on emulators.
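A browser-compatibility run like the one described above is often summarised as a pass/fail matrix. The sketch below is purely illustrative: the browser and feature names are examples, and `run_check` is a stand-in for driving a real browser (for instance with an automation tool) and comparing its behaviour against a reference.

```python
browsers = ["Firefox", "Chrome", "Safari", "Internet Explorer"]
features = ["login form", "search", "checkout"]

def run_check(browser, feature):
    # Placeholder: pretend everything passes except one invented known gap.
    # A real check would load the page in that browser and compare results.
    return not (browser == "Internet Explorer" and feature == "checkout")

# Build the compatibility matrix: browser -> feature -> pass/fail.
matrix = {b: {f: run_check(b, f) for f in features} for b in browsers}

for browser, results in matrix.items():
    failed = [f for f, ok in results.items() if not ok]
    status = "OK" if not failed else "FAILS: " + ", ".join(failed)
    print(f"{browser:18s} {status}")
```

The value of the matrix form is that a single glance shows which browser/feature pairs need attention before release.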


Saturday, December 10, 2011

What are different characteristics of Compliance testing?

Compliance testing perhaps sounds like a rare kind of testing, one less often heard about. It can be defined as an audit of a software system or application carried out against well-known criteria.

There are many kinds of compliance testing and some are even developed as per the requests of the customers or the clients. Basically the compliance tests are of the following types:

Systems in Development
This refers to compliance testing that verifies that the software system or application under development meets the lockdown standards, configurations and specifications requested by the client or customer.

Operating systems and applications
- This refers to compliance testing that verifies that an operating system and the software systems or applications on it have been configured and designed appropriately, as per the requirements, specifications and lockdown standards given by the clients and customers.

- Thus, this kind of compliance testing provides robust, adequate and efficient controls to ensure that the availability, integrity and confidentiality of the software system or application are not affected during normal usage and are maintained throughout the whole working process.

Management of IT and enterprise architecture
- This refers to compliance testing that verifies that all the in-place IT management infrastructure aspects of the software system or application have been put in their appropriate place.

- This is generally done to ensure that the audit, change controls, security procedures and business continuity have been documented, formulated, put in their proper place and remain effective.

Inter- connection Policy
This refers to compliance testing that verifies that the business continuity controls and adequate security measures governing the connection of the software system to other systems (such as telecommunication systems, extranets, intranets, the internet and so on) have been put in their appropriate place, cross-checked against the specifications and requirements stated by the clients and customers, and fully documented.

These were some standard compliance tests.
Apart from these there are some normal compliance tests which encompass either a few or all of the compliance tests mentioned above.
- Some lockdown policies are applied to the underlying applications or software systems and operating systems.
- Some of these policies are passed by the clients or the customers and some by the concerned parties.
- These policies can be referred to and used as guidelines as and when required by the customers or clients, once the software testers or developers have already performed a compliance test.
- They can also be referred after the penetration testing and vulnerability assessment of the software system or application so that more security measures can be applied to the system’s enterprise in order to improve its security.
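At its core, a lockdown-policy check compares an actual configuration against a set of required rules. The sketch below illustrates the idea only; the policy keys, rules and configuration values are invented examples, not entries from any actual lockdown guideline.

```python
# Each policy entry maps a setting name to a rule the actual value must satisfy.
lockdown_policy = {
    "password_min_length": lambda v: v >= 12,
    "remote_root_login": lambda v: v is False,
    "audit_logging": lambda v: v is True,
}

# Invented snapshot of a system's actual configuration.
actual_config = {
    "password_min_length": 8,
    "remote_root_login": False,
    "audit_logging": True,
}

# Collect every setting that fails its rule.
violations = [key for key, rule in lockdown_policy.items()
              if not rule(actual_config[key])]

print("compliant" if not violations else f"violations: {violations}")
```

Expressing the policy as data rather than prose is what makes the audit repeatable: the same check can be rerun after every configuration change.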

The National Security Agency (NSA, as it is often abbreviated) has provided a number of lockdown policies and guidelines to raise awareness of the security affairs affecting our operating systems, software systems and applications.
The policies cover the following:

- Database servers
(a) Oracle 10g
(b) Oracle 9i
(c) Microsoft SQL Server

- Operating systems
(a) Apple server operating systems
(b) Apple Mac OS
(c) Microsoft Windows NT
(d) Microsoft Windows XP
(e) Microsoft Windows 2000
(f) Sun Solaris 8
(g) Sun Solaris 9
(h) Microsoft Windows Server 2003


- Routers
- Switches
- Web servers and browsers
- IP and VoIP telephony
- SQL Server 2000
- BIND
- Novell eDirectory


Friday, December 9, 2011

How lack of compatibility causes software failure?

A software system or application is said to be compatible if it is able to execute efficiently on all the models available in a particular family of computers. Different types of computers vary to a greater or lesser extent in aspects such as reliability, resilience and performance. These variations affect the execution of the software system, and also the outcome or result of the program.

- Software compatibility can be defined as the compatibility a particular software system, application or program has when it runs on a particular CPU (central processing unit) architecture, for example Intel Pentium or PowerPC.

Software compatibility is not confined to different kinds of computers.

- It extends over a vast area.
- Software compatibility can also be defined as the ability of a software system, application or program to run on different operating systems.
- It happens very rarely that a fully compiled software system or application is able to run on many different CPU architectures.
- Usually a software system or application is developed and compiled for various different CPU architectures as well as operating systems to allow the software system to be compatible with the different kinds of operating and CPU systems.
- In contrast to compiled software systems or applications, interpreted software can easily run on many different operating systems and CPU architectures.
- But, this can happen only if the interpreter is available for that particular CPU architecture or the operating system as the case may be.

Software incompatibility is quite common with fresh releases of any software system or application. It often occurs when new software released for a newer version of a CPU architecture or operating system is incompatible with an older version of that architecture or operating system.


- This is because the software system or application might be lacking some of the features and functionalities required to make it compatible with the wanted CPU architecture or operating system.
- There’s another concept that comes into the scenario of software compatibility, called “backward compatibility”.
- The software systems and applications which are able to work with the older versions of CPU architecture or operating systems are said to be backward compatible.

Apart from software compatibility we also have hardware compatibility.
- It can be defined as the compatibility of the hardware components of a computer system with a particular CPU architecture, operating system and other elements such as the bus and motherboard. Compatible hardware does not necessarily give its stated optimum performance.
The best example is given by RAM chips.

- Hardware can be compatible only with those operating systems for which kernel and device drivers are available. For example, hardware components built for Mac OS do not work with the Linux operating system.
- Compatibility is essential and useful, but it is very difficult to keep extraneous features and functionality in software and hardware systems over the long term just for the sake of compatibility.

- Compatibility is the ability of the software system or device to work with another system or device.
- Compatibility is concerned with various degrees of partnership among the software and hardware components of a system.
- Two devices or programs are said to be compatible if they respond to the software and hardware commands exactly in the same way.
- Some components achieve compatibility by making the software system believe that they are a different machine.
- Such a process is called emulation. It is important to note that hardware compatibility does not always extend to expansion slots.


Thursday, December 8, 2011

What are different characteristics of usability testing?

Usability testing can be defined as a technique used in user-centered interaction design to evaluate a software system, application or product by testing it on the product’s users. It can be thought of as an irreplaceable usability practice because it gives direct reports on how real users use the software system, application or product.

From the above, we can say that usability testing is in sharp contrast with usability inspection methods.

- In usability inspection methods, experts use several different methods to evaluate and check a GUI (graphical user interface) without involving real users.
- The main focus of usability testing is on measuring a software system’s or product’s capacity to achieve the aims and objectives set for it by humans.

The best examples of software systems or products that commonly benefit from usability testing are web sites and web applications, computer user interfaces, documents, databases and devices.

- Usability testing usually measures usability, i.e., the ease of use of a specific object or group of objects.
- General human-computer interaction research differs from this in that it attempts to formulate universal principles.
- Usability testing is basically a black box testing methodology.
- It aims at observing people using the software system, product or application in their own way and discovering new errors and bugs.
- They can also find some ways for the improvement of the software system or the application.

Usability tests determine how well the system performs in the following four areas when tested with usability test cases:

1. Efficiency or performance:
This includes how much time the software system takes to respond, and in how many steps it completes the assigned task.

2. Accuracy:
This includes determining the number of mistakes the users made, and whether they were recoverable or caused the software system to fail.

3. Recall:
This includes determining what the user remembers afterwards, i.e., after the time of usage or after periods of non-usage.

4. Emotional response:
This involves determining the users’ level of satisfaction with the completed tasks. Has the person’s confidence level increased or decreased? Is the user stressed? Is the user confident enough to recommend this software application to others?
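The four measures above can be summarised from recorded test sessions. The sketch below is illustrative only; the session fields and values are invented, and a real study would collect them with timers, observer logs and questionnaires.

```python
# Invented records, one per usability-test participant.
sessions = [
    {"seconds": 42, "errors": 1, "recalled_steps": 5, "satisfaction": 4},
    {"seconds": 55, "errors": 3, "recalled_steps": 4, "satisfaction": 3},
    {"seconds": 38, "errors": 0, "recalled_steps": 6, "satisfaction": 5},
]

n = len(sessions)
efficiency = sum(s["seconds"] for s in sessions) / n       # 1. performance
accuracy = sum(s["errors"] for s in sessions) / n          # 2. mistakes made
recall = sum(s["recalled_steps"] for s in sessions) / n    # 3. what users remember
emotion = sum(s["satisfaction"] for s in sessions) / n     # 4. satisfaction (1-5)

print(f"avg time {efficiency:.1f}s, errors {accuracy:.1f}, "
      f"recall {recall:.1f} steps, satisfaction {emotion:.1f}/5")
```

Turning observations into numbers like these is what separates usability testing from simply collecting opinions.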

Some people have a misconception about usability testing.
- They think that simply collecting opinions on some software system or documentation is called usability testing.
But this is absolutely wrong.

- Gathering of opinions on some object is simply called qualitative research or market research.
- It’s important to clear up the concept of the usability testing.
- It can be thought of as testing involving systematic observation under conditions controlled by experts.
- This is done basically to have an understanding about how people use that particular software system or application.
- But usually usability testing is used in combination with qualitative research.
- This helps in better understanding of users’ expectations.
- It involves watching people use that software system or the product for its intended objective.
- For carrying out a successful usability test, one needs to create a realistic environment in which the user or tester performs the tasks the software system supports, while observers watch the testing and record the results and observations.
- Other forms of feedback are also gathered, such as pre- and post-test questionnaires, scripted instructions, customer feedback and paper prototypes.


Wednesday, December 7, 2011

What are different characteristics of software performance testing?

Software performance is indeed an important part of software engineering and of the software development plan. Performance testing can be defined as testing carried out to determine the quality and stability of a software system’s response under a certain workload. It can also be used to examine other qualitative aspects of the software system such as scalability, reliability, security, stress handling and resource usage.

In fact, software performance testing is essentially a part of performance engineering. It is a crucial testing methodology that is gaining popularity day by day. It seeks to raise the standard of the performance factors in the design of the software system, and it is also concerned with the architecture of the internal structure of the software system or application.

Performance testing tries to build excellent performance into the architecture and design of the software system before the actual coding of the application begins. Performance testing consists of many testing sub-genres.

Few have been discussed below:

- Stress testing
This testing is done to determine the limits of the capacity of the software application. Basically it checks the robustness of the application under heavy loads, i.e., loads above the stated maximum limit.

- Load testing
This is the simplest of these tests. It is usually done to check the behavior of the application, software or program under different amounts of load. The load can be several users using the same application, or the difficulty level or length of the task. A time is set for task completion, and the response times are recorded simultaneously. This test can also be used to test databases and network servers.

- Spike testing
This testing is carried out by suddenly spiking the load on the application (for example, a sharp increase in the number of users) and observing in each case whether the application is able to take the load or fails.

- Endurance testing
As the name suggests, this test determines whether the application can sustain a specific load for a certain time. It also checks for memory leaks, which can lead to application damage, and watches for performance degradation. Throughput is checked at the beginning, at the end and at several points during the test, to see whether the application continues to behave properly under sustained use or crashes.

- Isolation testing
This test is basically done to check for the faulty part of the program or the application software.

- Configuration testing
This testing tests the configuration of the software application. It also checks the effects of configuration changes on the application and its performance.

Before carrying out performance testing some performance goals must be set since performance testing helps in many ways like:
- It tells us whether the application meets the performance criteria or not.
- It can compare the performance of two software applications.
- It can find faulty parts of the program.

There are some considerations that should be kept in mind while carrying out performance testing. They have been discussed below:

- Server response time:
This is the time taken by one part of the application to respond to a request generated by another part of the application. The best example of this is HTTP.

- Throughput
This can be defined as the highest number of concurrent users that the application is expected to handle properly.
A high level plan should be developed for performing software performance testing.
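Response time and throughput can be measured together in a simple driver loop. In this sketch, `serve()` is a stand-in for a real request handler (for example an HTTP endpoint); the sleep and the half-second test window are invented for illustration.

```python
import time

def serve():
    # Stand-in for real request handling in the system under test.
    time.sleep(0.005)

start = time.perf_counter()
count = 0
while time.perf_counter() - start < 0.5:   # drive load for half a second
    t0 = time.perf_counter()
    serve()
    response_time = time.perf_counter() - t0
    count += 1

elapsed = time.perf_counter() - start
print(f"throughput ~ {count / elapsed:.0f} requests/s, "
      f"last response {response_time * 1000:.1f} ms")
```

The two numbers are linked: as response time grows under load, the requests-per-second the system can sustain necessarily falls, which is why both belong in the performance plan.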

