
Wednesday, January 28, 2009

Properties of a test / QA / QE Manager

Testing is a critical part of the overall software development process, and it is very important that the testing environment have the right mix of aggression and thoroughness. A large part of this attitude comes from the person who leads the testing team. So, what makes a good QA or Test manager?
There are many attributes that a good test or QA manager should have. Here are some of them:
• The test manager should be very familiar with the software development process. This is the only way that the rest of the testing team can develop a feel for when they should be doing what activity.
• The test manager has to be able to ensure that the overall enthusiasm of the team remains high, and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems). People should be made to feel that they have an important role in ensuring that customers get software that works well.
• The test manager should be able to promote teamwork to increase productivity. Teamwork between the members of the testing team is critical, given that each of them may handle a separate area, and may have several elements of intersection. In addition, each person can have a different field of specialization, and together they can cover a large area.
• The test manager should be able to promote cooperation between software, test, and QA engineers. This is not so easy sometimes, but is very critical. It is a close interaction between dev and QE that results in a deeper understanding of where software can go wrong.
• The test manager must have the diplomatic skills needed to promote improvements in QA processes. Sometimes the necessary software and hardware can be expensive, and management may not really understand or appreciate the need for such purchases; it is in such cases that the test manager must be able to explain and justify them.
• The test manager must have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to. It is the test manager who is responsible for quality.
• The test manager must have people judgement skills for hiring and keeping skilled personnel.
• The test manager must be able to communicate with technical and non-technical people, engineers, managers, and customers.


Tuesday, January 13, 2009

Details (sub-parts) of a test plan!

A software test plan is a document that is critical to any software project. This is the document that serves as the reference for the process that QE follows during testing. Having a software test plan is a critical requirement of quality processes, and the document itself takes some time to prepare (given its importance for the overall process). So let us talk in some more detail about what a test plan should be like:
The software project test plan document describes properties of the testing effort such as the objectives, scope, approach, and focus. One advantage of the process of preparing a test plan is that it provides a useful way to think through the efforts needed to validate the acceptability of a software product, and the consequence of this effort is that such a completed document helps people outside the test group understand the 'why' and 'how' of product validation. One major need for such a document is that it should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers: Which is the software for which this document captures the test process
• Revision history of document including authors, dates, approvals: A document could go through many versions, so this field captures the current version number
• Table of Contents: Very useful, and needed for most documents
• Purpose of document, intended audience
• Objective of testing effort: What should be the goal of the testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.: A test plan by itself is not complete, since a project has different areas covered by different documents
• Relevant standards or legal requirements
• Traceability requirements: Traceability determines how the various requirements are mapped to the different test plans
• Relevant naming conventions and identifier conventions: Very useful for people not involved in the preparation of this test plan
• Overall software project organization and personnel/contact-info/responsibilities: Such a section provides contact details of the key people in the group
• Test organization and personnel/contact-info/responsibilities: The same as above, but covers people in the testing organization
• Assumptions and dependencies: Assumptions can make a lot of difference to the success and failure of a project (and its test plan) and need to be carefully validated
• Project risk analysis: Risk analysis provides a good list of items that could cause risk to the project
• Testing priorities and focus: Any testing process has certain areas of focus (high risk areas, high impact areas, high change areas), and these need to be highlighted
• Scope and limitations of testing: Some parts of the software may not be covered by testing, and this section documents those limitations
• Test outline - a breakdown of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
• Data structures - Outline of data input, equivalence classes, boundary value analysis, error classes
• Details of the Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems; all these items should be detailed so that anybody picking up the test plan will have enough information
• Test environment validity analysis - differences between the test and production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes: CM stands for Configuration Management
• Test data setup requirements
• Database setup requirements: If the software requires a database, then these instructions will be needed
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs: For some cases, people doing the testing may need some special training
• Test site/location: Where will the testing be done? For some specialized equipment, the testing would need to be done at the location where the equipment is based.
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues - These are very important from a legal point of view
• Open issues
• Appendix - glossary, acronyms, etc.
All of these sections, when completed, will provide a test plan that should be the single document guiding the testing process.
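As a rough illustration, the section list above can be treated as a checklist so that a draft plan can be checked for missing sections before review. This is a hypothetical Python sketch (the section names and the example plan are made up for illustration, not a standard format):

```python
# Hypothetical sketch: a test plan held as a dict of section -> content,
# checked against a required-section list drawn from the outline above.

REQUIRED_SECTIONS = [
    "Title",
    "Software identification (version/release)",
    "Revision history",
    "Objectives of testing effort",
    "Test environment",
    "Entrance and exit criteria",
    "Open issues",
]

def missing_sections(plan: dict) -> list:
    """Return required sections that are absent or empty in the plan."""
    return [s for s in REQUIRED_SECTIONS if not plan.get(s)]

# An example draft plan with several sections still to be written.
draft = {
    "Title": "Payments module test plan",
    "Software identification (version/release)": "v2.1",
    "Objectives of testing effort": "Validate refund workflows",
}

print(missing_sections(draft))
```

A check like this is easy to run as part of document review, so that gaps in the plan surface early rather than during testing.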


Tuesday, January 6, 2009

What to do when there is not enough time for thorough testing?

What if there isn't enough time for thorough testing? The first answer (applicable in a more ideal world) is to not be in such a situation. If you are releasing software that is not thoroughly tested, then you are potentially sitting on dangerous ground, not knowing when and where the software could fail. However, there are cases when you are not able to spend as much time as you would like in testing the application. The rest of this article covers the right questions to ask when you need to determine what to do when there is not enough time for thorough testing.
Use risk analysis to determine where testing should be focused. If the failure in a specific area could lead to a higher impact, then testing that area becomes more important.
In an ideal world, you have the time and ability to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, but in the real world, there are constraints. Given this, risk analysis is appropriate to most software development projects. Risk analysis however is not so simple, and is a specialist field by itself requiring judgement skills, common sense, and experience. (There are training courses and other formal mechanisms that can train people to become more knowledgeable in the area of risk analysis). Some of the factors or considerations that are useful when deciding what to test when there is not enough time for thorough testing:
• Which functionality is most important to the project's intended purpose? This is a business decision and would need consultation with business analysts.
• Which functionality is most visible to the user? Users tend to be more concerned about things they can see, and those should seem to work perfectly.
• Which functionality has the largest safety impact? In this world, with legal worries about impact of software, it is helpful to make sure that areas of the application that can impact safety (human, structure, financial information) need to be tested in more detail.
• Which functionality has the largest financial impact on users? Users get really worried if there is a perception that a software flaw can have a financial effect.
• Which aspects of the application are most important to the customer? From the above points, it should be clearer which areas a tester should focus on: the things that matter to customers.
• Which aspects of the application can be tested early in the development cycle? In any development cycle, there will be parts that will be developed first, and there will be parts that will be developed later on in the development cycle. It is necessary to understand which are the items that will be available early.
• Which parts of the code are most complex, and thus most subject to errors? A part of any design and development process is the estimation of which workflows and areas of code are complex, where there is a greater chance of errors.
• Which parts of the application were developed in rush or panic mode? This is not a scenario that most people would like to see, but in rushed projects, there are parts which are developed faster (could be parts that are developed near milestones, could be components that are based on existing code and where there is a perception that this area need not have much focus).
• Which aspects of similar/related previous projects caused problems? Heuristics is a big help. Working on the basis of previous cycles and evaluating problems that came in similar situations gives a good overall pointer.
• Which aspects of similar/related previous projects had large maintenance expenses? There are many projects which are in maintenance mode, and for which there will be a large amount of data regarding problems that have cropped up, and error prone zones of the application.
• Which parts of the requirements and design are unclear or poorly thought out? Such a situation is more difficult to evaluate; the common thought in such cases is to deny that any implementation has not been well thought out, and so on. One needs to probe much deeper to find answers.
• What do the developers think are the highest-risk aspects of the application? Typically, the people who have actually developed the application are the ones who have the greatest idea about which area of the application is the most risk-prone. The dev and QE need to spend time in discussion so as to make sure that the most risky parts of the application are identified and tested.
• What kind of problems would cause the worst publicity? This is a difficult area. A part of the application that prints names improperly could be a trivial flaw, but could cause the application to become a laughing stock.
• What kinds of problems would cause the most customer service complaints? Customer service complaints cost money due to the need to maintain an active customer service team, and can quickly eat into revenue.
• What kinds of tests could easily cover multiple functionalities? Such a situation is most welcome. If tests can be tweaked in a way that the same test or series of tests can help in the testing of multiple areas of the application, that is great.
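One simple way to turn answers to these questions into a test order is a basic risk matrix: score each area of the application for likelihood of failure and impact of failure, and test the highest-scoring areas first. A minimal sketch, assuming a 1-5 scale; the area names and scores here are made up for illustration:

```python
# Hypothetical sketch of risk-based test prioritization:
# each area gets a likelihood-of-failure and impact-of-failure score (1-5),
# and areas are tested in descending order of likelihood * impact.

areas = [
    # (area, likelihood, impact)
    ("payment processing", 3, 5),
    ("report formatting",  4, 2),
    ("login/session",      2, 5),
    ("help screens",       2, 1),
]

def prioritize(areas):
    """Sort areas by risk score (likelihood * impact), highest first."""
    return sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

for name, likelihood, impact in prioritize(areas):
    print(f"{name}: risk={likelihood * impact}")
```

The scoring itself is where the judgement, experience, and developer input discussed above come in; the arithmetic only makes the resulting priorities explicit and easy to defend.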

If you can add to this article, please add a comment.
