

Wednesday, November 30, 2011

What are different characteristics of non-functional testing?

Non functional testing can be defined as testing a software system or application against all of its non functional requirements. The names of these requirements often overlap, and so they are frequently used interchangeably. Non functional testing includes various testing aspects; a few are listed below:
- Compatibility testing
- Baseline testing
- Compliance testing
- Endurance testing
- Documentation testing
- Internationalization testing
- Localization testing
- Load testing
- Recovery testing
- Performance testing
- Security testing
- Resilience testing
- Scalability testing
- Usability testing
- Volume testing
- Stress testing

In non functional testing, the software system or application is typically tested against the specifications and conditions listed by the client or customer. It is entirely based upon test cases derived from the specifications and requirements stated by the customer or client.

Apart from the aspects listed above, non functional testing covers several other areas. They are listed below:
- Ergonomics testing
- Migration testing
- Data conversion testing
- Penetration testing
- Installation testing
- Operational testing
- Readiness testing
- Application security testing
- Network security testing
- System security testing

Non functional testing is carried out only after functional testing has been completed successfully on the software system. Earlier, non functional testing was not considered important, but software engineers eventually realized its value and began to concentrate on external qualities of the whole software and hardware system, such as reliability, interface behavior, and configuration. Thus non functional testing came on the scene, and it now holds great importance in the field of software development.

Comparison between functional and non-functional testing
- Functional testing concentrates on, and gives coverage for, the relevant functionality of the software system or application, whereas non functional testing focuses primarily on qualities that lie outside any specific function. The best example is graphical user interface (GUI) testing, also known as look-and-feel testing.

Generally, non functional requirements include performance, volume, stress, load, security, recovery, and so on.

- Manual testing, as an aspect of non functional testing, relates to testing functionality that is simply not mentioned in the functional requirements documentation. This is how manual testing and non functional testing are related to each other.

- Functional testing relates to the business documents, requirements documents, test plans, and exact mapping of the variables. Non functional testing stands in great contrast to this.

- Functional testing deals with the stability of the software system or application; for example, it checks how the system responds to invalid data input for some function. Non functional testing is more of an ad hoc kind of testing, but it also checks stability.

Non functional testing tests the requirements of a software system that do not relate to the functionalities of the software system or application. These non functional requirements include failover and recovery, scalability, performance, stress, security, and so on.

Non functional testing tests the features, specifications, and stability of the software system that do not correspond directly to the business-related functionalities, i.e., the functional requirements, but are equally necessary. Installation testing and look-and-feel checks are typical examples.
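Performance requirements like those above can be checked with a very small harness. The sketch below is a minimal, hypothetical example; the `process_order` function and the 0.1-second limit are assumptions for illustration, not from the original post:

```python
import time

def process_order(order):
    # Hypothetical operation standing in for the feature under test.
    return sum(item["qty"] * item["price"] for item in order)

def check_response_time(func, arg, limit_seconds):
    """Return True if a single call meets the agreed performance limit."""
    start = time.perf_counter()
    func(arg)
    elapsed = time.perf_counter() - start
    return elapsed <= limit_seconds

order = [{"qty": 2, "price": 9.5}, {"qty": 1, "price": 3.0}]
print(check_response_time(process_order, order, limit_seconds=0.1))
```

In a real project the limit would come from the client's stated non functional requirements, and the measurement would be repeated under realistic load rather than for a single call.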


What are different objectives of software testing?

No process in the world is carried out without a purpose, and every kind of software testing has some objectives; this article discusses those objectives.
Software testing has several objectives, decided on the basis of the expectations of the software developers.

- Software testing is expected to distinguish between validation of the software system and the defects present in it.
- It is necessary to describe the principles on which the software works and processes data.
- It is also important to know the principles of the different kinds of testing.
- Before creating test cases, one should decide on a proper strategy to follow so as to achieve the desired objectives.
- The tester should understand the characteristics and behavior of the tools being used for test automation.
- Before testing any software system, the tester should know the problems that can cause the system to fail; otherwise it will be difficult to prevent the potential harm.
- When the software developer pens down the objectives of testing, he should keep in mind the requirements of the customers.
- Apart from the requirements, it is also necessary to have knowledge of the "non requirements" of the users.

Both these requirements and non requirements form a major and important part of the objectives of testing. In fact, we can say that the vast majority of testing objectives are based on the requirements and non requirements of the user.

There is one more kind of requirements called the missing requirements.
- Missing requirements are requirements that are needed but are absent from both the customer's requirements list and the non requirements list.
- Only the software developer or the tester can figure out the missing requirements.
- These missing requirements also form a small part of the objectives of the testing.
- There are some requirements needed for the software system but they are almost impossible to implement.

The objectives of software testing are stated below clearly and in detail:
- To check whether the system is working as required or not.
- To find as many defects as possible; testing can demonstrate the presence of errors, though it can never prove the software entirely free of them.
- To certify that the particular software system is correct to the best knowledge of the programmer and the tester.
- To certify that the software can be used without any fear of losing data or damage.

For achieving the objectives, testing can be done in following two ways:

Negative testing
- This testing tests for the abnormal or negative operations in the software system.
- This is carried out by using illegal or invalid data.
- In negative testing, the tester intentionally tries to make the things go wrong and checks what happens then.
- Based on the observations, further improvement is made. Negative testing checks whether the program crashes, whether it does anything unexpected, and whether the software system still achieves its target.

Positive testing
- In this kind of testing, the software system is operated normally with correct data input values.
- Proper test cases are used.
- This testing methodology includes testing of system software at the boundaries of the program.
- This is done to determine the correctness of the program. The actual result is compared with the expected result to determine whether the program is behaving normally, whether the results coincide with the expected results, and whether the software system is still functioning properly.

Not much to our surprise, negative testing has a positive side: it exposes flaws, errors, and discrepancies before they show up in front of the user. After all, a test is regarded as good only if it can make the software system fail.
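The two approaches above can be sketched with a small hypothetical `divide` function: the positive test feeds valid input and compares actual against expected output, while the negative test deliberately feeds illegal input and checks that the failure is controlled.

```python
def divide(a, b):
    """Hypothetical function under test: divides a by b."""
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

# Positive test: normal operation with correct data input values.
assert divide(10, 2) == 5.0

# Negative test: intentionally make things go wrong with invalid data,
# then check that the program fails in the expected, controlled way.
try:
    divide(1, 0)
    print("negative test failed: no error raised")
except ValueError:
    print("negative test passed: invalid input rejected")
```

A good negative test is one that would catch the crash or silent misbehavior before a user ever sees it.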


Tuesday, November 29, 2011

What are different characteristics of integration testing?

Sometimes abbreviated as "I&T", integration testing is one of the most important kinds of testing in the software world: it tests units as a group. That is, the units or modules are combined and tested together to determine whether they work in collaboration with each other.

- Integration testing is carried out after unit level testing but before validation testing.
- Units or modules that have already been unit tested are supplied as input; they are conjoined into groups, integration is performed over them, and the output is produced.
- Integration testing examines the functionality, reliability, performance, and requirements of the grouped modules under test.
- It is implemented through the interfaces of the units with the help of black box testing techniques.
- During integration testing, inter process communication between the units is tested, and the individual subsystems are exercised through their input interfaces.
- Like other kinds of testing, integration testing requires test cases to test the various aspects of its input unit aggregates.
- Integration testing works on a "building block" idea, in which verified and examined unit aggregates are added to a verified base, which is the software system itself.

There are various kinds of integration testing techniques. Three major techniques have been discussed below:

- Big Bang:
In this type of integration testing, all of the grouped units or modules are joined together to form the complete, finished software system, and then integration testing is carried out for the whole system in one go. The big bang technique is very effective when the software developer wants to save time; it is a true time saver. But it has a disadvantage: if the unit tests have not been recorded and carried out properly, the whole integration testing process may become more complicated and difficult to untangle, and may prove to be a hindrance between integration testing and its goal.

- The big bang technique has a distinct testing method called "usage model testing".
- It works for both hardware integration testing and software integration testing.
- It aims to implement user-like workloads in well integrated, user-like environments.
- This methodology involves proving the environment first and later proving the individual units by exercising their functionality or usage.
- This can be thought of as an optimistic approach to integration testing.
- In contrast, the conventional approach, based upon the idea of "isolate and test", requires more hard work as the problems encountered are more numerous.

But the big advantage here is that usage model testing provides good test coverage, making it more efficient than the other techniques.

- Top down integration testing:
In this type of integration testing, the modules at the topmost position are tested first, and testing proceeds down through the branches of those modules or units until the end of the program is reached.

- Bottom up integration testing:
In this type of integration testing, the module aggregates at the lowest level are tested first, then the upper modules, and so on until the topmost module is tested. This kind of testing is helpful only when all the modules have been developed and are ready for integration.

- Sandwich integration testing:
This integration testing technique involves combination of top down and bottom up testing.
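The bottom-up approach described above can be sketched roughly as follows; the `TaxCalculator` and `InvoiceService` modules are hypothetical stand-ins, not from the original post:

```python
class TaxCalculator:
    """Lowest-level module: assumed already unit tested in isolation."""
    def tax(self, amount, rate=0.10):
        return round(amount * rate, 2)

class InvoiceService:
    """Upper-level module that depends on TaxCalculator."""
    def __init__(self, calculator):
        self.calculator = calculator

    def total(self, amount):
        return amount + self.calculator.tax(amount)

# Step 1 (bottom-up): verify the lowest-level module first.
calc = TaxCalculator()
assert calc.tax(100) == 10.0

# Step 2: integrate upward and test the combined behavior
# through the interface, black box style.
service = InvoiceService(calc)
print(service.total(100))
```

Each verified aggregate becomes the base on which the next level up is integrated and tested, matching the "building block" idea mentioned earlier.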


Friday, November 25, 2011

What are different characteristics of visual testing?

Visual testing is a frequently used technique for testing software. But what is it, actually?

The visual testing technique is categorized under non destructive testing, which includes several other techniques as well. As the name suggests, non destructive testing techniques do not involve invasive checking of the software structure, and neither does visual testing, or "VT" as it is commonly abbreviated.

The name itself suggests that visual testing has everything to do with visual examination of the program or source code. Anything to be tested is first examined visually; the other operations are carried out later. The same procedure is used for software systems and programs: they are first checked visually and later tested with white box testing, black box testing, and so on. Even though visual testing sounds like an unsophisticated method of testing, it is quite effective.

Many errors and flaws in the source code and programs can be spotted during visual testing, and it is more effective when a larger number of professionals carry out the examination. Visual testing checks how sound the program or software application is before it is brought into use. It sounds very simple, but it requires quite a lot of knowledge. Even though it is a very primitive kind of testing, it has a lot of advantages.

Few have been listed below:

- Simplicity
Visual testing is very easy to carry out. One doesn’t require any complex techniques or software.

- Rapidity
Apart from being simple, it is faster in process when compared to other kinds of testing techniques. One doesn’t require any extra efforts.

- Low cost
Visual testing is priced very low. You are charged only for hiring professionals to examine your software and nothing else. If it were examined by the author of the code, there would be no cost at all.

- Minimal training
The individuals or professionals testing the software visually need only minimal training, just enough to spot big blunders in the program.

- Equipment requirement
It requires no special equipment.

Visual testing can be performed almost anytime. You can visually examine a program while simultaneously modifying, manipulating, or executing it.

In contrast to these advantages there are limitations to visual testing. They have been discussed below:

- Visual testing can detect only the mistakes that appear on the surface of the program. It cannot discover the discrepancies hidden in the internal structure of the program.
- The quality of visual testing also depends on the visual acuity of the tester.
- The extent of visual testing depends on the fatigue of the inspector; prolonged testing may give the examiner headaches and eye strain.
- There is also a lot of distraction from the surrounding environment during visual examination. It is impossible for a person to devote himself entirely to visual testing without paying any attention to what is happening around him.

Visual testing holds good when it comes to checking the size or length of the program, to determine its completeness, to make sure the correct number of units or modules are there in the program, and to inspect the format of the program; basically to ensure that the presentation of the program is good.

Visual testing spots the big mistakes. They can be corrected at a very early stage of testing, and this in turn reduces the future workload. Its requirements include:
- A vision test for the inspector
- Measurement of lighting with a light meter

The inspector only needs to establish visual contact with the part of the program to be tested. Visual testing also gives an idea of how to make a program better.



Thursday, November 24, 2011

What are differences between verification and validation?

Verification and validation together can be defined as a process of reviewing, inspecting, and testing software artifacts to determine that the software system meets the expected standards.

Though verification and validation processes are frequently grouped together, there are plenty of differences between them:

- Verification is a quality control process used to determine whether the software system meets the expected standards; it can be done during the development phase or the production phase. In contrast, validation is a quality assurance process: it gives an assurance that the software artifact or system successfully accomplishes what it is intended to do.

- Verification is an internal process whereas validation is an external process.

- Validation refers to the needs of the users ("are we building the right product?"), while verification refers to the correctness of the implementation against the specifications ("are we building the product right?").

- The verification process consists of installation qualification, operational qualification, and performance qualification, whereas validation is categorized into:
prospective validation
retrospective validation
full scale validation
partial scale validation
cross validation
concurrent validation


- Verification ensures that the software system implements all the specified functionality, whereas validation ensures that those functionalities exhibit the intended behavior.

- Verification takes place first and then validation is done. Verification checks for documentation, code, plans, specifications and requirements while validation checks the whole product.

- Input for verification includes issue lists, checklists, inspection meetings, and reviews. Input for validation is the software artifact itself.

- Verification is done by developers of the software product whereas validation is done by the testers and it is done against the requirements.

- Verification is a kind of static testing in which the functionalities of a software system are checked for correctness using techniques like walkthroughs, reviews, and inspections. In contrast to verification, validation is a dynamic kind of testing in which the software application is checked through actual execution.

- Mostly reviews form a part of verification process whereas audits are a major part of validation process.



Wednesday, November 23, 2011

What are different methods of verification and validation?

Verification and validation together can be defined as a process of reviewing, inspecting, and testing software artifacts to determine that the software system meets the expected standards. There are various methodologies for verifying different kinds of data in software applications. The different methods are discussed below:

- File verification
It is used to check the integrity and correctness of a file and to detect errors in it.
- CAPTCHA
It is a challenge-response test used to verify that the user of a website is a human being and not an automated program intended to compromise the security of the system.
- Speech verification
This kind of verification is used to check the correctness of the spoken statements and sentences.
- The VERIFY command in DOS, which checks that data is written to disk correctly.
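File verification is typically done by comparing checksums of the original and the copy. Here is a minimal sketch using Python's standard `hashlib` module; the byte strings stand in for hypothetical file contents:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"important file contents"
copy = b"important file contents"
corrupted = b"important file content!"

# A copy verifies only if its checksum matches the original's;
# any corruption changes the digest and is detected.
print(sha256_of(original) == sha256_of(copy))
print(sha256_of(original) == sha256_of(corrupted))
```

In practice the digest of each file on disk would be read in chunks and compared against a published checksum list.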

Apart from verification techniques for software applications there are several other techniques for verification during the development of software. They have been discussed below:

- Intelligent verification
This type of verification is used to automatically adapt the test bench to changes in the RTL.
- Formal verification
It is used to verify the algorithms of the program for their correctness by some mathematical techniques.
- Run time verification
Run time verification is carried out during execution. It is done to determine if the program is able to execute properly and within the specified time or not.
- Software verification
This verification type uses several methodologies for the verification of the software.

There are several other techniques used for verification in circuit development:
- Functional verification
- Physical verification
- Analog verification



Tuesday, November 22, 2011

What are different characteristics of white box testing?

White box testing, also known as clear box testing, transparent box testing, glass box testing, or structural testing, can be defined as a method for testing software applications or programs.

White box testing includes techniques used to test the program's internal structure and workings, as opposed to its externally visible functionality or the results of its black box tests. White box testing includes designing test cases from an internal perspective of the software system.

Expert programming skills are needed to design test cases from the internal structure of the program, i.e., in short, to perform white box testing. The tester performing white box tests inputs certain specified data to the code and checks whether the output is as expected. White box testing can be applied only at certain levels.

The levels have been given below in the list:
- Unit level
- Integration level and
- System level
- Acceptance level
- Regression level
- Beta level

Even though white box testing can be applied at all six levels, it is usually performed at the unit level, the basic level of software testing.

White box testing is required to test paths through the source code, between systems and subsystems, and between different units during integration of the software application.

White box testing can effectively reveal hidden errors and grave problems. But it is incapable of detecting missing requirements and unimplemented parts of the given specification. White box testing includes four basic and important kinds of testing, listed below:

- Data flow testing
- Control flow testing
- Path testing and
- Branch testing

In the field of penetration testing, white box testing can be defined as a methodology in which the hacker has total knowledge of the system under attack. So we can say that white box testing is based on the question "how does the system work?". It analyzes the flow of data, information, and control, coding practices, and the handling of errors and exceptions in the software system.

White box testing is done to ensure that the system is working as intended. It also validates the implemented source code for its control flow and design, checks the security functionality, and looks for vulnerable parts of the program.

White box testing cannot be performed without access to the source code of the software system. It is recommended that white box testing be performed at the unit testing phase.

White box testing requires the knowledge of insecurities and vulnerabilities and strengths of a program.

- The first step in white box testing is analyzing and comprehending the software documentation, software artifacts, and source code.
- The second step requires the tester to think like an attacker, i.e., to consider the ways in which he/she could exploit and damage the software system.
- In the third step, the white box testing techniques themselves are implemented.

These three steps need to be carried out in harmony with each other; otherwise the white box testing will not be successful.

White box testing is used to verify the source code. Carrying it out requires full knowledge of the logic and structure of the system's code. Using white box testing, one can develop test cases that exercise logical decisions and paths through a unit, operate loops as specified, and check the validity of the internal structure of the software system.
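As a small sketch of branch testing, one of the four kinds listed above, consider the hypothetical `classify` function below. It has two decision points, so full branch coverage needs inputs that drive each decision both ways:

```python
def classify(age):
    """Hypothetical function under test with two decision points."""
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# Branch testing: choose inputs so every branch outcome is exercised.
assert classify(-1) == "invalid"   # first decision taken (true branch)
assert classify(10) == "minor"     # first false, second true
assert classify(30) == "adult"     # both decisions false
print("all branches exercised")
```

Designing these inputs requires reading the code itself, which is exactly why white box testing needs access to the source.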



Monday, November 21, 2011

What are different characteristics of dynamic testing?

Dynamic testing, also known as dynamic analysis, is the part of software testing used to examine the dynamic behavior of a software application or program. Dynamic testing can therefore be defined as testing the response or reaction of the system to the dynamic variables (variables which keep changing with time and are not constant) used in the program.

As the name suggests, dynamic testing is carried out dynamically, unlike static testing. To carry out a dynamic test of any program, one has to compile the software, run it, and work with it. Working with the software involves inputting data values to the variables and checking whether the output is up to the expectations of the programmer.

The actual program output is checked against the desired program output. Input and output is checked for validation of the software. Many methodologies like unit tests, system tests, integration tests and acceptance tests have been developed for dynamic testing.

The idea of dynamic testing is typically based on testing the software during execution of the program and also during its compilation.
- Dynamic testing is the opposite of static testing.
- The software application must actually be compiled, executed, and tested, since dynamic testing is part of the validation process, which in turn is part of the overall verification and validation process.
- There are many methodologies that can be used for testing a program dynamically; if you are not comfortable with one technique, you can go for another.

Functional Test Techniques
- These techniques are commonly known as black box techniques.
- These techniques help in designing test cases which are based on the functions of the software application under test and there is no need to consider the details of the software structure.
- These techniques are used to check for input and expected output.

Black box testing techniques:
There are many black box testing techniques available today. A few are listed below:
- Boundary value analysis
- Equivalence partitioning
- State transition testing
- Syntax testing
- Cause-effect graphing
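Boundary value analysis, the first technique in the list, picks test inputs at and around the edges of each valid range. A sketch with a hypothetical `is_valid_percentage` validator accepting 0 through 100:

```python
def is_valid_percentage(value):
    """Hypothetical validator: accepts integers from 0 to 100 inclusive."""
    return 0 <= value <= 100

# Boundary value analysis: test at, just inside, and just outside
# each boundary of the valid range.
boundary_cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}
results = {v: is_valid_percentage(v) for v in boundary_cases}
print(results == boundary_cases)
```

Defects cluster at range edges (off-by-one comparisons, wrong inequality operators), which is why the boundaries get the test attention rather than arbitrary mid-range values.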

White box testing techniques:
These are also known as structural test techniques. They are used to check the structural design of the software application for flaws and mistakes. There are many white box testing methodologies present today, and they can be used effectively for dynamic testing. Following are some of the white box techniques:

- Branch decision testing
- Branch condition testing
- Branch condition combination testing
- LCSAJ testing
- Modified condition decision testing
- Random testing
- Data flow testing
- Statement testing

Here, statement testing is one of the structural testing techniques. It is used to examine software components decomposed into smaller parts or modules. Statements can be categorized as executable or non executable, and they are tested accordingly. For this technique you need to provide inputs for the software component, some means of identifying the statements to be executed, and the expected outcome for the module or component. Dynamic testing is carried out by domain experts and professionals.



Sunday, November 20, 2011

What do you understand by gray box testing?

Grey box testing (or gray box testing) can be defined as a combination of white box testing and black box testing. That is, in grey box testing, test cases are designed using knowledge of the algorithms and internal structure of the software system, as in white box testing, but the system itself is exercised at the level of black box testing.

It is not necessary for the person who is testing the software system to have full knowledge about the internal structure, source code and algorithms of the program.

Contrary to one misconception, grey box testing is not only about manipulating input data values and checking output values, because the system still has to be tested under black box conditions.

One needs to understand this when integrating two modules of the software system that have been written by two different programmers whose approaches differ a lot; each programmer knows only the interfaces of the two modules under test.

In contrast to this, the modification of a data repository is well considered grey box testing, because in this case only the internal data is modified while the external behavior remains unaffected. Apart from integration testing, grey box testing also includes reverse engineering, which is required to determine error messages and the boundary values for data.

Grey box testing combines the benefits of both white box testing and black box testing. This may seem confusing, but it can be clarified with an example: a person who knows the internal mechanisms and structures of the software, as well as how the system should behave externally, will design better test cases when testing the system from the outside than a person with knowledge of black box testing alone.

Despite all its added advantages, grey box testing is limited to intelligently chosen test cases, and the testing must be performed with limited information about exception handling, data types, and so on. Grey box testing is implemented to find flaws in the design and implementation of the software system.

The developer who uses grey box testing is well versed with the knowledge of system design and system implementation and designs test cases based on that knowledge itself.

The test strategy in grey box testing is based on limited knowledge of the internal structure of the software system. The best examples of such knowledge are an architectural model, a UML design model, and a state model.

Grey box testing techniques are the best techniques for applications related to the internet and the web. They are ideal for modules that are connected through well defined interfaces or are loosely integrated. A major part of grey box testing falls under white box testing, and only a minor part follows black box testing.

During grey box testing, initially the tester implements a limited number of test cases to test the internal structure of that particular software system. Later the tester follows black box testing techniques and feeds inputs to the test cases and checks whether the output is obtained as expected or not. In this kind of testing, inspection of internal structure is fully allowed but, the extent of modification is limited.

In addition to all its other benefits, grey box testing is a great time saver. It is very useful especially when you have a shortage of time for testing the software system.
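The mix of white box knowledge and black box execution described above can be sketched with a hypothetical `BoundedCache` class: knowing the internal eviction rule and size limit guides the test design, but the checks go only through the public interface.

```python
class BoundedCache:
    """Hypothetical module under test: evicts the oldest entry past max_size."""
    def __init__(self, max_size):
        self.max_size = max_size
        self._store = {}  # internal detail known to the grey box tester

    def put(self, key, value):
        if len(self._store) >= self.max_size and key not in self._store:
            oldest = next(iter(self._store))
            del self._store[oldest]
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

# Grey box test: knowledge of max_size drives the design (fill the cache
# exactly to its limit, then overflow it), but the verification uses only
# the public put/get interface, black box style.
cache = BoundedCache(max_size=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)  # should evict "a", the oldest entry
print(cache.get("a"), cache.get("c"))
```

A pure black box tester would not know that exactly three inserts trigger the interesting eviction path; that is the advantage the internal knowledge buys.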




Saturday, November 19, 2011

What is meant by code coverage? What are different types of code coverage?

- Code coverage can be defined as a measure to measure the extent to which the source code of a software system has been tested.
- Code coverage is categorized under white box testing techniques since the inspection of the code is carried out directly.
- Code coverage methodology was initially developed for systematic testing of software system. It was developed by two researchers:Miller and Maloney in year of 1963.
- Code coverage is regarded as one of the important considerations concerning the safety certification in the field of avionics equipment.
- Code coverage is not based on a single idea. There are several different criteria or types to choose from. The criteria are chosen as per the required extent of testing, and one or more coverage criteria (types) can be used at a time.

Different coverage types have been discussed in detail below:


Basic coverage type:
This category again has following criteria:
- Statement coverage: It determines whether or not each node in the program has been executed.
- Function coverage: It determines whether or not each function or sub routine in the program has been called.
- Decision coverage: It determines whether or not each edge in the program has been executed and it also ensures that requirements of every branch have been met.
- Predicate coverage: It is also known as condition coverage. It determines whether or not each and every Boolean expression in the program has been evaluated to both a true value and a false value.
- Predicate/decision coverage: It is a combination of both predicate and decision coverage and determines whether or not both types of coverage are satisfied. The fault injection technique may become necessary here, to ensure that each and every part of the software system gets sufficient coverage.

Modified predicate/decision coverage: It is usually abbreviated as MC/DC. Some applications, like avionics software, are safety critical and are required to satisfy this modified form of predicate/decision coverage. The modification is that each and every individual condition must be shown to independently affect the decision's outcome, and this effect should be separately visible.

Multiple condition coverage:
As the name suggests, this type of coverage involves two or more conditions, and the group of conditions is tested together. Multiple condition coverage determines whether or not all combinations of the conditions in each decision have been tested.
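A small sketch of how these criteria differ in practice, using a hypothetical `grant_access` decision with two conditions (the function and case lists are invented for illustration):

```python
from itertools import product

def grant_access(is_admin, has_token):
    # a single decision made up of two conditions
    return is_admin or has_token

# Decision coverage: the decision as a whole must evaluate both True and False.
decision_cases = [(True, False), (False, False)]  # outcomes: True, then False

# Multiple condition coverage: every combination of condition values is tested.
all_cases = list(product([False, True], repeat=2))  # 2**2 = 4 combinations

outcomes = {case: grant_access(*case) for case in all_cases}
```

Two test cases already satisfy decision coverage here, but multiple condition coverage demands all four combinations; the gap grows exponentially with the number of conditions.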

The above mentioned code coverage types are the most frequently used ones. There are other types which are not so commonly used. They have also been discussed below:
- JJ path coverage:
It determines whether all the jump-to-jump kind of paths have been executed.
- Linear code sequence and jump coverage: It determines whether or not all LCSAJs have been executed.
- Entry coverage:
It determines whether or not every possible call has been executed.
- Exit coverage:
It determines whether or not every possible return of the functions has been executed.
- Path coverage:
It determines whether or not every possible path through a given unit has been executed.
- Loop coverage:
It determines whether or not all the loops have been executed.
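As an illustration of how a statement coverage tool works under the hood, here is a toy tracer built on Python's `sys.settrace` hook; the `classify` function and the line-numbering convention are made-up examples, not a real coverage tool:

```python
import sys

def trace_lines(func, *args):
    """Toy statement-coverage tracer: records which lines of `func` execute."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if frame.f_code is not code:
            return None  # don't trace other functions
        if event == "line":
            # store line numbers relative to the function definition
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):
    if n > 0:                  # relative line 1
        return "positive"      # relative line 2
    return "non-positive"      # relative line 3

covered = trace_lines(classify, 5)     # executes relative lines {1, 2}
covered |= trace_lines(classify, -5)   # adds line 3: full statement coverage
```

Real tools such as coverage.py use the same interpreter hooks, but also map results back to source files and report the percentage of statements reached.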

Testing rarely achieves 100 percent code coverage; full coverage is generally demanded and verified only for safety critical applications. Attaining complete coverage of even one kind is practically very hard: some parts of a program cannot be reached easily and therefore remain uncovered. There are certain things that need to be considered while implementing a code coverage type:

- Requirements of the code coverage for the certification of the finished product.
- Level of code coverage required.
- Testing of code coverage against the tests to verify the requirements.
- Direct generation of object code.




Friday, November 18, 2011

What is static testing and what are its components ?

What is static testing? Static testing can be defined as a kind of software testing methodology in which the software is tested without actually being compiled and executed. Therefore, static testing is just the opposite of dynamic testing, where the software is compiled, executed and then tested. Static testing does not go into the detail of the software application but checks for the correctness of the program or source code, documents and algorithms. Basically, the syntax of the program is checked in static testing.
It is done manually by a team of professionals and qualified individuals in the concerned field. Major errors can be found using this methodology. The writer of the code can himself/herself review the source code to check for errors. There are many techniques followed for static testing, but the most commonly used ones are the following:
1. Code inspections
2. Code walkthroughs and
3. Code reviews
From the idea of black box testing techniques, we can say that the review of specifications and requirements forms an important part of static testing. This is also done manually, though it is a tough task. Static testing forms the verification part of verification and validation. Nowadays there are some methodologies available by which static testing can be made automatic. This can be done by passing the software code through a static testing test suite that consists of a compiler or an interpreter which checks the software code for verification and syntax errors. Errors found during static testing are much easier to correct than the ones that would be found later. Professionals handling static testing are typically testers and application developers. Static testing is only concerned with verification activities.
While carrying out static testing for software applications, certain standards are followed, namely standards for integration, deployment and coding. Usually static testing nowadays is done by some automated program or tool. The analysis done manually by professionals is commonly known as program comprehension, or sometimes program understanding. The quality of analysis varies from professional to professional: some analyze software part by part, while some take the whole software code into consideration. Nowadays reverse engineering and software metrics are considered to be methodologies of static testing. Static testing is popular when it comes to verification of software code, computer systems needing high safety, and the location of potentially harmful code.
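As a toy illustration of such automated static analysis, the sketch below uses Python's `ast` module to inspect source code for unused variables without ever executing it; the rule and the `greet` example are illustrative only, far simpler than a real static analyzer:

```python
import ast

source = """
def greet(name):
    unused = 1
    return "Hello, " + name
"""

# Parse the program text into a syntax tree -- no execution takes place.
tree = ast.parse(source)

# Collect names that are written (Store) and names that are read (Load).
assigned, loaded = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)
        else:
            loaded.add(node.id)

unused = assigned - loaded  # variables assigned but never read
```

Tools like linters and compilers apply many such rules at once, which is why static testing can flag defects before the program is ever run.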
Static testing also involves some formal methods for analysis. In formal methods, the analysis result for a software code is usually obtained by carrying out some rigorous mathematical calculations and methods. Following are some mathematical methodologies that are used under formal static testing:
1. Axiomatic semantics
2. Denotational semantics
3. Operational semantics and
4. Abstract interpretation: this technique is based on the idea that every statement executes based on the mathematical properties and values of its declarations. Of all these, this technique can be regarded as the best.
There are some other techniques apart from mathematical techniques that can be used for formal static analysis. They are called implementation techniques and have been listed below:
1. Model checking: this technique takes into consideration the finite states of the system. If the system does not have a finite state space, it is made finite by using the technique of abstraction.
2. Use of assertions
No matter how many methodologies we use for static testing, there will always be some uncertainty about the execution of the program and its flaws. It cannot be said that after static analysis the program will execute 100 percent properly.





Thursday, November 17, 2011

What is a review and what is the role it plays in the software development process?

Reviews indicate whether a software product is good or bad, thus making it easier for people to judge whether the program is the right choice for them. As far as a program is concerned, it is improved with only those new extensions which have been reviewed properly; otherwise, these extensions are marked as "unsupported". If the extensions have been reviewed, they are marked as "stable and supported" and added to the official directory. Reviewing is an important method or strategy for making a software application or a program even more dependable and secure. Reviewing can be defined as a process of self regulation and evaluation by a team of professionals and qualified individuals in that particular field. Reviewing is essentially needed to know the pros and cons of a program, to improve its performance, and to maintain the standard of the software application. Reviewing also provides the program with some credibility.
Reviews can be classified into many types depending upon the field of activity and the profession involving that particular activity. Peer review is commonly known as software review in the field of computer science and development. Software reviewing is a process in which a software product like source code, a program or a document is first examined and checked by its author and then by his/her colleagues (who are essentially professionals in that field) and other qualified individuals, to evaluate the quality of the proposed software product. According to the capability maturity model, its purpose is to spot and correct flaws and errors in software applications or programs, thus preventing them from causing any trouble during operation. Reviewing is a part of the software development process, used as a tool to identify flaws and correct them as soon as possible so as to avoid potential errors. Reviewing is necessary as it saves trouble by identifying problems early, during requirements testing, which would otherwise be a hectic problem to fix during software architecture testing.
Software reviews are different from other kind of reviews. Software review processes involve the following activities:
1. Buddy checking (unstructured activity) and formal activities like:
2. Technical peer reviews
3. Walk through
4. Software inspections.
Software reviewing is now considered a part of computer science and engineering. The more reviewers there are, the less difficult it becomes to solve a problem. But even with many reviewers and researchers, it is still difficult to find every single small flaw in a huge work piece. Reviewing nevertheless always improves the work and identifies mistakes. Reviewers and the review process are in demand for basically three reasons.
Firstly, the workload of review cannot be directly handled by the team of developers. Even if each individual contributed all his/her time, it would not be enough.
Secondly, even though the reviewers work as a team to find out mistakes, they put out their own opinions about the program.
Thirdly, a reviewer cannot be considered equal to an expert in all the fields concerning that program.
So having more reviewers to review a software artifact becomes necessary. The names and identities of the reviewers are kept secret to avoid unnecessary criticism and cronyism. Reviewing leads to great improvement in the quality of the software product, the readability of the program code, the identification of missing and incorrect references, and the identification of statistical and scientific errors. Software reviewing is like a filter which passes the program through in its best form, to the benefit of the users.

Some great books explaining software reviews:
1. Best Kept Secrets of Peer Code Review: Modern Approach. Practical Advice
2. Software Engineering Reviews and Audits
3. Peer Reviews in Software: A Practical Guide


Tuesday, November 15, 2011

What is Performance testing for software applications?

Performance testing is required in every field. Without some validation through performance testing, quality and success cannot be said to have been achieved. Similarly, in the field of computer science and engineering, performance testing of software applications is of great importance. Performance testing is done to find out the execution speed and time of the program, and to ensure its effectiveness. Software performance testing basically involves quantitative tests that can be performed in a computer lab, for example measuring the number of millions of instructions per second (MIPS) and measuring response time. It also involves tests for qualitative aspects such as scalability, interoperability and reliability.
Stress testing is often carried out simultaneously with performance testing. So finally, we can define software performance testing as testing done in software engineering to measure some qualitative or quantitative aspect of a system under a specific workload. Sometimes it is also used to relate other quantitative and qualitative aspects such as resource usage, scalability and reliability. Software performance testing is a concept of performance engineering, which is very essential for building good software.
Performance testing consists of many sub testing genres. Few have been discussed below:
1. Stress testing: This testing is done to determine the limits of the capacity of the software application. Basically, this is done to check the robustness of the application software. Robustness is checked against heavy loads, i.e., loads above the specified maximum limit.
2. Load testing: This is the simplest of all these testing types. It is usually done to check the behavior of the application software or program under different amounts of load. The load can either be several users using the same application, or the difficulty level or length of the task. A time is set for task completion, and the response timing is recorded simultaneously. This test can also be used to test databases and network servers.
3. Spike testing: This testing is carried out by suddenly increasing (spiking) the load and observing the behavior of the concerned application software in each case, i.e., whether it is able to take the load or whether it fails.
4. Endurance testing: As the name suggests, this test determines whether the application software can sustain a specific load for a certain time. This test also checks for memory leaks, which can lead to application damage. Care is taken to watch for performance degradation. Throughput is checked at the beginning, at the end and at several points of time during the test. This is done to see whether the application continues to behave properly under sustained use or crashes.
5. Isolation testing: This test is basically done to check for the faulty part of the program or the application software.
6. Configuration testing: This testing tests the configuration of the software application. It also checks the effects of configuration changes on the software application and its performance.
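A minimal load-test sketch along the lines of item 2 above, assuming a hypothetical `handle_request` function standing in for the system under test (a real load test would drive an actual server):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(task_id):
    """Stand-in for the system under test (hypothetical)."""
    time.sleep(0.01)  # simulate roughly 10 ms of work
    return task_id

def load_test(num_users, requests_per_user):
    """Run concurrent requests and record the response time of each."""
    def one_request(i):
        start = time.perf_counter()
        handle_request(i)
        return time.perf_counter() - start

    # num_users workers model several users hitting the application at once.
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(one_request, range(num_users * requests_per_user)))

latencies = load_test(num_users=5, requests_per_user=4)
average_ms = 1000 * sum(latencies) / len(latencies)
```

Varying `num_users` while watching the recorded latencies is exactly the "different amounts of load" idea; pushing it far beyond the specified maximum turns the same harness into a stress test.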

Before carrying out performance testing some performance goals must be set since performance testing helps in many ways like:
1. Tells us whether the application software meets the performance criteria or not.
2. It can compare the performance of two software applications.
3. It can find faulty parts of the program.
There are some considerations that should be kept in mind while carrying out performance testing. They have been discussed below:
1. Server response time: This is the time taken by one part of the application software to respond to the request generated by another part of the application. The best example for this is HTTP.
2. Throughput: It can be defined as the highest number of concurrent users that the application is expected to handle properly.

Good book on performance testing on Amazon (link).


What does Scalability in software applications mean ?

Scalability can be defined as the ability of a software application, network, process or program to effectively and gracefully handle an increasing workload while carrying out its specified tasks properly. Throughput gives the best example of this ability of a software application. Scalability as such is very difficult to define; therefore, it is defined along some dimensions. Scalability handling is very much needed in communication areas (like in a network), in software applications, in handling large databases, and it is also an important concept in routers and networking. Software applications and systems having the property of scalability are called scalable: they improve throughput after the addition of new hardware devices, in proportion to the added capacity. Such systems are known as scalable systems.
Similarly, if a design, network, system protocol, program or algorithm is suitable and efficient enough to work well when applied to larger conditions and problems, in which either the input data is large in amount or the problem involves several nodes, then it is said to be scalable. If the program fails when the quantity of input data is increased, the program is said not to scale. Scalability is much needed in the field of information technology. Scalability can be measured along various dimensions, which are discussed in detail below:
1. Functional scalability: In this measurable dimension, new functionalities are added to the software application or the program to enhance and improve its overall working.
2. Geographic scalability: This measurable dimension deals with the ability of the software application or the program to maintain its performance, throughput and usefulness irrespective of how its working nodes are distributed geographically.
3. Administrative scalability: This measurable dimension deals with increasing the number of working nodes in the application software, so that a single difficult task is divided among smaller units, making it much easier to accomplish.
4. Load scalability: This measurable dimension can be defined as the ability of a distributed program to divide further and unite again to take light and heavy workloads accordingly. In other words, the program can adapt itself to the changing load.
There are numerous examples present for scalability today. Some are listed below:
1. The routing table of a routing protocol, which increases in size as the network grows.
2. A DBMS (database management system) is also scalable in the sense that more data can be uploaded to it by adding the required new devices.
3. An online transaction processing system can also be called scalable, because it can be upgraded so that more transactions can be handled easily.
4. P2P or peer-to-peer transfer protocols are scalable: the work is divided among the peers, so capacity grows with the number of peers available. The best example of such a system is the BitTorrent system.
5. The Domain Name System is a distributed system and works efficiently even at the scale of the World Wide Web. It is largely scalable.
Scaling or addition of resources is done basically in two ways. These two ways have been listed below:
1. Scale out or scale horizontally: this method of scaling involves addition of more nodes or work stations to an already divided or distributed software application. This method has led to the development of technologies like batch processing management and remote maintenance which were not available before the advent of this technology.
2. Scale up or scale vertically: scaling up or scaling vertically can be defined as the addition of resources to any single node of the system. The resources can either be CPUs or memory. This method of scaling has led to the improvement of virtualization technology.
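A rough sketch of the scale-out idea: distributing keys across a variable number of nodes by hashing, so that adding nodes divides the per-node load. The key-to-node rule below is illustrative (real systems typically use consistent hashing to avoid wholesale reshuffling when nodes change):

```python
import hashlib
from collections import Counter

def assign_node(key, num_nodes):
    """Deterministically map a key to one of `num_nodes` nodes."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

keys = [f"user-{i}" for i in range(10_000)]

for num_nodes in (2, 4, 8):
    load = Counter(assign_node(k, num_nodes) for k in keys)
    # The maximum per-node load drops roughly in proportion as nodes are added:
    # ~5000 per node with 2 nodes, ~2500 with 4, ~1250 with 8.
```

This is horizontal scaling in miniature: total work is unchanged, but each added node shoulders a smaller share of it.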

Amazon: Book on scalability


Thursday, November 10, 2011

What is reliability in terms of software engineering ?

Reliability is one of the most important aspects when it comes to discussions about a software application or a program. But what does it exactly mean? Reliability can be defined as the strength or solidity of the structure of the software application. It is also a measure of the resilience of the software application. Measurement of reliability shows how much risk there is with the software application; it also measures the number of failures that can happen due to internal defects present in the software application. A software application is tested for reliability to know its probability of failure and crashes, so that errors and defects can be reduced and corrected to the best level possible. There are some aspects of reliability that should be taken care of while testing. They have been listed below:
- Application practices
- Structure and complexity of the algorithms
- Programming practices
- Coding practices
Software reliability can be defined as the probability of failure-free software operation for a certain period of time under defined and controlled conditions. Software reliability affects the reliability of the whole system. Many people confuse software reliability with hardware reliability. Software reliability differs from hardware reliability in the sense that it reflects the perfection of the software and application design, whereas hardware reliability focuses on manufacturing perfection.
The highly complex structure of software is the major source of software reliability problems. To date, no proper qualitative or quantitative methodologies have been designed to measure software reliability without problems. There are several approaches that can be taken into account to improve the reliability of a software application, even though it is difficult to balance the factors of development effort, money and time against the improvement of software reliability.
Software reliability if measured properly can be of great help to software application developers. There are many interactions between Software reliability and other aspects of the software application. The other aspects include the structure of the software program, and the number of tests that the software application has gone through. Through several reliability tests, data in the form of statistics can be obtained and true measure of reliability can be known. From the statistical data it can be easily known where improvement is needed in order to achieve greater reliability.
Different researchers, scientists and developers have their own ways of testing software for reliability. This leads to the conclusion that software reliability testing depends a lot on the one who is performing the test. This makes it a kind of art, for which one has to practice a lot and come up with more creative, new, innovative and practical ideas to test the reliability of a software application. Since software reliability testing techniques and methodologies are weak, one can never be sure that the software being tested is truly reliable in all kinds of conditions and environments.
Testing software for its reliability is no less than solving a hard real life problem. It requires a lot of effort and is time consuming. It also demands a lot of money. Even if we use some other system to verify a software application for reliability, we cannot be sure that the system being used for testing and comparison is itself truly verified and one hundred percent correct. Faults are always present in every software application. There are always bugs, and the software cannot be tested against infinite conditions. That's impossible.


Wednesday, November 9, 2011

Backward compatibility for software applications, what does this mean ?

Backward compatibility can be defined as the quality or ability of a device to work well with input generated by a device of older technology. For example if the latest version of a music player can still play music of old formats and types, the music player is said to be backward or downward compatible. The best examples of backward compatibility are given by communication protocols. Forward compatibility is just the opposite of backward compatibility.
In the context of programming languages, a programming language is said to be backward compatible if its compiler of version "a" is able to read and execute source code or programs written for the older version of the same compiler, i.e., version "a – 1". A technology or an IT product can be called backward compatible if it properly and fully replaces the older device of the same kind. Even a particular data format can be stated as backward compatible, under the condition that a program or message written in that format is still valid under the improved version of the format. For example, the newest version of Microsoft Word should be able to read documents created by previous (possibly many years older) versions of Word.
Backward compatibility can be looked upon as a relationship between two devices having similar attributes. In a layman’s language we can say that a device is called backward compatible if it exhibits all the functionalities of the older device. In the relation of backward compatibility the new or modern version device is said to have inherited all the attributes of the older one. If it does not have those qualities, it cannot be called as a backward compatible device. There are two types of backward compatibility. They have been discussed below:
Binary compatibility or level-one compatibility: It can be defined as the ability of a program to work directly with the new version of the compiler of the language in which it was written, without recompilation or modification.
Source compatibility or level-two compatibility: It can be defined as the ability of a program to work well and effectively with the new version of the compiler of the language in which it was written, after recompilation of the source code, with the condition that the source code itself is not changed.
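A small sketch of backward compatibility in a data format, using a hypothetical JSON document format whose reader still accepts the older version (the format, field names and version numbers are invented for illustration):

```python
import json

def load_document(raw):
    """Reader for a hypothetical document format.

    Version 2 stores a list of 'authors'; version 1 stored a single
    'author' string. The reader upgrades version-1 input on the fly,
    so old documents remain valid (backward compatibility).
    """
    doc = json.loads(raw)
    if doc.get("version", 1) == 1:
        doc = {"version": 2, "title": doc["title"], "authors": [doc["author"]]}
    return doc

old_doc = json.dumps({"version": 1, "title": "Notes", "author": "Ann"})
new_doc = json.dumps({"version": 2, "title": "Notes", "authors": ["Ann", "Bob"]})
```

The new reader inherits the behavior of the old one, just as the definition above requires: dropping the version-1 branch would break every previously written document.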

Many programs and devices use various technologies to achieve backward compatibility. "Emulation" is one such technology: in emulation, the platform of the older software is simulated on the platform of the newer software, thus providing backward compatibility. There are many examples of backward compatibility available. A few have been listed below:
Blu-ray disc players can play CDs and DVDs.
Newer video game consoles are able to support games created for preceding consoles: the Atari 7800 plays Atari 2600 games, the Game Boy Advance plays Game Boy games, the Nintendo DS Lite plays Nintendo DS games, the Nintendo 3DS plays Nintendo DS and DSi games, the PlayStation 2 and PlayStation 3 play original PlayStation games, the PSP plays PSone games, the PS Vita plays PSP games, the Xbox 360 plays Xbox games, the Wii plays Nintendo GameCube games, etc.
Microsoft Windows is made backward compatible with shims, where the newer version of the software is tweaked specifically to work with already released products. For example, when Microsoft released Windows 7, they tested existing software applications that worked with Windows XP and Vista and made changes inside Windows 7 to ensure that it works with these applications as well.
Intel versions of Mac OS X (10.4 through 10.6) are backward compatible with applications built for older PowerPC Macs through Rosetta, which is a binary translation application.
Microsoft Word 2000 is backward compatible with Microsoft Word 97, and Microsoft Office 2008 is backward compatible with Microsoft Office 2007. The other Microsoft Office applications follow the same backward compatibility pattern.
Even some cameras show backward compatibility, like Nikon DSLRs with older Nikon F mount lenses, and Canon APS-H bodies with Canon EF mount lenses, etc.


What are non-functional requirements in the field of software development ?

Non functional requirements are a concept of requirements engineering and systems engineering; a non functional requirement can be defined as a requirement that specifies criteria which can be used to judge the operation of a system and its behavior. There is a great line of contrast between functional and non functional requirements: a functional requirement states the purpose or aim of the system or the software, whereas a non functional requirement states how the system should be. Non functional requirements are none other than the qualities of a software system. Non functional requirements are also called by other names, such as those given below:
1. Quality attributes
2. Quality of service requirements
3. Quality goals
4. Constraints and
5. Non behavioral requirements

Non functional requirements can be divided into 2 main types as discussed below:
1. Execution non functional requirements: This category includes non functional requirements such as usability and security, i.e., all those types of non functional requirements which one can observe at run time.
2. Evolution non functional requirements: This category includes non functional requirements such as maintainability, testability, scalability and extensibility, i.e., non functional requirements which reside in the static structure of the software system.

Numerous examples can be given for non functional requirements. A few have been listed below:
1. Sufficient network bandwidth
2. Audit
3. Accessibility
4. Control
5. Back up
6. Availability
7. Capacity
8. Forecast
9. Current
10. Deployment
11. Dependency on other parts
12. Compliance
13. Configuration
14. Certification
15. Disaster recovery
16. Error recovery
17. Documentation
18. Efficiency
19. Effectiveness
20. Emotional factors
21. Escrow
22. Environmental protection
23. Failure management
24. Legal issues
25. License
26. Extensibility
27. Maintainability
28. Interoperability
29. Network topology
30. Modifiability
31. Operability
32. Open source
33. Response time
34. Platform compatibility
35. Performance
36. Privacy
37. Portability
38. Price
39. Quality
40. Recoverability
41. Reporting
42. Reliability and resilience
43. Resource constraints (memory availability, processor speed)
44. Horizontal and vertical scalability
45. Robustness and safety
46. Standards compatibility with software and tools
47. Stability
48. Supportability and testability

Because of the suffix "-ility" in many non functional requirements, they are also called "ilities". Functional requirements are requirements that can either be met or not met. Non functional requirements, in contrast, are measurable: the extent to which they have or have not been achieved can be measured. This helps to measure the properties of the final software, which could otherwise only be declared a success or a failure. It also makes it easy to know where the software is lagging in quality and can be improved. There is one very important thing about requirements that one has to take care of.

This is the balance between different non functional requirements. It is very important to strike a balance among all the non functional requirements to keep the system working well. Non functional requirements are measurable in terms of both quality and quantity. There are some important and compulsory aspects of non functional requirements, which have been stated below:
1. Documentation
2. User interface and human factors
3. Performance characteristics
4. Hardware considerations
5. System interfacing
6. Error handling and extreme conditions
7. Quality issues
8. Physical environment
9. System modifications
10. Security issues
11. Resources and management issues
Apart from these aspects, there are several other questions that follow, like who is responsible for backing up the system and who is responsible for maintaining it.
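As a sketch of how a non functional requirement becomes measurable, the snippet below checks a hypothetical requirement ("95 percent of requests must respond within 200 ms") against recorded response times; the threshold, the data and the nearest-rank percentile rule are all illustrative assumptions:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of measurements."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical requirement: 95% of requests must respond within 200 ms.
response_times_ms = [120, 90, 180, 190, 95, 130, 150, 110, 170, 140]
p95 = percentile(response_times_ms, 95)
requirement_met = p95 <= 200  # the requirement is measurably satisfied or not
```

Unlike a functional requirement, which simply passes or fails, this check yields a number (`p95`) that shows how close the system is to the target and where improvement is needed.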


Monday, November 7, 2011

What is the difference between functional and non-functional testing ?

Functional testing is a kind of testing whose test cases are based on the specifications of the software component under test. Inputs are fed to the function and the resulting output is examined. In this type of testing, the internal structure of the source code or the program is not known, so functional testing can be considered a kind of black box testing. Functional testing checks a program against the specifications of the program and the design documents. This kind of testing basically involves 5 steps.
The first being the identification of the functions expected to be performed by the software.
The second step involves the creation of data for input according to the specifications of the functions.
In the third step the output after processing of data is determined according to the specifications of the functions.
The fourth step deals with the execution of the cases of the tests.
The actual output is compared with the expected output in the last and final fifth step of the functional testing.
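The five steps above can be sketched as a black box test that exercises the function only through its inputs and outputs. The `discount` function and its specification here are hypothetical examples:

```python
# Black box functional test following the five steps described above.
# The function under test and its specification are hypothetical examples.

def discount(order_total):
    """Function under test: 10% off orders of 100 or more, else no discount."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

# Step 1: identify the function expected to be performed -> `discount`.
# Step 2: create input data according to the specification.
inputs = [50, 100, 200]
# Step 3: determine the expected output from the specification alone.
expected = [50, 90.0, 180.0]
# Step 4: execute the test cases.
actual = [discount(x) for x in inputs]
# Step 5: compare actual output with expected output.
for i, (got, want) in enumerate(zip(actual, expected)):
    status = "PASS" if got == want else "FAIL"
    print(f"case {i}: input={inputs[i]} expected={want} actual={got} {status}")
```

Note that the expected outputs come from the specification, never from running the code; that is what makes the test black box.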

Functional testing forms a crucial part of the software process. What does functional testing do?
- It is used to check performance.
- It tests the graphical user interface (GUI) of the program.
- It requires checking the whole program from one end to the other, which helps to ensure performance and quality.
- It helps in developing programs faster.
- Errors found during functional testing should be corrected with supporting unit tests, then checked once more and verified with a functional test.
- A large number of passing functional tests gives confidence that the program is correct and working.
- It makes it easier to check the program automatically.
- It is used to validate the whole input-to-output conversion process.

To make functional testing more effective, a process of continuous integration and relentless testing should be applied. Functional testing is done according to the business requirements stated in the client's specifications. The various aspects of functional testing are listed below:
1. Smoke testing
2. Sanity testing
3. Unit testing
4. Top down testing
5. Bottom up testing
6. Interface testing
7. Usability testing
8. Regression testing
9. Alpha and beta testing
10. System testing
11. Pre user acceptance testing
12. White box testing
13. Black box testing
14. User acceptance testing
15. Globalization testing
16. Localization testing
In contrast to functional testing, the non-functional testing of a software application tests it for non-functional features and requirements. Sometimes the scope of various non-functional tests overlaps across many non-functional requirements; in such cases the names of the non-functional tests are used interchangeably. The various aspects of non-functional testing are listed below:
1. Documentation testing
2. Compliance testing
3. Baseline testing
4. Compatibility testing
5. Recovery testing
6. Localization testing
7. Performance testing
8. Endurance testing
9. Load testing
10. Internationalization testing
11. Volume testing
12. Stress testing
13. Usability testing
14. Resilience testing
15. Scalability testing
16. Security testing
In addition to the above mentioned aspects, non-functional testing also covers the following:
1. Ergonomics testing
2. Migration testing
3. Penetration testing
4. Data conversion testing
5. Operational testing
6. Network security testing
7. System security testing
8. Installation testing
There is a basic difference between the two types of testing: functional testing tells you what the software does, whereas non-functional testing shows how well the software performs.


Sunday, November 6, 2011

Different licensing situations in software development - Part 7

As a part of this series, I am writing on software licensing models (software licensing and open source), particularly when you use software components inside your software product. For example, you could be using an open source component for parsing XML instead of writing your own software for this purpose. The same applies to any other open source software product that you may want to use.
Now, if you are offering a service (such as an email service, an online photo editor, or some other similar service), then it is easier to use open source software than if you are distributing your product as shrink wrapped software. In the previous post, we talked about how a copyleft license puts terms on the final software: any derivative work based on the copyleft-licensed software must also be released under the same license. Further, these rights cannot be revoked at a later point. So, if a free software license does not ensure that derivative works are distributed under the same license, then it is not a copyleft license.
In addition, copyleft itself has weak and strong variants. Under a weak copyleft provision, if a commercial software product uses a copyleft component, then only if the component itself is changed does the changed code need to be made available for redistribution. Weak copyleft provisions appear in the GNU Lesser General Public License and the Mozilla Public License (so you can find commercial software that incorporates components licensed under the LGPL and MPL), while strong licenses such as the GNU General Public License require the entire incorporating software to be released under the same terms as the component.
What this also means is that if you are working at a commercial software company, then any use of licenses with terms such as copyleft, GPL, or LGPL needs to be looked at very carefully before you go ahead with using such software. Using GPL software would in almost all cases be totally ruled out.
More in the next post ...


Saturday, November 5, 2011

Different licensing situations in software development - Part 6

In the previous post, I was writing about the easy availability of various software licenses, and ended with a listing of the different kinds of open source licenses. In this post, I will continue on the subject of open source licenses, as well as the difference between closed source and open source licenses. Open source software is a slightly vague term, which is further divided into free software and open source software (the difference being that free software focuses on getting the software without charge into the hands of the user, while open source focuses on ensuring that people have access to the source of the software and can make changes).
Here is a listing of all the open source licenses: http://www.opensource.org/licenses/category
Some of the terms used in the licensing debate do not make sense unless you have read some explanations of them. For example, there is a term called "Copyleft". Copyleft is something that is very critical for makers of commercial software, since copyleft not only insists that the software be open source for anybody using it, but also that if anybody modifies or extends the software, the modified versions must be made open source as well. So, if somebody takes that open source software and integrates it into their own software, the license compels them to release the modified version too. This restriction leads most makers of commercial software to avoid the use of copyleft software in their own products. If you are thinking of using copyleft software, please be sure to check the exact terms of the license and also check with your legal team (most company legal teams I know are very hesitant about using copyleft software).
In copyleft software, the initial owner of the software has not waived their rights; instead, they have ensured that there are some restrictions on the usage of that software in commercial software. Read more at this Wiki: http://en.wikipedia.org/wiki/Copyleft

Read more in the next post ..


Friday, November 4, 2011

What is Earned Value Analysis (EVA) in Project Scheduling?

Earned Value Analysis (EVA) provides a quantitative indication of progress, and the project manager uses this method to track the project. Proponents of the earned value system claim that it works for every software project irrespective of the kind of work. As part of this, there is an initial estimate of the total number of hours for the project, with every task being given an earned value based on its estimated percentage of the total effort. In simpler terms, a project manager will be able to determine, through quantitative analysis, how much of the project is actually complete.

As a part of this process, the following steps are needed to be done:
- The BCWS (Budgeted Cost of Work Scheduled) is evaluated for each task included in the project. This estimate is made in terms of person-hours or person-days (if the effort is much more than a few hours). So, for a given work task, its BCWS is the estimated effort.
- The BCWS values calculated for all the tasks are summed up to get a value called the BAC (Budget At Completion).
- The next variable, BCWP (Budgeted Cost of Work Performed), is calculated by summing the BCWS values for all the tasks that have been completed at any point in the schedule.

In simple terms, the difference between BCWS and BCWP is that BCWS is the estimate for all the tasks that were supposed to be done, while BCWP is the sum for all the tasks that were actually completed.

EVA compares the planned amount of work with what has actually been completed, to determine if cost, schedule, and work accomplished are progressing as planned. Work is earned or credited as it is completed.

Earned Value Analysis
- compares like terms and is quick to apply in practice.
- requires the ongoing measurement of the actual work done.
- tasks that have not been started or that have been completed are relatively easy to quantify in terms of earned value.
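The BCWS, BAC, and BCWP calculation described above can be sketched as follows. The task names and person-hour estimates are made-up illustrations:

```python
# Earned Value Analysis sketch: BCWS per task, BAC, and BCWP.
# Task names and person-hour estimates below are made-up illustrations.

tasks = [
    # (task name, BCWS in person-hours, completed?)
    ("requirements", 40, True),
    ("design", 60, True),
    ("coding", 120, False),
    ("testing", 80, False),
]

bac = sum(bcws for _, bcws, _ in tasks)              # Budget At Completion
bcwp = sum(bcws for _, bcws, done in tasks if done)  # value earned so far

percent_complete = 100 * bcwp / bac
print(f"BAC = {bac} person-hours")
print(f"BCWP = {bcwp} person-hours")
print(f"Project is {percent_complete:.1f}% complete by earned value")
```

Here 100 of the 300 budgeted hours have been earned, so the project is about a third complete regardless of how many actual hours have been spent, which is exactly the point of earned value.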


Thursday, November 3, 2011

Different licensing situations in software development - Part 5

In previous posts (License conditions), I have been writing about the need for having the proper software licenses when you are developing a software product and you are using some external software components inside your software.
There are a large number of software components apparently available free for you to start using. Suppose you want to have a database inside your software and don't want to pay the large amounts needed for MS SQL Server or Oracle Database. Or you could need an XML parser, some software for running regular expressions, or many other such things, and when you search for such software, you will find plenty available in the market. When you are starting a software product, there is plenty of temptation to pick up a component that meets your needs and just start using it. I hope that some of the previous posts have explained that there are a number of problems with such an approach.
So, if you do want to pick up an open source component to use in your own software, you need to understand the terms under which the software has been released, and if you don't have a legal person to help you, then you can run into problems. At the very least, you should be able to look at the type of license that comes with the software component and say whether it meets your needs. Here is an explanation of what open source software is, and some of the common types of licenses used with open source software:
Open source software means software that is available in the form of source code; the license that comes with the software gives users permission to modify, change, compile, and in some cases distribute the software source code.
Using open source software (either in the form of the application, or using the source code inside your application) has resulted in a huge amount of savings for consumers, but there are some legal implications of using this code.
Some of the open source licenses in use are:
Apache License,
MIT License,
BSD license,
GNU Lesser General Public License,
GNU General Public License,
Eclipse Public License and
Mozilla Public License

I will continue on this subject in the next set of posts ..


Wednesday, November 2, 2011

How do you define progress tracking for an object oriented project?

For an object oriented project, tracking becomes really difficult, and establishing meaningful milestones is also a difficult task, as there are many things happening at once. For tracking an object oriented project, the following milestones are considered complete when the criteria mentioned below are met:

Milestone for Object Oriented Analysis is considered completed when the following conditions are satisfied:
- Every class is defined and reviewed.
- Every class hierarchy is defined and reviewed.
- Class attributes are defined and reviewed.
- Class operations are defined and reviewed.
- Classes that are reused are noted.
- The relationships among classes are defined and reviewed.
- Behavioral model is created and reviewed.

Milestone for Object Oriented Design is considered completed when the following conditions are satisfied:
- Subsystems are defined and reviewed.
- Classes are allocated to subsystems.
- The classes allocated are reviewed.
- Tasks are allocated.
- The tasks allocated are reviewed.
- Design classes are created.
- These design classes are reviewed.
- Responsibilities are identified.
- Collaborations are identified.

Milestone for Object Oriented Programming is considered completed when the following conditions are satisfied:
- Classes from design model are implemented in code.
- Extracted classes are implemented.
- A prototype is built.

Milestone for Object Oriented Testing is considered completed when the following conditions are satisfied:
Debugging and testing occur in concert with one another. The status of debugging is often assessed by considering the type and number of bugs.
- The correctness of object oriented analysis and design model is reviewed.
- The completeness of object oriented analysis and design model is reviewed.
- Collaboration between class and responsibility is developed and reviewed.
- Test cases designed are conducted for each class.
- Class level tests are conducted for each class.
- Cluster testing is completed and classes are integrated.
- Tests related to system testing are established and completed.

Each of these milestones is revisited, as the object oriented process model is iterative in nature.
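Since each milestone is just a set of criteria that must all be satisfied, the tracking itself can be sketched as a simple checklist. The criteria strings below are abbreviated from the milestone lists above, and the True/False states are hypothetical:

```python
# Sketch: a milestone is complete only when every one of its criteria is met.
# Criteria strings are abbreviated from the milestone lists above; the
# True/False completion states are hypothetical.

milestones = {
    "OO Analysis": {
        "classes defined and reviewed": True,
        "class hierarchies defined and reviewed": True,
        "behavioral model created and reviewed": False,
    },
    "OO Design": {
        "subsystems defined and reviewed": False,
        "classes allocated to subsystems": False,
    },
}

def milestone_complete(criteria):
    """A milestone is done only when all of its criteria are satisfied."""
    return all(criteria.values())

for name, criteria in milestones.items():
    status = "complete" if milestone_complete(criteria) else "in progress"
    print(f"{name}: {status}")
```

Because the process model is iterative, the same checklist would be re-evaluated on each pass rather than filled in once.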


Tuesday, November 1, 2011

How important is the job of issue tracking for a Project / Program Manager ?

In the course of my work, I have had several interviews, and have also interviewed a number of people, mainly for the post of project manager or program manager. One of the most significant questions posed is about the SDLC, and whether people understand what the Software Development Life Cycle is all about.
For me, the life blood of a project (and the issue that can cause the most harm to a project) is effectively tracking the open issues in the project. You know the situation, right? You have a project with a lot of open ends, a lot of design and architectural meetings where questions are posed and answered, and then these are not captured in any sort of minutes. Or you have an important new concept, there are a lot of questions, and a few months later you find that the same questions are being asked again, and people have to scramble to answer them. Further, if some people in the group have changed, then new people have to answer these questions, and there may be changes to the specs or design because of them. Consider an example where a design is done in a specific manner (and not in the blatantly obvious way) because of some limitations. Later, when a design review is done, you can be sure that the same question will be asked (possibly months later), and then there are no easy answers.
Now consider the case where you presented a design concept and some important queries were raised, but in the excitement nobody captured them (or there were queries about some part of the schedule, or about a follow-up with a vendor or even the customer), and nobody followed up to get them answered. Only later is it realized that these were important questions, and the lack of follow-up meant that some problems the queries would have exposed were never really caught.
So, how do you solve this? For a Project Manager or a Program Manager, it is critical that the tracking of open issues and queries be set up as an ongoing process, and that there are no failures in this area. For this purpose, the tool and the process both need to be decided and set in stone. For example, I know people who have bought issue tracking tools, others who use Excel or even a notebook to capture important issues, and still others who use flagging inside Outlook to track open items. And they set aside dedicated time to look at their issues and follow them up.
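Whatever the tool, the underlying process is the same: capture every open query with an owner, and regularly review what is still open. A minimal sketch of that process (the field names and sample issues are hypothetical):

```python
# Minimal open-issue tracking sketch: capture each query with an owner,
# then review what is still open. Field names and sample issues are hypothetical.

issues = []

def capture(summary, owner):
    """Record a new open issue with an owner responsible for follow-up."""
    issues.append({"summary": summary, "owner": owner, "open": True})

def close(summary):
    """Mark a captured issue as resolved."""
    for issue in issues:
        if issue["summary"] == summary:
            issue["open"] = False

def open_issues():
    """The list to review in every status meeting."""
    return [i for i in issues if i["open"]]

capture("Why was the cache designed this way?", "lead architect")
capture("Vendor follow-up on API limits", "program manager")
close("Vendor follow-up on API limits")
for issue in open_issues():
    print(f'OPEN: {issue["summary"]} (owner: {issue["owner"]})')
```

The specific storage (tool, spreadsheet, notebook, Outlook flags) matters far less than the discipline of capturing every issue and reviewing the open list on a schedule.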

