

Thursday, February 28, 2013

How to determine the Operating System support for your product - Part 8

In this series of posts, I have been talking about deciding the Operating System support provided in your applications. In the previous post (Operating System support for your software application - Part 7), I talked about 32 bit vs. 64 bit support, and also about how the support provided by external components can be a huge factor in the operating systems that you support. I took the example of a video application which depends on a large number of external components for codecs, encoders and decoders, for writing to DVD and Blu-ray, and for other parts of the application. If some of these components drop support for an operating system and a component is a critical part of the application, then it may be time to take the decision to drop that Operating System. I know of a number of software applications that finally dropped support for Windows XP because 1) Microsoft is on its way to dropping support for Windows XP, with this support ending in 2014, and 2) a number of external components dropped support for Windows XP, and these were critical enough that the management team of the application finally bit the bullet and dropped support for Windows XP as well.
What happens when you cannot afford to drop support for an operating system even though components have dropped support for it, because the customer profile is such that there is still revenue to be had from customers on that operating system? Well, that is not a very nice place to be, but you still need to take a stand. If the revenue is important, then you will need to support that specific operating system. So what do you do? There are a number of steps that you can take to ensure that your product remains on that operating system.
1. Make another effort to ensure that the external component retains support for that specific operating system. If the company or group providing the external component is not willing to provide full support, ask whether they are willing to maintain it at the level that was previously supported. If this is another group within the company, then the revenue potential provides some leverage to ensure that an escalation can happen and support is maintained, even if at a lower level (only critical bug fixes rather than all bug fixes).
2. The most risky approach: take a chance and go with a component that is not supported by the provider on that specific operating system. The problem in this case is that if some critical problem emerges, things can go out of control very easily and lead to a situation where there are no good options.
3. Look for alternatives. There are very few pieces of functionality that do not have multiple providers, even if the alternative offers less than perfect functionality. If using another component provides a solution, then you should evaluate that component and use it if it meets your purpose (even if it is less than perfect).
4. Prepare for reduced functionality. I have seen many products use such an approach. When there are no alternatives, and it is decided that support for the specific operating system needs to continue, then it may be something as simple as dropping the component which has dropped support for the operating system and shipping the product without the functionality provided by that component, as sketched below. This needs to be communicated to customers as well, so that they know that there will be reduced functionality on that specific operating system.
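A minimal Java sketch of option 4 (all class and feature names here are hypothetical): the feature backed by the dropped third-party component is detected at runtime and disabled where the component is absent, rather than dropping the whole operating system.

```java
// Illustrative sketch of "reduced functionality": if the external component's
// classes are not present (for example because the vendor dropped this OS),
// the corresponding feature is disabled. The class name is hypothetical.
public class BlurayFeature {

    private static final String COMPONENT_CLASS = "com.example.bluray.BurnEngine";

    public static boolean isAvailable() {
        try {
            // Present only in installations where the vendor still ships the component.
            Class.forName(COMPONENT_CLASS);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (isAvailable()) {
            System.out.println("Blu-ray burning enabled.");
        } else {
            // Communicate the reduced functionality to the user, as suggested above.
            System.out.println("Blu-ray burning is not available in this installation.");
        }
    }
}
```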


Wednesday, February 27, 2013

Explain TestOptimal - Web Functional/Regression Test Tool



About TestOptimal Testing Tool

- TestOptimal provides a convenient way to perform automated functional, regression, load and stress testing of web-based applications. 
- It also works for Java applications. 
- The technology behind TestOptimal is MBT (model-based testing) combined with some mathematical optimization techniques. 
- It generates and executes test cases directly from a model of the application. 
- TestOptimal is itself a web-based application. 
- It has facilities for integrating with JUnit. 
- Furthermore, it can be run from within NetBeans and Eclipse.
- Another striking feature of TestOptimal, apart from the underlying technology, is that it models the application with graphs.
- An example of such a graph format is State Chart XML (SCXML). 
- The charts have a drag-and-drop user interface capable of running in standard browsers. 
- TestOptimal has a number of test sequencers that effectively meet the testing needs of different users. 
- Tests are automated in mScript (an XML-based scripting language) or in Java. 
- TestOptimal provides statistical analysis of virtual users and test executions for load testing. 
- TestOptimal can be integrated with other tools such as QTP, Quality Center etc. with the help of its web service interface. 
- TestOptimal supports multiple browsers on a number of platforms such as Unix, Linux and Windows.
- The following constitute this model-based test automation suite for load and performance testing:
  1. TestOptimal Basic MBT
  2. ProMBT
  3. Enterprise MBT
  4. Runtime MBT
- TestOptimal combines model-based testing with DDT (data-driven testing) to provide a sophisticated and efficient test case generation and test automation tool (a small conceptual sketch of the model-based idea follows after this list). 
- With MBT, one can find defects in the early stages of the development cycle, enabling a quick and efficient response. 
- TestOptimal animates the test execution, giving the user insight into the testing. 
- This also enables the user to validate the model visually. 
- It also lets you track requirement coverage.
- The test cases can be visualized with the help of various graphs. 
- A number of algorithms are available for generating the test sequences required to achieve the desired test coverage. 
- The same models and automation scripts can be re-purposed if the user wants to perform load and performance testing.
- TestOptimal helps you cut down the length of the development cycle while still achieving the desired test coverage. 
- This in turn improves your response to frequent changes and makes you more confident about your software.
- According to the vendor, TestOptimal can meet over 90 percent of your coverage requirements and improve release turnaround time.
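To make the model-based testing idea more concrete, here is a minimal, tool-agnostic Java sketch: a tiny state model of a login flow from which a test sequence is generated by a random walk. This only illustrates the concept and does not use TestOptimal's actual API; all names are made up.

```java
import java.util.*;

// Toy model-based testing sketch: a small state model from which a test
// sequence is generated by a random walk. Not TestOptimal's API.
public class MbtSketch {

    // state -> (action -> next state)
    private static final Map<String, Map<String, String>> MODEL = Map.of(
            "LoginPage", Map.of("enterValidCredentials", "HomePage",
                                "enterInvalidCredentials", "ErrorPage"),
            "ErrorPage", Map.of("retry", "LoginPage"),
            "HomePage",  Map.of("logout", "LoginPage"));

    static List<String> randomWalk(String start, int steps, Random rnd) {
        List<String> sequence = new ArrayList<>();
        String state = start;
        for (int i = 0; i < steps; i++) {
            List<String> actions = new ArrayList<>(MODEL.get(state).keySet());
            String action = actions.get(rnd.nextInt(actions.size()));
            String next = MODEL.get(state).get(action);
            sequence.add(state + " --" + action + "--> " + next);
            state = next;
        }
        return sequence;
    }

    public static void main(String[] args) {
        // In a real tool each generated step would be bound to an automation
        // action (for example written in Java or mScript) and executed against
        // the application under test.
        randomWalk("LoginPage", 6, new Random(42)).forEach(System.out::println);
    }
}
```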


Features of TestOptimal

Below are some notable features of this testing tool:
  1. MBT modeling using finite state machine notation.
  2. Superstate and sub-model: with this feature a larger model can be partitioned into smaller, reusable library components.
  3. Graphs: it provides various graphs such as the MSC (message sequence chart), coverage graph, model graph, sequence graphs and so on.
  4. Model import and merge: it supports various modeling formats based on XML and UML XMI, such as GraphML, GraphXML and so on.
  5. Test case generation: it comes with many sequencers such as the optimal sequencer, custom test case sequencer, random walk and so on.
  6. Scriptless data-driven testing.
  7. Scripting offered in mScript and Java.
  8. ODBC/JDBC support: relational databases can be accessed, and operations such as reading, writing, storing and verifying test results can be performed.
  9. Integration with REST web services, JUnit, Java IDEs (specifically NetBeans and Eclipse), remote agents and so on.
  10. Cross-browser testing support on browsers such as Chrome, IE, Opera, Firefox and Safari.






How to determine the Operating System support for your product - Part 7

In the previous post in this series (Operating System support for your software application - Part 6), I focused on the different roles and responsibilities of the stakeholders in the team, primarily Product Management, QE, and the developers. The Product Manager has to take the decision after weighing the various pros and cons of such a move and evaluating the revenue impact; the QE and development teams contribute to this discussion keeping in mind the impact on their effort, and any technical factors that could also influence the decision. In this post, I will talk more about dependencies and also briefly touch on the 32 bit vs. 64 bit discussion.
For some years now, there has been an ongoing discussion about the need to move applications onto a 64 bit architecture and stop supporting the 32 bit architecture. Most people will not understand this discussion, or the reason why it is a top item of discussion for many teams. In near layman's terms, when you state that your application is now 64 bit, it means that it can take inherent advantage of the benefits offered by the new wave of 64 bit operating systems, such as being able to address more memory, along with numerous other technical advantages. Also, most Operating Systems that are now available are 64 bit. So why not go ahead and convert your application to 64 bit? Well, converting your codebase to offer native 64 bit support is a project by itself, requiring a large amount of development and testing time. For teams that have limited resources, making such choices is not easy (and most teams cannot claim to have unlimited resources). In addition, you also need to realize that you would no longer be properly supporting consumers who still run 32 bit operating systems (even though the hardware has supported 64 bit for a long time now), so this is a decision that needs to be taken carefully.
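As a small aside, a Java application can get a rough indication of the architecture it is running on from system properties; the sketch below is illustrative, and the sun.arch.data.model property is HotSpot-specific and not guaranteed to exist on every JVM.

```java
// Rough check of whether the current process is running as 32 bit or 64 bit.
// "sun.arch.data.model" is HotSpot-specific; "os.arch" reports the JVM's
// architecture (a 32 bit JVM on 64 bit Windows will still report "x86").
public class BitnessCheck {
    public static void main(String[] args) {
        String dataModel = System.getProperty("sun.arch.data.model", "unknown");
        String osArch = System.getProperty("os.arch");   // e.g. "amd64", "x86"
        System.out.println("JVM data model : " + dataModel + " bit");
        System.out.println("os.arch        : " + osArch);
    }
}
```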
The other aspect of this post concerns the various components that your application uses. In today's world of software development, it is hard to think of a large software product that the development team has written entirely on its own. Consider that your product is a large video manipulation application. Even though a lot of the workflows will be written by your team, the functionality of a number of sub-areas is better handled by using external components (which could be built by other teams within the company, or by other companies which specialize in such areas). For example, if you are looking at an application that allows users to organize, edit and manipulate videos, you would need support for different video formats, you would need access to different encoders and decoders, and you would need components for creating DVDs or Blu-ray discs as part of the end process. In all such cases, it is far more efficient and effective to use specialized software rather than trying to replicate all of it.
And this is where your dependencies start to dictate matters for you in terms of operating system support. The external components that you use are created by companies that in turn have to take the same decision about operating system support as you do, and they also have a large number of customers on whose basis they take those decisions. It is entirely possible that you end up in a scenario where some key component that you are using drops support for an operating system and, given its criticality in your own application, you are forced to stop support for the same operating system as well.


Tuesday, February 26, 2013

Explain Tellurium - Web Functional/Regression Test Tool


- Tellurium is an automated testing framework developed exclusively for web-based applications.
- It is an open source tool based on the concept of a UI module.
- A group of related UI elements is called a UI module. 
- A UI module usually represents a composite UI object, expressed in a format that mirrors the nesting of the underlying UI elements. 
- UI locators can be built automatically because of the UI module. 
- The tool comes with a Firefox plug-in called TrUMP (Tellurium UI Model Plugin) with which UI modules can be created automatically.
- The framework performs object-to-locator mapping, or OLM (a rough conceptual sketch of this idea appears after this list). 
- It does this automatically at execution time, so the user only needs to define UI objects by their attributes. 
- Another technique used by Tellurium is GLC, the group locating concept, which exploits the information contained in a group of UI components. 
- This information is used to derive the locators.
- It defines a new domain-specific language (DSL) for web testing. 
- UI templates are one of the powerful features of Tellurium. 
- These templates can represent many identical UI elements whose number is only known dynamically at run time. 
- They are quite useful for testing dynamic web elements, for example a data grid.
- Another characteristic feature of Tellurium is that it can compose UI objects nicely into Tellurium widget objects. 
- The widgets thus obtained can be packaged as a JAR file. 
- Once the JAR file is included, each of these widgets can be used as a single Tellurium UI object. 
- The Tellurium core is written in Java and Groovy. 
- The TrUMP plug-in and the Tellurium engine are implemented with JavaScript and jQuery. 
- Users are free to write test cases in any of these languages: Groovy, pure DSL, or Java. 
- Tellurium is a portable framework and runs on top of Selenium. 
- Both the JUnit and TestNG testing frameworks are supported by Tellurium. 
- Tellurium is hosted on Google Code as an open source project.
- It comes with the Apache License 2.0. 
- The credit for the creation of this tool goes to Jian Fang of the Georgia Institute of Technology.
- The flagship project of the Tellurium team is the Tellurium core. 
- The UI modules act as the backbone of Tellurium tests, since they describe the HTML content of the page that is being tested. 
- Unlike Selenium, it does not use CSS selectors or XPath expressions directly to define the UI elements. 
- It uses something better, the DSL, which also defines the relationships among those elements. 
- Either JUnit or TestNG can be used as the testing container, as you prefer. 
- Writing a test case requires writing the UI module in the DSL. 
- This can be done either using TrUMP or manually. 
- The robustness of the tests is further improved by Tellurium's Santa algorithm. 
- This algorithm locates the whole UI module in the DOM at execution time. 
- This feature is only available in versions 0.7.0 and higher.
- In lower versions, the run-time locators are generated by the Tellurium core based on the definition of the UI module. 
- The corresponding Selenium commands are then issued to locate the individual UI elements. 
- Using TrUMP, the UI module DSL can be generated just by clicking on the page elements. 
- The DSL can then be saved to a Groovy class and used by the tests at any time. 
- It is the use of UI modules that makes Tellurium so expressive.
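To give a feel for the object-to-locator mapping idea, here is a minimal Java sketch: a toy "UI module" that maps logical element names to attribute-based definitions and derives locators from them on demand. The class and method names are purely illustrative and are not Tellurium's actual API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of a "UI module": UI objects are defined only by their
// attributes, and concrete locators are derived from the module at run time.
// This is NOT Tellurium's real API; all names are made up.
public class UiModuleSketch {

    private final String moduleXpath;   // locator of the enclosing container
    private final Map<String, Map<String, String>> elements = new LinkedHashMap<>();

    public UiModuleSketch(String moduleXpath) {
        this.moduleXpath = moduleXpath;
    }

    // Define a UI object purely by its attributes.
    public UiModuleSketch define(String name, Map<String, String> attributes) {
        elements.put(name, attributes);
        return this;
    }

    // Derive a concrete XPath locator by combining the container locator
    // with the element's attributes (a crude object-to-locator mapping).
    public String locatorFor(String name) {
        StringBuilder xpath = new StringBuilder(moduleXpath + "//*");
        elements.get(name).forEach((attr, value) ->
                xpath.append(String.format("[@%s='%s']", attr, value)));
        return xpath.toString();
    }

    public static void main(String[] args) {
        UiModuleSketch searchForm = new UiModuleSketch("//form[@id='search']")
                .define("Input",  Map.of("name", "q"))
                .define("Submit", Map.of("type", "submit"));

        // Tests would refer to the logical names; locators are derived on demand.
        System.out.println(searchForm.locatorFor("Input"));
        System.out.println(searchForm.locatorFor("Submit"));
    }
}
```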



How to determine the Operating System support for your product - Part 6

This blog has seen a series of posts on deciding the Operating Systems supported by your application. The previous post (Operating Systems supported by your application - Part 5) talked about the kind of constraints that exist for an operating system that is not supported - whether these prevent the user from installing the application on that operating system, or just give a warning and let the user install on it anyway. This post will talk in more detail about the process by which the various stakeholders in the team come to a decision about the operating systems to support.
The most important stakeholder is the Product Manager. It is the product manager who is responsible for the final state of the product and the system requirements for the product (which include the operating systems to be supported by the application). The Product Manager is also the one who is responsible for the revenue requirements for the product, and supporting or dropping an operating system can change the revenue generated by a product by a few percentage points (and those few percentage points can make a huge difference in terms of whether targets are met or missed). Hence, it is for the Product Manager to take the final call on whether the product should drop a specific Operating System or not. However, it is perfectly fine for team members to provide inputs and constraints to the product manager.
Another important stakeholder is the QE team (the testers). During the testing phase, the team needs to draw up a plan of all the operating systems that need to be supported, decide the amount of effort to be spent on each operating system, and then actually put in that effort. Suppose a team supports Windows XP, Windows Vista, Windows 7 and Windows 8. In such a case, the team would use data about the approximate number of users on each operating system in order to prioritize the testing effort (no testing team has enough resourcing to run all the tests and spend all the effort that it would like to). But you would still expect that if there are 4 operating systems, the team would spend at least around 15% of its effort testing each one. In some cases, the testing for an operating system might take more time because there are more defects to be found on it (for example, we found a lot more problems on Vista, many of them related to security issues because of the User Account Control introduced in Vista).
So, if there is a possibility of dropping an operating system because of a smaller number of users on it, the testing team would want to hold a number of discussions with the Product Manager on this topic, to ensure that their voice is heard and that any data they have is also passed on to the Product Manager (such data could be the increasing number of bugs that they are finding on older operating systems).
The development team is also an important stakeholder. The developers are responsible for ensuring that the functionality works the same on all the supported operating systems, something that can be problematic at times because operating systems behave differently. In addition, there may be components in use that are not supported as well on older operating systems.
All of these stakeholders and their opinions need to be factored in before taking a decision on whether to drop a specific operating system. The decision needs to be taken after weighing a lot of points.

In the next post, I will add more points on this particular topic (Operating Systems support for your application - Part 7)


Monday, February 25, 2013

What is meant by Software Process Improvement?


About Software Process Improvement

- SPI, or Software Process Improvement, is a program that provides guidance, through an integrated long-range plan, for the initiation and management of process improvement in an organization. 
- SPI is based on a model called the IDEAL model, which has the following 5 major stages:
  1. Initiating
  2. Diagnosing
  3. Establishing
  4. Acting
  5. Leveraging
- These 5 major stages form a continuous loop. 
- However, the time taken for the completion of one cycle varies from one organization to another. 
- Depending on the available resources, an organization must be able to decide whether or not it can commit to software process improvement. 
- SPI requires many activities to be carried out in parallel with each other. 
- One part of the organization may take care of the activities of one phase while other parts take care of the activities of another phase.
- In practice, the boundaries of the various stages in software process improvement are not clearly defined. 
- The infrastructure also plays a great role in the success of SPI. 
- The value added to SPI by the infrastructure cannot be overestimated. 
- It provides great help in clarifying roles and responsibilities.

About Initiating Phase

- As the name indicates this is the starting point of the process. 
- This stage involves setting up of the improvement infrastructure. 
- Then the infrastructure’s roles and responsibilities are defined. 
- The resources are checked for availability and assigned.
- Finally, an SPI plan is drawn up that will guide this initiating phase as well as the later stages. 
- It is during this stage that the goals of the software process improvement are defined and established based on the organization's business needs. 
- During the establishing phase these goals are further refined and specified.
- Two groups are typically established, namely:
  1. A software engineering process group, or SEPG
  2. A management steering group, or MSG

About Diagnosing Phase

- In this stage, the organization starts working as per the SPI plan. 
- This stage serves as a foundation for the stages that follow. 
- The plan is initiated keeping in view the vision of the organization along with its business strategy, past lessons, current business issues and long term goals. 
- Appraisal activities are carried out so that a baseline of the current state of the organization can be established. 
- The results of these activities are reconciled with the existing efforts so as to be included in the main plan.

About Establishing Phase

 
- In this stage the issues to be addressed by the improvement activities are assigned priorities.
- Strategies for obtaining solutions are also pursued. 
- The draft of the plan is completed as per the organization’s vision, plan, goals and issues. 
- From the general goals, measurable goals are developed and put into the final SPI plan. 
- Metrics essential to the process are also defined.

About Acting Phase

 
- Solutions addressing the improvement issues discovered in the previous stages are created and deployed across the organization. 
- Other plans are developed for the evaluation of the improved processes.

About Leveraging Phase

 
- This stage is led by the objective of making the next pass through the process more effective. 
- By this time the organization has developed solutions and metrics concerning performance and achievement of the goals. 
- All this data obtained is stored in a process database that will later serve as source information for the next pass. 
- Also, this information is used for the re-evaluation of the strategies and methods involved in the SPI program.
- Software process improvement activities work with two components, namely the tactical component and the strategic component. 
- The former is driven by the latter, which is based on the needs of the organization. 


How to determine the Operating System support for your product - Part 5

This particular series of posts talks about how to determine the support for previous Operating Systems in your application (Operating System Support in your application - Part 4). In the previous post, I talked about one major factor - when the maker of the Operating System (whether that be Microsoft or Apple) decides to drop support for the Operating System and will not provide any more bug fixes or other support. This creates a problem where even if you decide to support that operating system, you will not get any bug fixes from its makers, which can be a huge potential problem given the interactions of the application with the Operating System.
In this post, I will talk about the process of cutting off support for an Operating System. There are 2 different methods which I have seen for cutting off support for an Operating System. One way is to put in a hard constraint, which means that the user will not be able to install on that specific Operating System; the other is a soft constraint, which means that the user is given a warning when trying to install on that version of the Operating System.
Consider the variation that uses a hard constraint to prevent the user from installing on such an Operating System. What this means is that when the user tries to install the application on that specific version of the Operating System, the installer determines the specific Operating System loaded on the computer and then checks it against the supported list of Operating Systems. If the Operating System is not supported, then the installer gives an error to the user and prevents any installation on the user's machine. 
Putting in a hard constraint is needed when the makers of the software have determined that the user should be prevented from installing on that Operating System. This can be when there is a great deal of uncertainty about whether the application will work well on that Operating System without any defects, or when the makers have decided that the version of the Operating System is not in wide use anymore. Hard constraints are also used when the Operating System installed on the user machines is controlled, such as in the case of higher-end or specialized software.
A soft constraint means that the user will get a message during the installation process about the version of the Operating System not being supported, and will get an option about whether to proceed or not. If the user decides to go ahead, then the application will get installed. This is normally done when there is an expectation of very few problems on that specific Operating System, and the company does not really want to force the users of that specific Operating System to try alternative solutions. There will still need to be some testing on that specific Operating System, but not at the same level as for the supported Operating Systems.
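As a rough illustration of the two approaches (not taken from any particular installer), the check could look something like the Java sketch below; the supported list, the messages and the hard/soft flag are all hypothetical.

```java
import java.util.Set;

// Illustrative sketch of a hard vs. soft operating-system constraint during
// installation. The supported list and messages are hypothetical.
public class OsConstraintCheck {

    private static final Set<String> SUPPORTED = Set.of("Windows 7", "Windows 8");
    private static final boolean HARD_CONSTRAINT = false;   // true = block, false = warn

    public static void main(String[] args) {
        String osName = System.getProperty("os.name");       // e.g. "Windows 7"

        if (SUPPORTED.stream().anyMatch(osName::startsWith)) {
            System.out.println("Operating system supported, continuing installation.");
            return;
        }

        if (HARD_CONSTRAINT) {
            // Hard constraint: refuse to install at all.
            System.err.println(osName + " is not supported. Installation cannot continue.");
            System.exit(1);
        } else {
            // Soft constraint: warn the user but let them proceed at their own risk.
            System.out.println("Warning: " + osName + " is not officially supported. "
                    + "You may continue, but some features may not work correctly.");
        }
    }
}
```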


Sunday, February 24, 2013

Explain Fabasoft app.test - Web Functional/ Regression Test Tool


About Fabasoft app.Test

- Fabasoft Distribution GmbH has developed a web testing tool called Fabasoft app.test.
- It can be used to create tests based on patterns, which is effective in reducing the complexity of testing Java and HTML applications.
- These tests do not contain any CSS or XPath expressions, but instead contain statements that are meaningful and can be easily understood. 
- The features of this tool can be extended by connecting it with an Eclipse plug-in that provides a point-and-click editor. 
- This plug-in can be used for designing effective patterns for web sites. 
- Fabasoft app.test supports web browsers such as Mozilla Firefox, Internet Explorer, Safari and so on. 
- In all these browsers, tests can be recorded using the above point-and-click recorder.
- The recorded test scripts can then be replayed immediately after recording in the other browsers, and there is no need to modify those scripts.
- The tool is even capable of generating reports in formats such as PDF or HTML, and documentation of errors consisting of various dumps and screenshots. 
- It supports various platforms such as:
1.   Windows
2.   Linux
3.   Mac OS X

Features of Fabasoft app.Test

 
Some of the characteristic features of this web testing tool are:
1.   Recorder
2.   Multi-browser support
3.   Script and dialog error handler
4.   Documentation of the errors along with screenshots
5.   Various dumps
6.   Several reports in formats such as PDF and HTML.

- If you have opted for using Fabasoft app.test, you can sit back and let the tool take on much of the testing process. 
- The tests that are created with Fabasoft app.test are easy to understand, and no programming skills are required.
- It can be considered next-generation web testing. 
- You can teach Fabasoft app.test about your application in just a few clicks. 
- Fabasoft app.test can then test the application for omissions and errors. 
- The tool is quite easy to install and use.
- Large-scale web sites are often built on content management systems (CMS) such as RedDot, Liferay, Joomla and so on. 
- Since these web sites represent the reputation of the companies behind them, it is important that they remain highly accessible all the time and free of errors. 
- Web developers therefore often end up looking for GUI testing tools for testing their sites. 
- Fabasoft app.test can be used for the automated testing of web applications no matter what technology has been used for building them, such as ASP, PHP, Flash, JavaScript and so on. 
- Fabasoft app.test can be taught the structure of the application to be tested. 
- Once done with this, you can start recording the tests and obtaining reports. 
- It ensures that your application can run on all the platforms and web browsers. 
- You stay automatically informed about the status of your application testing. 
- You get to know about a problem as soon as it is encountered and can solve it well in time.
- The main components and features of the tool include:
1.   Fabasoft app.test Studio
2.   Point-and-click recorder
3.   Multi-browser support
4.   Control specification designer
5.   Internationalization of the tests
6.   Test player
7.   Console player
8.   Remote agent
9.   Commander
10. BIRT report support
11. HTML reporting



Saturday, February 23, 2013

Explain FuncUnit and QUnit - Web Functional/Regression Test Tool


FuncUnit and QUnit are both tools developed for web functional and regression testing. In this article, we first discuss the FuncUnit testing tool.

About FuncUnit Tool

- It is an open source framework and therefore free to use. 
- The API that this tool uses is based on jQuery.
- FuncUnit can either be used as a standalone application or as part of a full-stack JavaScript framework. 
- This JavaScript framework is commonly known as JavaScriptMVC. 
- Almost all modern browsers are supported by FuncUnit on almost all platforms, such as Mac, Linux etc.
- The tool also offers you the choice of running the tests through Selenium. 
- An integrated development environment (IDE) called FuncIT is also available for this tool. 
- Another component exists with which synthetic events can be created and default event behavior can be performed.
- It is known as Syn, and it can simulate events such as dragging the mouse, clicking, typing and so on. 
- Using Syn, complex JavaScript functional testing can also be carried out. 
- FuncIT lets you do the testing directly in the browser. 
- Writing, running and debugging of the tests takes place in the web browser. 
- FuncUnit makes this all easy. 
- You just need to open a page and start the debugger. 
- FuncUnit uses syntax similar to that of jQuery. 
- This is the perfect web testing tool for you if you like short and flexible test cases. 
- You can run the tests in any of the browsers and later automate them via Selenium. 
- FuncUnit supports the following versions of the modern browsers:
  1. Internet Explorer version 6 and above.
  2. Firefox version 2 and above.
  3. Safari version 4 and above.
  4. Chrome
  5. Opera version 9 and above.
- Platforms supported are Mac, PC and Linux. 
- In other words you have a complete automated testing suite.
- Some of the features are:
1.   Functional testing
2.   High fidelity
3.   Automated
4.   Free
5.   Multi-system
6.   Easy to write


About QUnit Tool

 
- QUnit is built along similar lines to JUnit, the unit testing framework. 
- This testing tool is the JavaScript test suite used by the jQuery project for testing its code. 
- It comes with various plug-ins which give it the ability to test any generic JavaScript code. 
- Though QUnit is somewhat similar to JUnit, it uses features of JavaScript to help with the testing chores in the web browser, such as the start and stop facilities for carrying out tests of asynchronous code. 
- QUnit is quite a powerful unit testing framework used in the jQuery Mobile and jQuery UI projects, and so on.
- You can get QUnit from the jQuery CDN, and the latest release is version 1.11.0. 
- John Resig is the person behind QUnit.
- Earlier, in 2008, QUnit was completely dependent on jQuery, but now it can be used as a standalone library. 
- The CommonJS unit testing specifications are followed by the assertion methods of QUnit.
- For automated testing, writing your own testing framework involves a lot of work, since you have to cover all the requirements of the JavaScript code. 
- QUnit makes this all easy. 
- You need to include only two QUnit files on the page, namely:
qunit.js: the test runner file, and
qunit.css: styles for the test suite so that the results can be displayed.

- Assertions are essential elements of all unit tests, and QUnit provides 3 of them, namely:
ok()
equal()
deepEqual()





How to determine the Operating System support for your product - Part 4

In the previous post of this series (Determining Operating System Support for your application - Part 3), I wrote about the process of determining the number of people in your customer base who are using the Operating System in question. There are ways to do surveys and look at industry data, but there is some amount of variability involved even when analysing the data, and some assumptions need to be made. Of course, trying to make such decisions without making your best effort to get the data required for the analysis is something that organizations should avoid at all costs. Such decisions could cost money that the organization can ill afford, and hence such decision making should be done with a lot of deliberation.
In this post, let us consider another factor that is of great importance in deciding when to drop support for an Operating System from your application. This is the drop in support for a particular Operating System by the makers of the operating system itself. So, if you consider the case of an operating system such as Windows NT or Windows 2000, support for these has been dropped by Microsoft, and if you were to try to get a resolution for a problem on these operating systems from Microsoft, they would decline to provide you any support and ask you to upgrade to the newest operating system.
Now you are developing an application that will run on the operating system. Any application, especially one that accesses files on the local machine or that accesses devices such as printers (and most applications provide a print interface), has a dependency on the files of the Operating System. From time to time, problems crop up where you need to work with the makers of the Operating System (typically Microsoft or Apple) and even expect them to make some fixes for you. When the makers of the Operating System withdraw support, they stop supporting such problems and no longer provide fixes for them.
So what do you do? You could still provide support for the Operating System even when the maker of the Operating System is no longer providing any support, but there is an inherent risk in this decision. During the development process, you could run into a problem that cripples your system for which you don't have a solution, or for which a solution on your end is expensive and time consuming. In such cases, it will cause you significant problems; on the other hand, the problems that do come up may turn out to be minor ones. And you have to consider that the maker of the operating system would also have thought a lot about dropping support, and there would have been some factors that went into such a decision.
Apple makes it even easier. As and when Apple releases new operating systems, newly released machines are packaged with these new systems, and Apple stops supporting older operating systems on these machines. Deciding on dropping older versions of the Mac OS is easier than for Windows, also because the customer base using the Mac Operating System is smaller than that of Windows.

Read the next post in this series (Operating System support for your application - Part 5)


Friday, February 22, 2013

Explain SlimDog - Web Functional/Regression Test Tool


About SlimDog

- SlimDog is a script-based web application testing tool built on top of HttpUnit. 
- The tool comes with a range of commands that help you work with forms, navigate between web pages and check table contents. 
- Instead of the hard task of writing lengthy XML files and JUnit test cases, SlimDog allows users to create simple text scripts. 
- Each line of the script contains one command, which acts as a test node.
- All the commands contained in a file are treated and processed as a single test case. 
- Every command has a syntax that is quite simple and easy to learn. 
- If you want, you can form a test suite by combining several test scripts. 
- The results of the executed tests are written as an HTML page or file, or to the console. 

How to use SlimDog?

- To use SlimDog, you first need to download its latest version from its web site. 
- The application comes in zipped form. 
- You need to extract the files to a directory of your choice. 
- The next step is to create a test directory. 
- After this you can start creating tests and saving them in the test directory you just created. 
- After you have created a test, your next step is to get the HTML content. 
- You can run the tests from the command line. 
- After obtaining the results, save them to a file. 
- Note that the file in which the results are saved should be defined using the -o argument. 
- You can even navigate from one page to another. 
- All files in the test directory can be run as a test suite. 
- Every test case file must end with the .test extension so that it is recognized.
- You can even use a proxy. 
- The SlimDog commands can even be used within JUnit test cases.

- However, running a complete test case this way is the easiest approach. 
- First, the web tester needs to be instantiated, and all commands need to be added through its parseLine method.
- You do not have to worry about the syntax, since it is the same as that of the script files. 
- After this, the runTest() method can be called and the web test results obtained, as sketched below. 
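Based on the description above, a JUnit-style usage might look roughly like the following sketch. The WebTester class and the parseLine()/runTest() methods are inferred from the text and may not match SlimDog's real API exactly, and the command strings are only illustrative.

```java
import org.junit.Test;

// Rough sketch of driving SlimDog commands from a JUnit test, as described
// above. WebTester, parseLine() and runTest() are inferred from the post and
// are assumed to come from the SlimDog distribution; the exact names and
// signatures may differ. The commands themselves are illustrative.
public class SlimDogJUnitSketch {

    @Test
    public void homePageShowsExpectedContent() throws Exception {
        WebTester tester = new WebTester();

        // Same syntax as in the text script files, one command per line.
        tester.parseLine("get_html http://www.example.com");
        tester.parseLine("check_title Example Domain");
        tester.parseLine("check_text More information");

        // Execute the accumulated commands and collect the results.
        tester.runTest();
    }
}
```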

General SlimDog Commands

Below we mention some general SlimDog commands:
  1. get_html: This command establishes a connection to the given URL and reads its content. The read content can then be used later. The parameter for this command is the URL itself, and it supports variables.
  2. check_title: The parameter for this command is the required title, and it is of the type test. The title of the page is checked against the given title. One thing to note about this test is that if it fails, the entire test will fail.
  3. set_proxy: The parameters for this command are the proxy port and host. It is of the type command.
  4. check_link (missing): The parameter for this command is the text within the link tag, and this one is also of the type test.
  5. check_text: The text to be found is passed as a parameter to this command, and it is of the type test.
  6. click_link: The argument for this is the text within the link tag. It is of the type command.
  7. seturlprefix: The parameter here is the URL prefix. This URL prefix is used as the base URL for the other tests. It is of the type command.
  8. enable_javascript: The parameter is either true or false. This command enables or disables JavaScript based on the argument passed.

