
Compatibility - Desktop

This item is scheduled for review.   Received comments are at the end of this page.

Please send any comments you have on the example using the Feedback Form.

0         Configuration

Title        Compatibility – Desktop

Author   Grenville Johns

Version  0.2

Date        24 March 2003

1         Introduction

This example is for Compatibility testing, which was part of a final release test stage used immediately prior to live deployment.

As background, the following definition was used:

Computer systems are said to be compatible when they can run in the same environment without affecting each other's behaviour.

The organisation that this example is based upon used a release strategy where a common software build was installed on all of the desktop systems, of which there were many thousands.  Each workstation had all of the business software installed on it from a self-installing CD-ROM built as part of the process of issuing a release to the live environment.  The end user was given a profile that determined, at logon time, which specific applications (and thus business functions) could be performed from the desktop.  The objective was to enable any member of staff to conduct the business they were responsible for from any desktop they logged on to; conversely, every desktop system had to be able to support all business functions.  Part of the release test stage was to compatibility test the various applications and the operating system, because subsequently addressing any incompatibility issues on the many thousands of workstations in the live environment would risk major business impact.
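The profile mechanism can be pictured as a lookup from a user's logon profile to the set of applications that user may launch, the same on any workstation.  The following is a minimal sketch in Python, assuming a simple dictionary-based mapping; the profile and application names are hypothetical, not taken from the organisation described here.

# Minimal sketch of profile-driven application entitlement at logon.
# The profiles and applications below are hypothetical examples.
PROFILES = {
    "teller": {"accounts", "payments"},
    "back_office": {"accounts", "reconciliation", "reporting"},
}

def applications_for(profile: str) -> set[str]:
    """Return the applications a user may launch, given their logon profile."""
    return PROFILES.get(profile, set())

if __name__ == "__main__":
    # A user with the 'teller' profile sees the same application set
    # regardless of which workstation they log on to.
    print(sorted(applications_for("teller")))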

The approach to establishing the software installed on the workstations relied heavily upon a centralised configuration management system.  This system was used to ‘collect’ together all of the software components that would be installed on the desktop in the current release.  This collection process included the components of the operating system, so that the subsequent testing embraced operating system upgrades and fixes (service packs) as well as new applications and enhancements to existing applications.  The end product was a self-installing build or release CD-ROM that could be sent anywhere in the country.  It was the contents of this CD that were installed and compatibility tested.
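As an illustration of this collection step, the release content can be pictured as a manifest of versioned components ordered for installation.  The following is a minimal sketch in Python, assuming a simple classification of components; the component names, versions and install ordering are hypothetical, not details from the organisation described here.

# Minimal sketch of a release manifest, assuming components are
# classified so that operating system fixes install before applications.
# All names and versions below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str
    kind: str  # "os_fix", "application" or "enhancement"

def build_manifest(components: list[Component]) -> list[str]:
    """Return the install order for the release CD: OS components first,
    so the build under test matches what the live install would do."""
    order = {"os_fix": 0, "application": 1, "enhancement": 2}
    ranked = sorted(components, key=lambda c: order[c.kind])
    return [f"{c.name}-{c.version}" for c in ranked]

if __name__ == "__main__":
    release = [
        Component("payments", "3.1", "application"),
        Component("service-pack", "SP4", "os_fix"),
        Component("accounts-patch", "2.0.1", "enhancement"),
    ]
    print(build_manifest(release))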

2         Requirements of the System under Test

The compatibility requirements of the software were:

·         No adverse impact between applications when loaded on the workstation

·         No adverse impact to the functionality of the applications as a result of their co-residency on the workstation

·         No impact to any application through the deployment of operating system fixes and upgrades

·         All candidate applications for inclusion in the release had to have successfully completed system and user acceptance testing

3         Test Design

The tests were designed to reflect the most common business activities performed in the live environment.  The design capitalised upon a large dedicated testbed that fully reflected all of the hardware configurations found in live use.  This enabled tests to be incorporated that used specific hardware components (e.g. printers) where different drivers would be used.

The design also used multiple scripts for different applications.  In this way the order of application execution could be varied, more realistically reflecting use in the live environment.  These scripts were then mixed and executed on several workstations simultaneously, so that interoperability as well as compatibility was being tested.
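One way to picture this mixing is as a schedule that gives each workstation a different execution order for the same set of scripts.  The following is a minimal sketch in Python; the script names are hypothetical, and the seeded random generator is a deliberate choice so that a failing mix can be reproduced.

# Minimal sketch of varying script order across workstations, assuming
# each workstation runs every application script once in a shuffled
# sequence.  The script names are hypothetical.
import random

SCRIPTS = ["accounts", "payments", "reporting", "reconciliation"]

def schedule(workstations: int, seed: int = 0) -> dict[str, list[str]]:
    """Assign each workstation its own execution order, so that different
    pairwise interactions between applications are exercised."""
    rng = random.Random(seed)  # seeded so a failing mix can be re-run
    plan = {}
    for i in range(workstations):
        order = SCRIPTS[:]
        rng.shuffle(order)
        plan[f"ws{i:02d}"] = order
    return plan

if __name__ == "__main__":
    for ws, order in schedule(4).items():
        print(ws, "->", " -> ".join(order))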

4         Actual Test Cases

The actual test cases were assembled from the functional system test cases, effectively using these as templates for the various business scenarios that would be executed.  Some changes were inevitable, as the environment had to employ a set of integrated data items to facilitate the test, and these ‘parameters’ needed to be reflected in the scripts.
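This re-parameterisation can be illustrated as a simple template substitution over the test steps.  The following is a minimal sketch in Python, assuming each step carries placeholders for the integrated data items; the field names and values are hypothetical.

# Minimal sketch of re-parameterising a functional test step for the
# integrated compatibility environment.  Field names and values are
# hypothetical.
from string import Template

# A functional system test step, with environment-specific values
# lifted out as placeholders.
STEP = Template("Log on as $user, open $application, post to account $account")

ENV_PARAMETERS = {"user": "ct_user01", "application": "payments", "account": "900123"}

if __name__ == "__main__":
    # The template is reused from release to release; only the
    # integrated data items change.
    print(STEP.substitute(ENV_PARAMETERS))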

The test cases were also subjected to continued maintenance so that they reflected the most common business functions and the changes that were made to these through new and changed applications.

The tests were planned to be executed in three cycles.

5         Evaluation

Each test case was evaluated against its expected result.  Any deviations were logged as incident reports for subsequent analysis and review.  If, following the second cycle, a problematic application or component was identified, then the usual decision was to remove it from the build and recreate a CD without it for the final test cycle.

Following the third test cycle, the software build was evaluated using the incident reports to ascertain the risk to the live environment of deploying the release.  This could lead to further test cycles, including the elimination of further components, until the risk was deemed to be low.
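The remove-and-rebuild decision can be sketched as a simple rule over the incident reports.  The following is a minimal sketch in Python, assuming incidents are tagged with the component they implicate and a severity, and that a hypothetical threshold of high-severity incidents triggers removal from the build; the organisation's actual decision rule is not recorded here.

# Minimal sketch of the per-cycle evaluation.  The tagging scheme and
# the threshold are hypothetical assumptions.
from collections import Counter

def components_to_remove(incidents: list[tuple[str, str]], threshold: int = 2) -> set[str]:
    """Return components with enough high-severity incidents to be
    dropped from the build before the next test cycle."""
    serious = Counter(comp for comp, sev in incidents if sev == "high")
    return {comp for comp, n in serious.items() if n >= threshold}

if __name__ == "__main__":
    cycle2 = [("payments", "high"), ("payments", "high"), ("reporting", "low")]
    print(components_to_remove(cycle2))  # {'payments'} would be rebuilt out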

Once a low risk had been achieved, a gradual roll-out process was employed as a way of cross-checking that the deployment did not cause business problems.  This also addressed the test limitation that the previous test stages could not guarantee to identify every issue.  The rate of roll-out was controlled, starting slowly and only ramping up after a limited, business-determined period, so that any issues came to light early in the roll-out programme, limiting the scope of any remedial actions that arose.
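The controlled ramp can be expressed as a simple weekly plan.  The following is a minimal sketch in Python, assuming the rate holds at an initial batch size for a settling period and then multiplies; the batch sizes and the length of the settling period are hypothetical, not figures from the organisation.

# Minimal sketch of a controlled roll-out ramp.  Initial rate, growth
# factor and settling period are hypothetical parameters.
def rollout_plan(total_workstations: int, initial: int = 50,
                 growth: int = 4, settle_weeks: int = 2) -> list[int]:
    """Workstations to upgrade per week: hold at the initial rate for the
    settling period so early issues surface while exposure is small."""
    plan, done, week = [], 0, 0
    while done < total_workstations:
        rate = initial if week < settle_weeks else initial * growth
        batch = min(rate, total_workstations - done)
        plan.append(batch)
        done += batch
        week += 1
    return plan

if __name__ == "__main__":
    print(rollout_plan(1000))  # [50, 50, 200, 200, 200, 200, 100]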

6         Scope for Automation

The size and complexity of the test environment, which employed mainframes, network servers and workstations, each with various forms of business data, was such that it was difficult to exploit automation.  Some of the limitations were:

·         Test tools did not support all of the platforms

·         Control and business data were changing (so invalidating the expected results held in an automated tool)

·         Data-dependent processing issues, e.g. the mainframe date could not be set to a test date because the machine had multiple partitions, some of which supported live operations.

7         Conclusion/Summary

The experience of this approach was that the live environment did not suffer unacceptable periods of downtime, and compatibility issues were avoided.  It was possible to plan the content of releases well in advance, so that deliveries to the business could be scheduled and new functionality arrived on time, in step with business marketing activities.

Comment from Marco Giromini, “Introduction”: I suggest adding the following comment, from Neil Hudson, to the compatibility testing definition: “Compatibility checks for unintended interactions that disrupt normal operation” or decrease any other functional or non-functional attribute of the product.

“Scope for Automation”: I suggest specifying what kinds of testing tools were tried and which were used effectively, e.g. Test Design Tools, GUI Test Drivers, Capture/Replay Tools.