As no draft techniques appear to be available, various local definitions have been included in this document.
Within the SIGIST, interoperability is defined as: “the ability of two systems or components to pass information in one or more directions and to use the information that has been received.”
E-business is shorthand for electronic business. Technology is a significant and necessary element in the execution of the business transaction[1]. E-business is often conducted between two organisations rather than with a consumer. Currently e-business is assumed to use web-based protocols.
For a given business transaction the two systems or components will have distinct roles. The roles can be simplified to two: the requestor[2] and the fulfiller[3]. The requestor makes a request of the fulfiller, which may send a response, if appropriate.
We will limit our view to the role presented to us: from the perspective of a pure requestor, the central system / component appears to be a fulfiller, and in a similar fashion a pure fulfiller views the central system / component as a requestor. The intricacies of the interaction within the combined requestor and fulfiller are outside the scope of this example.
To determine whether two systems or components will interoperate when in use, for the expected range of requests and responses, under a variety of operational conditions.
Most interoperability testing techniques are derived from functional testing techniques. The differences and variations are described within each technique.
The key variation is that data is being passed between systems or components as either messages or other structured requests and responses. There are two types of data:
The data may include optional elements, repeating groups, etc.
Are the message-protocols orthogonal and complete, i.e. has an appropriate response been defined for every combination of request and condition?
A formal process, similar to Inspections, is likely to be most suitable for analysing the message-protocols.
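To make the completeness question concrete, the sketch below checks a protocol definition table for any request/condition combination with no documented response. The request, condition and response names are hypothetical and are used only for illustration.

```python
# Minimal sketch of a protocol-completeness check, assuming the message
# protocol can be captured as a table of (request, condition) -> response.
# All request, condition and response names are hypothetical.

REQUESTS = ["PurchaseOrder", "OrderStatus", "CancelOrder"]
CONDITIONS = ["in_stock", "out_of_stock", "product_withdrawn", "account_on_stop"]

# Documented protocol: which response is defined for each combination.
PROTOCOL = {
    ("PurchaseOrder", "in_stock"): "OrderAccepted",
    ("PurchaseOrder", "out_of_stock"): "BackOrderOffered",
    ("PurchaseOrder", "product_withdrawn"): "OrderRejected",
    ("PurchaseOrder", "account_on_stop"): "OrderRejected",
    ("OrderStatus", "in_stock"): "StatusReport",
    # ... deliberately incomplete so the check has something to report
}

def undefined_combinations(protocol, requests, conditions):
    """Return every (request, condition) pair with no documented response."""
    return [(r, c) for r in requests for c in conditions if (r, c) not in protocol]

if __name__ == "__main__":
    for request, condition in undefined_combinations(PROTOCOL, REQUESTS, CONDITIONS):
        print(f"No response defined for {request} under condition '{condition}'")
```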
The request/response process may be synchronous or asynchronous. Create test cases in which the timing of requests and responses can be varied, and check whether the systems behave correctly. Systems should recover gracefully from timeouts.
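A minimal sketch of such a timing test is given below, assuming a stand-in requestor and fulfiller and an illustrative timeout value; it is not taken from any particular system.

```python
import queue
import threading
import time

# Sketch of a timing test: the fulfiller's response is delayed by a
# configurable amount, and the requestor must either use the response or
# recover gracefully with a timeout. All names and values are illustrative.

REQUEST_TIMEOUT_SECONDS = 2.0

def fulfiller(request, response_queue, delay_seconds):
    """Stand-in fulfiller that answers after an artificial delay."""
    time.sleep(delay_seconds)
    response_queue.put(f"response to {request}")

def send_request(request, delay_seconds):
    """Send a request and wait for the response, recovering from timeouts."""
    response_queue = queue.Queue()
    threading.Thread(
        target=fulfiller, args=(request, response_queue, delay_seconds), daemon=True
    ).start()
    try:
        return response_queue.get(timeout=REQUEST_TIMEOUT_SECONDS)
    except queue.Empty:
        return "TIMEOUT"   # graceful recovery rather than hanging or crashing

if __name__ == "__main__":
    assert send_request("order-1", delay_seconds=0.1) == "response to order-1"
    assert send_request("order-2", delay_seconds=5.0) == "TIMEOUT"
    print("timing tests passed")
```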
Interoperability assumes there is an interface between the two systems or components. Interfaces are often implemented in multiple layers, similar to the ISO seven-layer OSI network protocol stack. Heterogeneous systems may interpret information differently, sometimes unintentionally. Some requests may be interpreted correctly and others not between the same pair of heterogeneous systems; other pairs may not interoperate at all.
Potential issues include (the first two are illustrated in the sketch following this list):
- Mismatches or errors at the data-link layer [layer 2], e.g. messages passed between little-endian and big-endian systems (systems that store the least significant or the most significant byte of a multi-byte value first, respectively).
- Issues at the presentation layer [layer 6], e.g. character sets and encoding techniques.
- Issues at the application layer [layer 7], where a documented protocol has been implemented in both systems or components but differences in interpretation may exist.
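The first two issues can be demonstrated directly; the sketch below uses illustrative values only and does not correspond to the example systems.

```python
import struct

# Layer 2 style mismatch: the same four bytes read as a little-endian
# integer by one system and as a big-endian integer by another.
raw = (305419896).to_bytes(4, "big")           # 0x12345678 as sent
as_big_endian = struct.unpack(">I", raw)[0]    # 0x12345678 (correct)
as_little_endian = struct.unpack("<I", raw)[0] # 0x78563412 (misinterpreted)
print(hex(as_big_endian), hex(as_little_endian))

# Layer 6 style mismatch: the same bytes decoded with different character sets.
encoded = "£100".encode("utf-8")
print(encoded.decode("utf-8"))     # '£100' as intended
print(encoded.decode("latin-1"))   # 'Â£100' - garbled by the wrong character set
```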
Two e-business e-procurement software applications need to communicate between a large corporation and an external supplier of stationery products. While the two systems both use XML-formatted messages and claim to comply with the open ebXML[4] specifications, this is the first time the two products have been linked together. All communication will be transmitted over a secure link. The applications will use and expect unencrypted messages.
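For illustration only, a deliberately simplified, hypothetical purchase-order request is shown below; the element names are invented for this sketch and are not taken from the ebXML specifications.

```python
import xml.etree.ElementTree as ET

# A deliberately simplified, hypothetical purchase-order request.
# The element names are illustrative only and are NOT taken from ebXML.
request_xml = """
<PurchaseOrderRequest>
  <Buyer>LargeCorporation</Buyer>
  <Supplier>StationerySupplier</Supplier>
  <Line productCode="PEN-001" quantity="500"/>
</PurchaseOrderRequest>
"""

order = ET.fromstring(request_xml)
line = order.find("Line")
print(order.findtext("Buyer"), line.get("productCode"), line.get("quantity"))
```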
Which messages will the two systems use? Are there any restrictions on the types or content of the messages, e.g. for compatibility reasons? What is the frequency of likely requests and responses, and what business rules will be implemented in both systems, e.g. how would a request for an out-of-stock item be handled?
The key line-of-business transactions were identified as the most crucial, and test cases will be required to exercise these under various conditions.
Data-flow analysis was used to determine that the two systems supported a common protocol for all of the key line-of-business transactions.
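The outcome of that analysis can be cross-checked mechanically. The sketch below simply intersects the message sets each system claims to support for the key transactions; the message names are illustrative, and the check is a supplement to, not a substitute for, data-flow analysis.

```python
# Cross-check of the messages each system claims to support for the key
# line-of-business transactions. Message names are illustrative only.

requestor_supports = {"PurchaseOrderRequest", "OrderStatusRequest", "CancelOrderRequest"}
fulfiller_supports = {"PurchaseOrderRequest", "OrderStatusRequest", "InvoiceNotification"}

KEY_TRANSACTIONS = {"PurchaseOrderRequest", "OrderStatusRequest"}

common = requestor_supports & fulfiller_supports
missing = KEY_TRANSACTIONS - common
print("supported by both:", sorted(common))
print("key transactions not supported by both:", sorted(missing) or "none")
```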
The conditions for a purchase order include (test cases covering these are sketched after the list):
- A request for a product in stock
- A request for an out-of-stock product where a back-order is acceptable
- A request where the product is no longer available
- The corporation’s credit account is on ‘stop’ at the supplier, so no new orders will be accepted
- A request which contains an ‘unknown’ product code
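The sketch below turns these conditions into parameterised test cases against a stub of the supplier system. The stub, product codes and response names are illustrative assumptions.

```python
# Sketch of test cases covering the purchase-order conditions listed above.
# The fulfiller stub, product codes and response names are assumptions.

def fulfiller_stub(product_code, account_on_stop=False):
    """Stand-in for the supplier system, encoding the expected business rules."""
    catalogue = {"PEN-001": "in_stock", "PAD-002": "out_of_stock", "OLD-003": "withdrawn"}
    if account_on_stop:
        return "OrderRejected:AccountOnStop"
    status = catalogue.get(product_code)
    if status is None:
        return "OrderRejected:UnknownProduct"
    if status == "withdrawn":
        return "OrderRejected:ProductWithdrawn"
    if status == "out_of_stock":
        return "BackOrderOffered"
    return "OrderAccepted"

TEST_CASES = [
    ("PEN-001", False, "OrderAccepted"),                  # product in stock
    ("PAD-002", False, "BackOrderOffered"),               # back-order acceptable
    ("OLD-003", False, "OrderRejected:ProductWithdrawn"), # no longer available
    ("PEN-001", True,  "OrderRejected:AccountOnStop"),    # account on 'stop'
    ("XXX-999", False, "OrderRejected:UnknownProduct"),   # unknown product code
]

if __name__ == "__main__":
    for product, on_stop, expected in TEST_CASES:
        actual = fulfiller_stub(product, account_on_stop=on_stop)
        assert actual == expected, (product, on_stop, actual, expected)
    print("all purchase-order condition tests passed")
```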
The systems will be closely coupled and both will assume the other is online and available; however, either system may queue requests or responses for up to 60 seconds. After 60 seconds, requests that require a response should be rejected, as either the request or the response may no longer be valid: for example, another supplier may have received and accepted the order, or the supplier may have sold the same goods to another customer.
A test harness was created to allow delays to be injected into the transmission of requests and of the responses returned by the fulfiller.
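A sketch of such a harness is given below: a delaying wrapper is placed around the fulfiller and the 60-second rule is exercised with a scaled-down timeout. The fulfiller function and the timeout value are illustrative assumptions.

```python
import time

# Sketch of a delay-injecting harness. The fulfiller function and the
# (scaled-down) timeout are illustrative assumptions, not the real systems.

QUEUE_LIMIT_SECONDS = 0.5   # stands in for the 60-second limit in the example

def fulfiller(request):
    return f"accepted {request}"

def delaying_harness(fulfiller_fn, delay_seconds):
    """Wrap the fulfiller so every request is delayed by delay_seconds."""
    def delayed(request):
        time.sleep(delay_seconds)
        return fulfiller_fn(request)
    return delayed

def requestor(send, request):
    """Send a request and reject any response that arrives too late."""
    started = time.monotonic()
    response = send(request)
    if time.monotonic() - started > QUEUE_LIMIT_SECONDS:
        return "REJECTED:stale-response"
    return response

if __name__ == "__main__":
    fast = delaying_harness(fulfiller, delay_seconds=0.1)
    slow = delaying_harness(fulfiller, delay_seconds=1.0)
    assert requestor(fast, "order-1") == "accepted order-1"
    assert requestor(slow, "order-2") == "REJECTED:stale-response"
    print("delay-injection tests passed")
```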
Before the tests were executed, both systems were configured to record all requests and responses in log files. This would enable the test results to be captured more easily, so that they could be analysed for correctness and for any untrapped errors.
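One way of exploiting those logs is sketched below: logged requests and responses are paired by a correlation id, and anything unmatched or marked as an error is flagged. The log-line format is an assumption made for the sketch.

```python
# Sketch of a post-run check over the request/response logs recorded by the
# two systems. The log-line format is an assumption made for illustration:
#   <SENT|RECV> <correlation-id> <message-type>
requestor_log = [
    "SENT 001 PurchaseOrderRequest",
    "RECV 001 OrderAccepted",
    "SENT 002 PurchaseOrderRequest",
]
fulfiller_log = [
    "RECV 001 PurchaseOrderRequest",
    "SENT 001 OrderAccepted",
    "RECV 002 PurchaseOrderRequest",
    "SENT 002 InternalError",        # an untrapped error surfacing in the log
]

def entries(log, direction):
    """Correlation id -> message type for log lines in the given direction."""
    return {cid: mtype for d, cid, mtype in (line.split() for line in log) if d == direction}

# Every message one side sent should appear as received on the other side.
for cid, mtype in entries(requestor_log, "SENT").items():
    if cid not in entries(fulfiller_log, "RECV"):
        print(f"{cid}: request '{mtype}' never received by the fulfiller")
for cid, mtype in entries(fulfiller_log, "SENT").items():
    if cid not in entries(requestor_log, "RECV"):
        print(f"{cid}: response '{mtype}' never received by the requestor")
    if "Error" in mtype:
        print(f"{cid}: untrapped error '{mtype}' in the fulfiller log")
```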
After executing the tests a number of important issues were found:
After retesting the system, both the corporation and the supplier accepted the implementation and began using it fully.
[1] Do not confuse with transactions in a software environment: a business transaction typically involves the sale or purchase of something.
[2] Also known as: initiator, client
[3] Also known as: server
Comment from Isabel Evans:
1. I wondered if more of the terminology should be explained, or a reference to comms terminology given. I am in two minds: we could either take the view that if people don't know what the comms terminology means they shouldn't be reading it, or take the tack that people may need to know about the subject and need some help with the specific terminology. I'm not sure; I guess it would be good to discuss it.
2. Do the examples need to have test cases "spelt out" to show inputs and expected outcomes?
3. The two techniques mentioned in "Static techniques" are techniques that can be used to design dynamic tests. How about dividing this part of the example to show static analysis and review, as you have, and then the techniques used to design the tests? For example you could: change the name of the section to static analysis and use a tool to carry out data-flow analysis as part of static analysis, with a reference to BS 7925-2 and an example; add a second section on static testing using inspection/technical review, with analysis of the spec/code using a state-transition graph in preparation for the review; add a third section on dynamic test design, using state transitions and data-flow analysis to design the tests, plus other techniques if appropriate, perhaps showing part of the state diagram and the related tests; and add a fourth section on dynamic measurements, running the tests designed earlier to exercise the interfaces and make the measurements you suggest.
4. Is there a place for equivalence partitioning and boundary value analysis, for example to show the change in outcome as the acceptable back-order length is approached and exceeded?
5. In the operational profile section, having asked the questions, would it be good to give the answers for this example and use them to drive the tests in the rest of the example?
6. In "Use test results to drive decisions", is there a word missing in point 1: "and request...."?
Comment from Neil Hudson: Some general comments on interoperability issues:
(1) What is the difference between interoperability and compatibility?
(2) Does interoperability include: (a) ensuring that the set of systems delivers specific overall services; (b) ensuring that internally (to the overall set of systems) they interact as intended and not in an unintentional (but hidden) manner; (c) ensuring that the operation of one of the systems is not threatened with degradation / termination due to the behaviour of other systems in the group?
(3) Should interoperability address: (a) consistency of information replicated across systems; (b) adjustment of the representation of information exchanged; (c) operation of data / signal exchange mechanisms; (d) operation with, and recovery from, infrastructure issues such as loss of communications; (e) operation with, and recovery from, failure of one or more of the systems; (f) the ability of the overall system to support the workload; (g) basic end-to-end operation of the services provided by the overall system?
(4) Interoperability testing should address threats that testing the individual systems in isolation is likely to miss. Example: if two separate organisations / teams develop and test two systems, then any inconsistency in the format of data is also likely to be present in the system tests and so could be missed.
(5) Interoperability testing should address threats that can only be checked when the real systems are interoperating. Example: system A resets when interacting with system B because unexpected behaviour of system B triggers a latent weakness in system A.
(6) Interoperability testing should address aspects of the emergent operation that can only be tested when interoperating. Example: re-synchronisation of the overall system following loss of communications or a reset of one system.
Comment from Neil Hudson:
(1) Definition - what is interoperability? (a) Should there be a limit of two systems? (b) Should the definition refer to the group of systems delivering a (set of) services through their interoperation? (c) The definition refers to components as well as systems. Is interoperability intended to relate to populations of separately specified / developed systems, as opposed to the integration of components within a single system, where the development is more coordinated? "Component" already has an allocated meaning: there is the Component Testing Standard. Inter-system integration (i.e. interoperability) has a different set of problems and failure modes to address than intra-system integration (generally the integration of closely coupled components) and should probably be addressed as a separate topic.
(2) Roles: (a) What is the difference between a requestor/fulfiller relationship and a client/server relationship?
(3) Objectives: (a) Could an example set of requests/responses be provided?
(4) Data-flow analysis: (a) What form does this analysis take? (b) Is it looking for compatibility in the information that can be represented?
(5) State transitions: (a) Does this refer to application-specific protocols or to generic internet e-business protocols?
(6) Timing: (a) How does the synchronous / asynchronous nature of the request / response alter the tests required?
(7) Interpretation of requests and responses: (a) What activities are used to address the issues listed in this section?
Comment from Peter Morgan:
a) First line of TEST EXAMPLE, where there seems to be a missing word: "need to communicate [] a large ......". Should perhaps have 'between' in the [non-existent] square brackets.
b) Paragraph 1) of "USE TEST RESULTS TO DRIVE DECISIONS". Here "to 'time-out' and ........" should be "to 'time-out' any ........".
Comment from Marco Giromini:
After “Define the Operational Profile”: I suggest adding a short section titled “System Requirements”. Inside “Plan and execute tests”: I suggest gathering the first four paragraphs and highlighting them as “Interoperability Requirements”. Based on these, I also suggest detailing the “Interoperability Testing Requirements”, as well as adding something about the “Configuration Requirements” used to define the test cases.
Comment from Grenville Johns: At the end of the section on dynamic techniques a list of potential issues is given. This makes reference to 'little-endian' and 'big-endian' systems. I am uncertain of the meaning of these terms and suggest that they are either replaced with some plain English or defined in a glossary. I would prefer the former.