Interop Testing, Observations and Suggestions for the Next Step.
E-mail posted by Brian Kowalski [email@example.com] 29 July 1999 on IPIG List
Background for IPIG in Stockholm, Agenda Item #12
A short history of how the testing worked for us:
At first we tested single APDUs, just to see if we could get them back and forth.
Then Alan Rykhus composed a chart that helped keep track of which APDUs were transmitted and received, and what transport method was used. This was a big step toward keeping track of all the testing. Nicolas Sprauel then expanded the Answer entry to show the different types of Answer.
Next, Richard Wilson came up with a chart that was transaction based, which became my base for testing with Lyse Pérusse. I say base because some of the options are not implemented by both parties, so a modified chart was negotiated to meet the needs of both parties involved.
In most cases, not ALL the data types were filled in, and certainly, you would have to do *extensive* testing to hit all the possible CHOICEs and other OPTIONAL data. And you cannot skip a data type that is the same as another just because you code them the same. For example, the SystemId in ILLRequest.DeliveryService.EDeliveryDetails.EDeliveryId is not the same as the SystemId in ILLRequest.RequesterId, even though the same code is used to BER-encode them.
They could wind up with the wrong tag number or the wrong data.
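The point about shared encoders can be sketched as follows: the same primitive encoder emits identical value bytes, and only the enclosing context supplies the correct context-specific tag number, so reusing the encoder is safe only if the caller tags it correctly. This is a minimal sketch; the tag numbers and the VisibleString stand-in for SystemId are illustrative, not the actual ISO 10161 assignments.

```python
def ber_context_tag(tag_number, value_bytes):
    """Wrap already-encoded value bytes in a context-specific,
    constructed BER tag (short-form length, tag numbers below 31 only)."""
    assert tag_number < 31 and len(value_bytes) < 128
    identifier = 0xA0 | tag_number  # class = context (10), constructed bit set
    return bytes([identifier, len(value_bytes)]) + value_bytes

def encode_system_id(name):
    """Shared encoder for a SystemId-like value (a VisibleString,
    universal tag 0x1A, standing in for the real type)."""
    data = name.encode("ascii")
    return bytes([0x1A, len(data)]) + data

# The same SystemId encoder, but the enclosing context decides the tag
# (tag numbers 3 and 7 are made up for this sketch):
requester_id  = ber_context_tag(3, encode_system_id("NRLC"))  # e.g. ILLRequest.RequesterId
e_delivery_id = ber_context_tag(7, encode_system_id("NRLC"))  # e.g. ...EDeliveryDetails.EDeliveryId

# Identical value bytes, different outer tag numbers:
assert requester_id[2:] == e_delivery_id[2:]
assert requester_id[0] != e_delivery_id[0]
```

If the caller passes the wrong tag number, the bytes are still perfectly valid BER; only a decoder that knows which tag belongs where can catch it.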
Now, an APDU comes in and blows up because a MANDATORY tag is missing. When I looked at the BER, I could see the tag was there but had the wrong tag number. This is precisely the kind of problem you hope to find in testing and fix.
However, when I was decoding the BER by hand, I found some OPTIONAL data that also had the wrong tag number. If this had been the only bad tag, the APDU would have gone into Library.Request (the protocol says to ignore anything you do not understand). The test partner would think I received that bit of information, but in fact I did not! I can see the arguments now: "I told you that" ... "No you did not!" ;-)
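A test decoder can make that failure mode visible: instead of silently skipping unrecognized tags, as the extensibility rule allows, report them so a wrong tag number on OPTIONAL data shows up in the test output rather than vanishing. A minimal one-level TLV walker, assuming single-byte tags and short-form lengths (real BER needs more than this):

```python
def walk_tlv(data, known_tags):
    """Walk one level of BER TLVs (single-byte tags, short-form
    lengths only). Return (decoded, ignored) so tags that would be
    silently skipped in production are visible during testing."""
    decoded, ignored, i = {}, [], 0
    while i < len(data):
        tag, length = data[i], data[i + 1]
        value = data[i + 2 : i + 2 + length]
        if tag in known_tags:
            decoded[known_tags[tag]] = value
        else:
            ignored.append((hex(tag), value))  # would be lost silently otherwise
        i += 2 + length
    return decoded, ignored

# Suppose tag 0xA5 arrives where this sketch only defines 0xA3 and 0xA4:
known = {0xA3: "requester-id", 0xA4: "responder-id"}
apdu = b"\xa3\x02ok\xa5\x03oop"
fields, skipped = walk_tlv(apdu, known)
```

Here `fields` holds the one recognized element, and `skipped` proves a tag arrived that the decoder did not understand, which is exactly the evidence you want when the argument starts.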
Also, if you have the wrong information in an OPTIONAL tag, testing lets you just omit that tag and resend the APDU. Your code still does not put the correct data into that tag, but it will still get an APDU into the other system, so it appears to have passed the test.
Warning: just because you can interact with one (or more) implementations does not mean that your APDUs are correct. I experienced a case where an APDU was being rejected by Library.Request because of incorrect data. It was assumed that the incoming APDU was correct, because it had been sent to other systems and had not been rejected by them.
The incoming APDU was not correct. The data that Library.Request was rejecting the APDU for was simply not being looked at by their other test partners. Implementers should be aware that this happens. Once I located the problem (I was looking at both the APDU and my code), it was difficult for me to communicate it to my test partner.
You also must send a Repeat for every service that allows a Repeat, because the StateMachine responds differently depending on whether it receives a Repeat or an Original. (As we are discussing now: an Original Shipped when you are already in the Shipped state cannot happen!)
Some very positive things came out of this level of testing that I did not expect. We were able to exercise the StateMachine by doing things like sending a Received when the other party did not issue a Shipped, a valid state transition on the Requester side, but not on the Responder side! (This prompted a debate on the list :-)
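That kind of probing is easy to drive from a table of legal transitions. A toy sketch, with states and table entries deliberately simplified (these are not the full ISO 10161 state tables), showing how the same service can be legal on one side and illegal on the other:

```python
# Toy transition tables: (state, service) -> next state.
# States and entries are a simplified invention for this sketch,
# not the real ISO 10161 Requester/Responder state tables.
REQUESTER_SEND = {
    ("PENDING", "Received"): "RECEIVED",  # requester may report receipt here
    ("SHIPPED", "Received"): "RECEIVED",
}
RESPONDER_RECV = {
    ("SHIPPED", "Received"): "COMPLETE",  # responder expects its own Shipped first
}

def step(table, state, service):
    """Apply a service to a state, rejecting illegal transitions."""
    if (state, service) not in table:
        raise ValueError(f"{service} not allowed in state {state}")
    return table[(state, service)]

step(REQUESTER_SEND, "PENDING", "Received")      # fine on the requester side
try:
    step(RESPONDER_RECV, "PENDING", "Received")  # responder never sent Shipped
except ValueError as e:
    print(e)  # -> Received not allowed in state PENDING
```

Driving both tables from the same test script is what surfaces the asymmetry: the requester happily emits an APDU the responder's machine must refuse.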
And when Nicolas sent me an Answer-Retry, I did Retry with another ILLRequest that had the same TransactionGroupQualifier but a different TransactionQualifier, and it worked great in my application and his. That was cool.
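The Retry flow works because the transaction identity is the pair of qualifiers: the retried ILLRequest keeps the TransactionGroupQualifier but gets a fresh TransactionQualifier, so both systems see a new transaction in the same group. A sketch of that keying (the field names come from the standard; the registry and its values are hypothetical):

```python
transactions = {}  # hypothetical registry keyed by (group-qualifier, qualifier)

def open_transaction(group_q, trans_q):
    """Register a new transaction; the pair must be unique."""
    key = (group_q, trans_q)
    if key in transactions:
        raise ValueError("duplicate transaction id")
    transactions[key] = "PENDING"
    return key

first = open_transaction("NRLC-1999", "0001")  # original ILLRequest
# Answer-Retry arrives; retry with the same group, new TransactionQualifier:
retry = open_transaction("NRLC-1999", "0002")

assert first != retry and first[0] == retry[0]  # new transaction, same group
```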
Suggestions for the Next Step:
I think that we need to come up with some transaction based charts (maybe based on the Richard Wilson charts) that make us send each service (both Original and Repeat where applicable). Data must be in all the elements that you wish to say you support in the PICS. And the charts must specify what CHOICEs to make so that all the data elements (again, that you support) are coded at least once. Obviously there must be enough transactions to exercise all the services.
We must not forget to exercise things like CancelReply.Yes (the transaction is terminated) and CancelReply.No (the transaction keeps going). Even though it is just a different value for a trivial data type (CancelReply.Answer), it makes sure your StateMachine/ProtocolMachine works.
But in going down each of these routes (CancelReply.Yes/No) you could exercise different CHOICEs in the ILLRequest and Answer.
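One way to make such a chart checkable is to record which supported elements and CHOICEs each test run actually encoded, then diff that set against what the PICS claims at the end. A minimal tracker sketch; the element names here are illustrative, not a real PICS:

```python
# Elements the PICS claims support for (names invented for this sketch):
supported = {
    "ILLRequest.RequesterId",
    "Answer.Conditional",
    "Answer.Retry",
    "CancelReply.Answer",
}

exercised = set()

def record(element):
    """Call from the encoder each time an element is actually emitted."""
    exercised.add(element)

# Two transactions' worth of testing so far:
record("ILLRequest.RequesterId")
record("Answer.Retry")

# Whatever is left is still owed a test transaction before the PICS is honest:
untested = supported - exercised
```

A chart negotiated between two partners is then just the intersection of the two `supported` sets, which matches the point below about negotiating the charts.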
The charts should be negotiated between the parties because not all options are going to be implemented by everyone.
Another thing to think about is the sending of INVALID data, even asking your test partner to do it to you, so you can see if your system properly rejects anything bad. I doubt that this can/will be done because a good system will not let you send bad values! But in any case, something to think about.
Along with the suggestion above, sending INVALID services is a good workout for your StateMachine. But again, a good system will not let you do this!
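One way around a "good system" that refuses to emit bad values is to corrupt the APDU after encoding, below the validation layer. A hypothetical sketch that flips a tag byte in otherwise valid BER before sending, purely so the partner's rejection path gets exercised:

```python
def corrupt_tag(apdu: bytes, offset: int = 0) -> bytes:
    """Return a copy of an encoded APDU with one tag byte altered.
    For negative testing only; this deliberately bypasses whatever
    validation the encoder performed."""
    bad = bytearray(apdu)
    bad[offset] ^= 0x01  # nudge the tag number so the peer must reject it
    return bytes(bad)

good = b"\xa3\x02ok"      # stand-in for a valid encoded element
bad = corrupt_tag(good)   # same length and value, wrong tag

assert bad[0] == 0xA2 and bad[1:] == good[1:]
```

A hook like this between the encoder and the transport lets you send invalid data on request without weakening the production encoder itself.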
There must be a level of commitment on both sides to check the data elements the other implementor says they are going to send. Let them know what is bad, let them send it again, and check it again. This can be very time-consuming, but how else do you tell your customers exactly what your product will do (let alone fill out the PICS honestly)?
Product Development Programmer
The Library Corporation
Research Park, Inwood, WV 25428
email: firstname.lastname@example.org
(304) 229-7803 voice (304) 229-0295 FAX
Solutions that deliver!