This case study reflects on one of my experiences as a QA on a project whose objective was to develop a mid-layer application that consumes messages from JMS queues and topics (briefly explained below), stores them in different databases, and publishes them to multiple clients. The project involved a number of specific technologies that I had never worked with before, and these are my learnings.
JMS queues and topics: contrary to what I initially thought, they are not the same. The first time I heard the term "topic" it was not intuitive to me, and this is where the case study begins: our first challenge was to test the loader. Before I dive into the details, let's briefly explore the queue and topic concepts.
JMS (Java Message Service) is a messaging layer through which different applications can communicate by exchanging messages. A queue is a destination where one service posts a message (an object that contains the data being transferred between JMS clients) and other services can subscribe to it. Once a message is in the queue, one (and only one) of the subscribers will consume it. A topic differs from a queue in that all of its subscribers receive a copy of the published message (I won't dive into the details of how JMS works, but you can easily find more elsewhere).
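The difference in delivery semantics can be illustrated with a toy plain-Java sketch. To be clear, `SimpleQueue` and `SimpleTopic` below are invented stand-ins for illustration only, not real JMS classes, and no broker is involved:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative stand-in: queue semantics, one subscriber consumes each message.
class SimpleQueue {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> c) { subscribers.add(c); }

    void publish(String msg) {
        // Only one subscriber (here, simply the first) receives the message.
        if (!subscribers.isEmpty()) {
            subscribers.get(0).accept(msg);
        }
    }
}

// Illustrative stand-in: topic semantics, every subscriber gets a copy.
class SimpleTopic {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> c) { subscribers.add(c); }

    void publish(String msg) {
        for (Consumer<String> c : subscribers) {
            c.accept(msg);
        }
    }
}

public class QueueVsTopic {
    public static void main(String[] args) {
        List<String> queueDeliveries = new ArrayList<>();
        SimpleQueue queue = new SimpleQueue();
        queue.subscribe(m -> queueDeliveries.add("A:" + m));
        queue.subscribe(m -> queueDeliveries.add("B:" + m));
        queue.publish("hello");
        System.out.println(queueDeliveries); // [A:hello] — only one consumer

        List<String> topicDeliveries = new ArrayList<>();
        SimpleTopic topic = new SimpleTopic();
        topic.subscribe(m -> topicDeliveries.add("A:" + m));
        topic.subscribe(m -> topicDeliveries.add("B:" + m));
        topic.publish("hello");
        System.out.println(topicDeliveries); // [A:hello, B:hello] — all get a copy
    }
}
```

Real JMS adds connections, sessions, acknowledgements, and durable subscriptions on top of this, but the one-consumer versus all-consumers distinction is the core of it.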
So, back to the first testing piece: the loader is basically a service that listens to a specific queue, parses the incoming message (in XML format), transforms it into objects that can be stored, and persists the data in three databases. The testing challenge is to validate this service without a user interface. The tools chosen for the validation were a database viewer (SQL Developer) and a small Java class called JmsSender, written to simply send messages to a configured queue. The strategy at this point was to pick some message examples from the data source and save them in the application codebase, within the test/resources directory. The Java class is placed within the same file structure and reads messages from the resources. It was initially conceived as a unit test, but had to be turned into a standalone Java class so it would not influence the build results.
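The parsing step the loader performs can be sketched with the JDK's built-in DOM parser. This is a minimal sketch only: the real message schema and classes were different, and `AirportMessage`, the `airport` element, and its `code` and `gates` attributes are invented here for illustration:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class LoaderParseSketch {
    // Hypothetical value object; the project's real domain classes differed.
    static final class AirportMessage {
        final String code;
        final int gates;
        AirportMessage(String code, int gates) {
            this.code = code;
            this.gates = gates;
        }
    }

    // Parse one XML message into an object that could then be persisted.
    static AirportMessage parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Element root = doc.getDocumentElement(); // the <airport> element
        return new AirportMessage(
                root.getAttribute("code"),
                Integer.parseInt(root.getAttribute("gates")));
    }

    public static void main(String[] args) throws Exception {
        // In the project, XML like this was read from files under test/resources.
        String xml = "<airport code=\"GRU\" gates=\"10\"/>";
        AirportMessage msg = parse(xml);
        System.out.println(msg.code + " has " + msg.gates + " gates");
    }
}
```

In the real loader, an invalid attribute value (for example, non-numeric data where an integer is expected) would make this kind of parsing fail, which is exactly the failure mode the tests later exercise.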
Now, when stories reached the “In QA” stage, we could deploy the loader to a server and use the JmsSender class to validate it. I found that it is not hard to write a few lines of Java to interact with queues and topics. At this point we had enough infrastructure to do the testing, which basically consisted of deploying and running the loader, pointing the JmsSender at the environment the service was running on (specific information about the environment was hard-coded in the JmsSender at this point), and hitting the run button within Eclipse. The selected message (an XML containing all of the data to be stored, in this case) would then be sent to the bridge the loader was listening on and processed.
The next step in the testing effort was to open SQL Developer and connect to the database for the environment the service was running in. From there, we could query for the specific data we had just sent and catch any problems. Since there was no UI, the other resource for verifying that the functionality worked as expected was the logs, which we accessed manually through an SSH terminal.
After a few stories had gone through the pipeline, we had accumulated enough different messages to create scenarios. In the end, what defined the functional test cases was the sequence of messages sent to the loader and the changes from one message to the next. These were the first days of the application, and many requirements were coming our way.
The diagram above displays the flow of data entering the JmsSender, flowing through the queue, and finally being captured by the loader. The eye at the bottom represents the tester watching the results come out in the databases and the logs. At this point, we were just validating a group of airport data, and each Step XML represents a state of a given airport. Say that on Step 1 the airport has 10 operating gates and Step 2 adds 10 extra gates to that same airport. When the second step is sent, the tester expects to see log entries stating that the message was successfully persisted, and in the tables relative to gates there should be one new record for each gate added. If Step 3 contains invalid data (say, a runway length with chars instead of an integer), the tester will see a log entry reflecting that the loader could not persist the message and will therefore discard it. In the case of an invalid message, the database should not show any change to any table, since the loader could not persist it.
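The test oracle for this step sequence can be modeled with a small plain-Java sketch. Everything here is a toy stand-in: `gateTable` plays the role of the real gates table, the `process` method mimics the loader's persist-or-discard decision, and the airport code and step values are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class StepScenarioSketch {
    // Toy stand-in for the gates table: one record per gate.
    static final List<String> gateTable = new ArrayList<>();
    static final List<String> log = new ArrayList<>();

    // Mimics the loader: persist the new state if the data is valid,
    // otherwise log the failure and discard, leaving the table untouched.
    static void process(String airport, String gatesField) {
        try {
            int gates = Integer.parseInt(gatesField);
            gateTable.clear();
            for (int i = 1; i <= gates; i++) {
                gateTable.add(airport + "-gate-" + i);
            }
            log.add("persisted " + airport + " with " + gates + " gates");
        } catch (NumberFormatException e) {
            log.add("could not persist message for " + airport + ": discarded");
        }
    }

    public static void main(String[] args) {
        process("GRU", "10");  // Step 1: 10 operating gates
        process("GRU", "20");  // Step 2: 10 extra gates for the same airport
        int before = gateTable.size();
        process("GRU", "abc"); // Step 3: invalid data, chars instead of an integer
        System.out.println("gates: " + gateTable.size()
                + ", unchanged after invalid step: " + (gateTable.size() == before));
        log.forEach(System.out::println);
    }
}
```

In the real testing, the two checks at the end were performed by hand: the record count came from a query in SQL Developer, and the "persisted" or "discarded" outcome came from reading the loader's logs over SSH.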
This was the initial phase of the testing; as the application evolved and gained new functionality, the tests and the JmsSender had to evolve as well. Part two will cover these aspects.