In the Name of Quality – check, re-check and double-check
What good is the best idea if it does not hold up in practice? None at all, we say. So instead of spending a lot of time and energy building castles in the air that collapse the moment you step inside, we prefer to ensure the quality of our products during the development process itself. That way we can guarantee that the end of the process yields not hot air, but a finished product ready to be used.
Our customers’ satisfaction and the quality of our products are central to the eguana corporate philosophy, which is why we devote considerable resources to these processes. For this reason, we dedicate today’s blog post to quality assurance and performance optimization, with a special focus on automated testing as one of its most important components.
The eguana development team works according to the Agile approach – fast iterations ensure fast prototyping of customer requirements. The advantage for the customer is that new functions and features can be added in a relatively short time span compared to traditional development cycle lengths (weeks vs. months). The downside is that traditional testing strategies are no longer applicable under this approach. The short time between iterations does not allow for a classic testing setup, as new functions might introduce problems in existing parts of the application.
Since eguana is committed to delivering high quality software to its clients in a timely manner, the testing approach had to be adapted to suit the agile paradigm and deliver a quality experience to its customers.
eguana uses automated testing to ensure fast innovation and reduce the probability of errors in the SCALES application.
The main goal of automated testing is to automatically check certain representative aspects in the software, which are set by the development team. The idea is to select a representative set of navigation paths through the user interface and automatically navigate them, just as a human user would; any errors or problems encountered will be reported by the automated testing application to the development team, making it easier to pinpoint the source of the problem.
This way, errors resulting from changes in the system can be identified faster; in turn, this shortens the time span between the customer initiating a change request and the deployment of the implemented functionality on the production system. Of course, the automated testing process is supplemented by in-depth manual testing by the development team, since not all aspects can or should be automated.
To automate the testing process, the eguana development team has implemented a testing application that performs the same actions in the SCALES user interface as a human user would – clicking certain buttons, entering appropriate inputs, and so on. The client performs multiple test cases; in each test case, it uses the simple actions of clicking, typing, etc. as building blocks to navigate the user interface to certain report download pages. It then downloads reports with specific parameters and compares them with stored control versions that act as “ground truth”.
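The building-block idea described above can be sketched in a few lines. This is a purely illustrative sketch – the class and function names below are invented for this post and are not eguana's actual API; in the real tool, each step would drive the SCALES user interface rather than a stub:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    """One simple UI action (click, type, ...) used as a building block."""
    description: str
    action: Callable[[], bool]  # returns True if the action succeeded

def run_test_case(name: str, steps: List[Step]) -> List[str]:
    """Execute the steps in order and collect one report line per failure."""
    errors = []
    for step in steps:
        try:
            if not step.action():
                errors.append(f"{name}: step failed: {step.description}")
        except Exception as exc:
            errors.append(f"{name}: step raised {exc!r}: {step.description}")
    return errors

# A test case assembled from simple steps (stubbed here for illustration).
steps = [
    Step("open login page", lambda: True),
    Step("type credentials", lambda: True),
    Step("click <OK>", lambda: False),  # simulated failure
]
print(run_test_case("report-download", steps))
# → ['report-download: step failed: click <OK>']
```

Because each failure is reported with the test case name and the step description, the development team can pinpoint where in the navigation path a problem occurred.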
The output of the testing tool highlights problems within the navigation itself (“pressing button <OK> on page X causes an error”), as well as differences between the test report (= the version of a certain report downloaded during testing) and the control report (= the version of the same report that was manually downloaded beforehand and saved as part of the testing application). For instance, a change in an underlying algorithm might cause the test report to contain a volume value of 15.7 liters, while the control version states that the volume should be 13.5 liters. Another example would be generating a test report with German localization that incorrectly uses a period instead of a comma for decimal values.
Both CSV and PDF reports are compared, using different techniques. CSV files can be compared directly based on their content, since they are already in a format which can be understood by a text comparison program. In case of differences, the program can identify on which line of the CSV file they occur and notify the development team accordingly.
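A minimal sketch of such a line-based CSV comparison is shown below. The report contents are invented for illustration; the real comparison program may of course handle encodings, headers, and tolerances differently:

```python
# Compare two CSV reports line by line (standard library only) and
# report the line number of every mismatch, as described above.

def compare_csv(test_lines, control_lines):
    """Return (line_number, test_line, control_line) for each mismatch."""
    diffs = []
    for i, (t, c) in enumerate(zip(test_lines, control_lines), start=1):
        if t != c:
            diffs.append((i, t, c))
    if len(test_lines) != len(control_lines):
        first_extra = min(len(test_lines), len(control_lines)) + 1
        diffs.append((first_extra, "<length mismatch>", "<length mismatch>"))
    return diffs

control = ["timestamp;volume_l", "2021-03-01;13.5"]
test    = ["timestamp;volume_l", "2021-03-01;15.7"]
print(compare_csv(test, control))
# → [(2, '2021-03-01;15.7', '2021-03-01;13.5')]
```

The returned line numbers are exactly what the development team needs to locate the diverging value in the report.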
For PDF files, it is a bit more complicated. The approach is to convert the PDF files to images, and compare, page by page, the test version and the control version. If there are any differences between the test version and the control version, a “difference image file” is created. Elements which appear in one version, but not the other, are highlighted in red in the difference image file, as shown in the image below:
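The per-pixel comparison step behind such a difference image can be sketched as follows. This is an illustrative simplification: pages are represented as tiny nested lists of grayscale values, whereas a real pipeline would first render each PDF page to a bitmap with a rendering tool and then paint the mismatching pixels red:

```python
# Compare two rendered page "images" pixel by pixel and build a mask of
# the positions that differ; these are the pixels that would be
# highlighted in red in the difference image file.

def diff_pages(test_page, control_page, threshold=10):
    """Return a boolean mask marking pixels whose grayscale values
    differ by more than `threshold`."""
    return [
        [abs(t - c) > threshold for t, c in zip(t_row, c_row)]
        for t_row, c_row in zip(test_page, control_page)
    ]

control = [[255, 255], [0, 255]]
test    = [[255, 255], [0, 40]]   # one pixel changed
mask = diff_pages(test, control)
print(mask)                            # → [[False, False], [False, True]]
print(any(any(row) for row in mask))   # pages differ → True
```

A small threshold makes the comparison robust against harmless rendering noise while still catching visible changes.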
The automated testing tool acts as the “first line of defense” before deploying to development and then the production servers. When a new version of the SCALES software is ready to be deployed, the testing tool is used to check the predefined test cases. Any obvious problems will be identified quickly and thus, can be solved faster by the development team. When the automated testing does not identify any rough, obvious errors, the manual testing takes place. Finally, when both testing processes are completed successfully, the new SCALES functionalities are made available to the customers on the production server, allowing for a fast and high-quality delivery.
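The gating logic of this “first line of defense” can be sketched as a simple sequence of checks. The function and stub names below are invented for illustration and do not reflect eguana's actual deployment tooling:

```python
# Deployment gate: automated test cases run first; only when they all
# pass is manual testing requested, and only then does deployment proceed.

def deploy_pipeline(automated_checks, manual_sign_off):
    """automated_checks: list of (name, passed) pairs;
    manual_sign_off: callable returning True once manual testing passes."""
    failures = [name for name, passed in automated_checks if not passed]
    if failures:
        return f"blocked by automated tests: {failures}"
    if not manual_sign_off():
        return "blocked by manual testing"
    return "deployed to production"

checks = [("report-download", True), ("csv-comparison", True)]
print(deploy_pipeline(checks, manual_sign_off=lambda: True))
# → deployed to production
```

Failing fast on the automated stage keeps obvious regressions from ever reaching the manual testers, let alone the production server.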
And if our customers are happy in the end, so are we!
To quote Mahatma Gandhi (who would have thought …):
“The customer is the most important visitor in our house.”
Right he is.
Picture credit: Gerd Altmann on Pixabay
When the sense of direction was handed out, Anna must have just gotten lost. Just as well that her work has only peripherally to do with construction sites – she would probably never find her way back to the office. Instead, the trained journalist diligently writes texts for our homepage and our blog, as well as short personal bios with all the potential to be nominated for a Nobel Prize in Literature.