“Pepe, your testing has failed,” reports the team leader.
Pepe checks again but can’t figure out where the error is. Pressed for time, or who knows why, he deploys the function to QA as it is: raw and lacking unit tests. Unsurprisingly, everything breaks down; the application is scary even to look at. What happens next is that the quality analyst begins to test and promptly hits an appalling 500 error.
The aim of unit testing is to check early on how a given area of the code behaves in relation to other parts, so that later testing stages, shared by developers and quality analysts alike, can build on it. While the quality analyst is responsible for the quality of the product itself, the developer is responsible for the quality and optimization of the code.
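As a minimal sketch of the idea (the function and its tests are hypothetical, not from Pepe’s project), a unit test exercises one piece of code in isolation so that problems surface before the code ever reaches QA:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        # Catching bad input here is far cheaper than a 500 error in QA.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running these with `python -m unittest` takes seconds, and a failing case points straight at the unit that broke instead of leaving the quality analyst staring at a stack trace.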
This testing stage is less a requirement or a mandatory step in the development cycle than a standard the team sets for itself in order to work with code that’s more reliable and easier to maintain. The convention may or may not be established; sometimes it simply happens.
Why is it that unit testing is a convention and not a habit?
Pepe would say that “we have deadlines to meet”; others would claim it wasn’t included as a requirement. The truth is that the dynamics of most projects (in the real world, no one can afford to code or test the way they would like to) make us treat best practices as something we’d be happy to implement if only we had the time.
Does that add value to the product and the team? From the programmer’s point of view, the advantages are well known: unit testing helps to better understand the code and the business logic, reduces future integration problems, and promotes refactoring and optimization, among other things. But specifically, how does this process affect the quality analyst’s workload, the delivery, and even the team’s organization?
An Embedded Quality Analyst
A programmer carries out a series of tests, including unit tests, and makes sure the code has been polished enough to guarantee a certain stability. That means the code will work as predicted, even though it probably still lacks a few validations.
After this process, the code falls into the hands of the quality analyst, who runs a test called pre-integration (not in the strict technical sense of the term). It gets that name because the testing is done in the development environment, before the code is merged into a unified branch.
The quality analyst who follows this methodology is what we in our Quality Brigade have come to call an “embedded quality analyst”: someone with access to the code who knows how to use Git to check out and test a functional unit, avoiding as many bugs and nasty last-minute surprises as possible, the kind that usually show up during the demo.
How Can We Help Pepe?
The quality leaders at intive-FDV’s Quality Brigades stand by this procedure because it helps to find critical issues in the early stages of the cycle, but also to “clean and polish” the code, gather the application’s first feedback, prioritize, and devote time to the complex parts. In short, it gives the team an early diagnosis with which to better manage the organization’s and the client’s expectations.
Pre-integration is somewhat similar to the Test-Driven Development methodology in that it oversees the quality of the software from a general, agile, practical, and management perspective instead of monitoring the technical aspect only.
If Pepe happens to be your colleague, please urge him to read this, because he may not be aware of the multiple benefits this paradigm has to offer, or may not know how to put it into practice. Or maybe he just needs to join the intive-FDV crew.