
How to Organize the Testing Process and Ensure the Highest Product Quality

Technology
Nick Motaev
8 min

At each stage of a company's growth, the requirements for quality assurance and testing can vary significantly, because the tasks, the goals, and the means of achieving them differ.

In this article, we aim to share Wizart’s experience in organizing quality assurance and testing processes. We will delve into how we structured these processes and the tools we selected.

The testing process at different company stages

As they develop, companies pass through several stages: startup, small business, medium-sized business, and large company.

During the initial stages of product development, there isn't always a need for substantial spending on testing. The primary focus is to deliver a functional prototype and a product that can be shown to investors, so that sales can begin.

For medium-sized companies, distinct requirements for quality and stability emerge. As the client base expands, ensuring the current product's quality becomes increasingly critical. Customers expect a reliable product, and any downtime, regardless of the cause, translates into potential profit loss.

As the code base expands, tracking all the possible relationships between system modules and monitoring feature behavior becomes more challenging. The lack of tests also constrains refactoring: it becomes notably more costly, and predicting where issues may arise or how functionality might be affected becomes problematic.

This is the right moment to consider strategies for enhancing product stability and making refactoring less burdensome. The aim is to decrease the time spent on feature checks while providing customers with a highly dependable service, especially since any service malfunction means customer frustration and potential disappointment in the product.

Developing the strategy

So, the system needs comprehensive testing. To start, we must determine how we plan to ensure the stability and quality of the product. We have specific resources at our disposal: programmers, QA specialists, and DevOps experts. Each department will be accountable for specific processes.

Programmers’ tasks

Let's begin with programmers.

During the development stage, it's crucial to ensure that new functionality doesn't conflict with existing code and that refactoring doesn't disrupt the system's integrity. For this we need unit, functional, and UI tests. We build on the Laravel framework, which has solid built-in support for all of these test types, and we write the tests themselves with PHPUnit.

In our company, we decided that prioritizing functional tests over unit tests would be more efficient. This strategy saves time by testing all components at once through their public interfaces. We reserve unit tests for cases where a large space of input data and its invariants must be validated.
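
To make this concrete, here is a minimal sketch of such a functional test in Laravel/PHPUnit; the /api/products endpoint, the test class name, and the response fields are hypothetical rather than Wizart's actual API:

```php
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class ProductCatalogTest extends TestCase
{
    use RefreshDatabase;

    public function test_catalog_returns_seeded_products(): void
    {
        // Populate the test database with fixture data (seeders are shown below).
        $this->seed();

        // One request exercises the whole stack through its public HTTP
        // interface: routing, middleware, controller, model, serialization.
        $this->getJson('/api/products')
             ->assertOk()
             ->assertJsonStructure(['data' => ['*' => ['id', 'name']]]);
    }
}
```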

To support this approach, we make full use of the test database through the seeding mechanism: we reset the database, run all migrations, and populate it with test data, all inside dedicated test Docker containers. This has proven quite advantageous, allowing us to test against data closely resembling real-world usage and to verify the migrations themselves along the way.
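
The seeding side might look like the sketch below, assuming a standard Laravel seeder backed by a model factory (the Product model is again illustrative). Inside the test container, a full reset then comes down to a single php artisan migrate:fresh --seed:

```php
<?php

namespace Database\Seeders;

use App\Models\Product;
use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    public function run(): void
    {
        // Generate records shaped like real production content, so tests
        // and migrations run against data close to real-world usage.
        Product::factory()->count(100)->create();
    }
}
```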

QA tasks

For QA specialists, our goal is to maximize automation in UI, API, and acceptance testing.

To reduce routine tasks and speed up testing, we opted for full API test automation. The Postman service played a pivotal role, serving as the foundation for scripting all API tests, which are then automated by our DevOps team.
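
Postman tests are written in its built-in JavaScript sandbox; a minimal sketch of such a check might look like the snippet below (the response shape is illustrative). Collections of these requests can then be run headlessly in CI, for example with Newman, Postman's command-line runner:

```javascript
// Runs after a request such as "GET /api/products" completes.
pm.test("response is successful", function () {
    pm.response.to.have.status(200);
});

pm.test("payload contains a product list", function () {
    const body = pm.response.json();
    pm.expect(body.data).to.be.an("array");
    pm.expect(body.data[0]).to.have.property("id");
});
```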

Moreover, we wanted to cover our entire UI with Selenium-driven automation. However, a dilemma emerged: while we must ensure our UI works stably, our dynamic development pace results in frequent UI changes. Recognizing the challenge, we made a pragmatic choice and focused automated testing on key elements and components that change the least, such as opening the main fitting room page, selecting a room, and displaying the overlay.
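
As a sketch of what one of these checks could look like, here is a hypothetical scenario using Selenium's PHP bindings (the php-webdriver package), keeping the language consistent with the rest of the stack; the URL and CSS selectors are invented for illustration:

```php
<?php

use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;
use Facebook\WebDriver\WebDriverExpectedCondition;

// Connect to a Selenium server, e.g. a selenium/standalone-chrome container.
$driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::chrome());

try {
    // Open the main fitting room page (hypothetical URL).
    $driver->get('https://example.com/fitting-room');

    // Wait until the room selector renders, then pick the first room.
    $driver->wait(10)->until(
        WebDriverExpectedCondition::visibilityOfElementLocated(
            WebDriverBy::cssSelector('.room-thumbnail')
        )
    );
    $driver->findElement(WebDriverBy::cssSelector('.room-thumbnail'))->click();

    // The overlay must appear once a room is chosen.
    $driver->wait(10)->until(
        WebDriverExpectedCondition::visibilityOfElementLocated(
            WebDriverBy::cssSelector('.overlay')
        )
    );
} finally {
    $driver->quit();
}
```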

DevOps tasks

Within the DevOps department, we have assigned responsibility for launching test automation, managing test queues, and applying patterns across all environments.

Test decomposition

As described above, we have a number of different tests to automate. For clarity, we will assign them shorthand labels that we will use throughout the rest of the article.

  • T1 - Unit / functional tests. Written in-house by programmers within each service, using the seeding and content-generation mechanisms.
  • T2 - UI tests / autotests. Automated tests of UI components and simulation of user actions through the UI.
  • T3 - Acceptance public API tests. Tests to verify the health of our public API.
  • T4 - Acceptance private API tests. Tests to verify the health of our private API.
  • TCP - Critical path tests. Quick tests covering the main (critical) points of fitting room operation. This category collects the most important checks of our PIM system and fitting room from T2, T3, and T4.
  • MD - Mess detector. Identifies problematic areas within our codebase.
  • CSC - Code sniffer checkers. Verify that the code conforms to the accepted coding standard.

Environments

To thoroughly test our services, API, and fitting room, we have established three environments at Wizart:

  1. Development Environment - A closed space exclusively for developers. It hosts the most recent feature code. Following the Git Flow approach, we automatically deploy all changes committed to the origin/develop branch to this environment; by 'deploy' we mean a continuous delivery (CD) release.
  2. Staging Environment - Designed to closely resemble our production setup, this environment is essentially a complete replica of our live system. Here, we deploy changes from either the origin/release/v{release version} branch or the origin/hotfix/v{fixed release version} branch.
  3. Production Environment - We exclusively deploy highly stable code from the origin/master branch.

Test management

Now, let's delve into the most intriguing aspect: how did we structure our testing processes at Wizart?

Naturally, the foremost requirement was complete automation of the procedures via CI/CD. For this purpose we opted for the Jenkins automation server, as it was the tool our team knew best. We organized the entire framework around our environments, as depicted in the image below:

image-20220704-080639.png
The overall pipeline structure across our environments

With this in mind, let's delve into each environment using our PIM service as a case study. However, before we proceed, let's introduce some shorthand notations for the deployment process:

  • DD: Deploy to Develop
  • CD: Check the Develop state
  • DS: Deploy to Staging
  • CS: Check the Staging
  • DP: Deploy to Production

The development environment

In theory, the deployment process in Development should follow this outline:

  1. Developers work on features in distinct origin/feature/{feature name} branches, accompanied by the concurrent development of tests (unit tests, functional tests).
  2. After this, a merge request (MR) is initiated for the Code Review phase.
  3. Simultaneously with MR creation, a build is triggered, subjecting the code to scrutiny by the Mess Detector and Code Sniffer.

We created this mechanism for instances when someone inadvertently disables the checkbox before committing to the repository. Regrettably, such situations occur occasionally, so this build promptly flags the offending feature branches and allows them to be fixed quickly.

testing_organization-MD CS.jpg
MD/CS pipeline

After the code review process, all modifications are integrated into the origin/develop branch, starting the automated project build process in Jenkins. The project's build procedure is illustrated in the image below:

develop.jpg
The Develop Build pipeline

As you can see, we initiate the process by running all of the build's internal tests. They run in isolated Docker containers, with the database populated with generated data. This step ensures that the code remains free from bugs and that the logic governing module interactions functions as intended.

After the T1 and T2 stages complete successfully, we can confidently progress to building the project and deploying it to our Develop environment. The initial three stages are extremely important and are marked in red: a failure in any of them stops the build process.

Then we move to the final step: the TCP critical path tests, which evaluate the stability of the fully assembled system, including all of its internal services. This stage is marked in green because a failure here doesn't cancel the build; rather, it indicates the need for further investigation. In a distributed system, a failure can stem from any service and isn't necessarily tied to the current build. However, such a setback still signals the presence of a bug requiring resolution.

Given that T3 and T4 tests tend to be time-intensive, taking about 30 minutes in our case, we run them in a single scheduled build. Even with their extended runtime, this surfaces potential issues in the services in a sufficiently timely manner.

The staging environment

As we mentioned before, this environment closely mirrors the production setup. To achieve this, we employ comparable server and configuration settings for our internal services.

According to the Git Flow approach, we integrate code here from the origin/hotfix/{build_version} and origin/release/{build_version} branches. These branches already contain code that has been tested and validated.

Nonetheless, we must ensure that our code functions seamlessly with the production environment configurations: variables may be inadvertently omitted, or environment-specific nuances overlooked. At the same time, we've concluded that running style checks here is unnecessary, as such errors should already have been identified and fixed during development. We do, however, run the T3 and T4 tests manually, because the Staging environment isn't continuously operational. Here is the build plan for our Staging environment:

testing_organization-Staging.jpg
The Staging build pipeline

As you can see, the process isn't significantly different from the build in the Development environment. In this scenario, each stage must be completed without errors for the build to be deemed successful.

Once the build is compiled, we manually initiate the T3 and T4 tests. Only after these tests pass and the QA team has thoroughly verified all features can we confidently declare our readiness for a new release.

testing_organization-T3-T4-QA.jpg
Actions after deployment

The production environment

Having completed the staging phase, tested the code, and checked features, we can move forward to the application's release preparation.

All the modifications are merged into the origin/master branch and marked with a release tag. After this, an automated release build starts.

This stage is the most critical one, demanding both a high level of responsibility and swift execution. Given these factors, we have eliminated the preliminary testing stages: the application has already been tested with production-like settings during the staging phase. Our primary goal is to deploy our services with minimal downtime. Presented below is a schematic representation of our production build process:

testing_organization-Production.jpg
The Production Pipeline

As you can see, our process starts by verifying the Staging environment's build status. Upon a successful staging build, we initiate the service build for production, followed by the execution of all subsequent tests. Note that the initial two steps are critical: a malfunction during either of them halts the build process.

The following stages, however, have no impact on the build outcome; they serve to inform us whether we should be investigating potential issues within the system or celebrating the successful new release.

Errors encountered during the final build stage typically indicate the need for a hotfix. Nonetheless, the thorough testing conducted during the Staging environment phase has thus far helped us avert issues during the concluding build stages.

testing_organization-All.jpg

Bringing all our environments together yields the diagram above. This approach provides us with a high degree of confidence in the testing process and significantly minimizes the risk of critical bugs.