Testing Strategy at JULDITEC: Guaranteeing Over 80% Coverage

Miguel Ángel Júlvez

Technical Team

April 16, 2026

8 min read

In enterprise software development, quality is non-negotiable. Production errors not only generate additional costs but also erode customer trust and can compromise critical business operations. That's why at JULDITEC we have built a solid testing strategy that ensures every line of code we deploy meets the highest quality standards.

The absence of a robust testing strategy inevitably leads to a vicious cycle: recurring bugs, emergency patches, accumulated technical debt and, ultimately, an unstable product that generates frustration for both the development team and end users. Our approach is radically different: testing is not a final phase, but a practice integrated into every stage of development.

Our Testing Philosophy: Quality from the First Commit

At JULDITEC we understand testing as an investment, not as a cost. Each automated test is an insurance policy against future regressions, living documentation of the system's expected behavior and a tool that allows our developers to work with confidence.

Our fundamental principles are:

  • Testing as part of development: We don't separate development from testing. Each feature is delivered with its corresponding tests.
  • Total automation: All our tests run automatically on every integration, without manual intervention.
  • Measurable quality: We use objective metrics (code coverage, cyclomatic complexity, duplication) to continuously evaluate and improve our code.

Test Coverage: The 80% Standard

All our projects at JULDITEC exceed 80% test coverage. But what does this metric really mean?

Code coverage measures the percentage of code lines that are executed during automated tests. 80% coverage means that four out of every five lines of code have been validated by at least one test. However, it's not just about reaching a number: it's about ensuring that critical parts of the system are exhaustively tested.

We maintain this standard through:

  • Mandatory code reviews: No merge request is approved without corresponding tests.
  • Continuous analysis: Our CI/CD pipelines automatically reject code that reduces overall coverage.
  • Quality culture: We train our team in testing best practices and TDD (Test-Driven Development).
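As an illustration of how a coverage floor can be enforced mechanically rather than by convention, the sketch below declares an 80% threshold in a Vitest configuration. The file layout and exact numbers are illustrative assumptions, not our actual project configuration:

```typescript
// vitest.config.ts — hypothetical coverage gate; thresholds are illustrative
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'cobertura'],
      // Fail the test run if any metric drops below 80%
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 80,
        statements: 80,
      },
    },
  },
});
```

With thresholds declared in the runner itself, a drop in coverage fails locally and in CI alike, so the 80% standard never depends on someone remembering to check a dashboard.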

Types of Tests We Perform

Our testing strategy is comprehensive and covers multiple levels of validation, from the most granular logic to the complete user experience.

Unit Tests: The Base of the Pyramid

Unit tests validate business logic in isolation, without external dependencies. They are fast to execute and provide immediate feedback.

On the backend, we use JUnit to test services, utilities and domain logic. Each public method of our classes has at least one test that validates its expected behavior and edge cases.

On the frontend, we employ Vitest, an ultra-fast testing framework compatible with Vite, to validate helper functions, custom hooks and component logic. Vitest allows us to run thousands of tests in seconds, maintaining an agile feedback cycle.

// Example of unit test with Vitest
import { describe, it, expect } from 'vitest';
import { formatCurrency } from '@/utils/format';

describe('formatCurrency', () => {
  it('correctly formats euros', () => {
    expect(formatCurrency(1234.56)).toBe('1.234,56 €');
  });
});

Integration Tests: Validating the Complete System

Integration tests verify that different parts of the system work correctly together. We use Playwright, a powerful tool that allows automating real browsers (Chrome, Firefox, Safari) to simulate complete user flows.

With Playwright we validate:

  • Authentication and authorization flows
  • Interaction between frontend and backend through APIs
  • End-to-end business processes (registration, purchase, content management)
  • Behavior across different browsers and devices
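As a sketch of what one of these flows might look like in Playwright, here is a minimal login test. The URL, labels, and credentials are hypothetical placeholders, not taken from a real project:

```typescript
// login.spec.ts — illustrative Playwright flow; URL and selectors are hypothetical
import { test, expect } from '@playwright/test';

test('user can log in and reach the dashboard', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret');
  await page.getByRole('button', { name: 'Sign in' }).click();
  // After the backend authenticates, the app should navigate to the dashboard
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Because Playwright drives a real browser, the same test exercises the frontend, the API layer, and the backend authentication logic in one pass.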

Interaction Tests: Simulating Real Behavior

Interaction tests focus on validating how interface components respond to user actions: clicks, text inputs, navigation between views. These tests run in a controlled environment and allow us to detect usability issues before they reach production.

We integrate these tests with our component system in Storybook, allowing us to validate each variant of each component in isolation.
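In Storybook this kind of test is typically expressed as a play function that drives the component after it renders. A minimal sketch, assuming a hypothetical LoginForm component and the @storybook/test helpers:

```typescript
// LoginForm.stories.ts — hypothetical component; the play function runs
// in the browser (or via the Storybook test runner) after the story renders
import type { Meta, StoryObj } from '@storybook/react';
import { within, userEvent, expect } from '@storybook/test';
import { LoginForm } from './LoginForm';

const meta: Meta<typeof LoginForm> = { component: LoginForm };
export default meta;

export const FilledAndSubmitted: StoryObj<typeof LoginForm> = {
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    // Simulate a real user filling in the form and submitting it
    await userEvent.type(canvas.getByLabelText('Email'), 'user@example.com');
    await userEvent.click(canvas.getByRole('button', { name: 'Sign in' }));
    await expect(canvas.getByText('Signing in…')).toBeInTheDocument();
  },
};
```

The same story then doubles as documentation, an interaction test, and the input for visual testing, which keeps stories and tests from drifting apart.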

Visual Tests: Detecting UI Regressions

Unintended visual changes are a common source of bugs. A CSS adjustment can break a page's design without any functional test detecting it. That's why we use Chromatic, a platform that captures screenshots of each component and automatically detects any visual changes.

Chromatic integrates with our development workflow:

  • Each commit generates captures of all components in Storybook
  • Visual changes are presented for manual review
  • Only after explicit approval are visual references updated

This validation layer has allowed us to detect subtle regressions that would have gone unnoticed in manual reviews.
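Hooking Chromatic into a pipeline usually amounts to one extra CI job. A sketch in the style of our GitLab configuration (the job layout and the project-token variable name are assumptions):

```yaml
# Fragment of .gitlab-ci.yml — illustrative Chromatic job
chromatic:
  stage: test
  script:
    # CHROMATIC_PROJECT_TOKEN is assumed to be defined as a CI/CD variable
    - npx chromatic --project-token=$CHROMATIC_PROJECT_TOKEN --exit-zero-on-changes
```

The `--exit-zero-on-changes` flag lets the pipeline continue while pending visual changes wait for human approval in Chromatic's UI, rather than failing the build outright.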

Accessibility Tests: Inclusion by Design

Accessibility is not optional. All our projects must comply with WCAG 2.1 level AA standards at minimum. We integrate automatic accessibility validations into our testing pipeline:

  • Color contrast validation
  • Correct semantic structure (headings, landmarks)
  • Complete keyboard navigation support
  • Appropriate ARIA attributes
  • Screen reader compatibility

We use tools like axe-core integrated into our Playwright tests to automatically detect accessibility issues.
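A minimal sketch of such a check, combining Playwright with the @axe-core/playwright package (the page URL is a hypothetical placeholder):

```typescript
// a11y.spec.ts — automated axe scan inside a Playwright test; URL is hypothetical
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/');
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // restrict the scan to WCAG 2.x A/AA rules
    .analyze();
  // Fail the test if axe reports any violation on the rendered page
  expect(results.violations).toEqual([]);
});
```

Automated scans like this catch roughly the machine-detectable subset of WCAG issues; they complement, rather than replace, manual testing with keyboard and screen readers.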


Automation in GitLab: Continuous Testing

Our entire testing strategy runs automatically in GitLab CI/CD. Each push to any branch triggers a pipeline that:

  • Runs all unit tests (backend and frontend)
  • Validates code coverage and fails the pipeline if it's below the established threshold
  • Runs integration tests in isolated environments
  • Generates and compares visual captures with Chromatic
  • Validates accessibility in critical components
  • Analyzes code with Sonar

No merge to the main branch is approved without all these validations passing successfully. This approach allows us to detect and correct errors in minutes, not days or weeks.

# Fragment of .gitlab-ci.yml
test:unit:
  stage: test
  script:
    - npm run test:unit -- --coverage
  coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

Quality Control with Sonar

We complement our automated tests with static code analysis using SonarQube. This tool centralizes all quality metrics of our projects:

  • Test coverage: We validate that it stays above 80%
  • Technical debt: We quantify the effort needed to resolve quality issues
  • Code smells: We detect problematic patterns that could generate future bugs
  • Duplication: We identify duplicated code that should be refactored
  • Security vulnerabilities: We analyze dependencies and code for known security issues

Sonar acts as a Quality Gate: if a merge request introduces significant technical debt or reduces coverage, the pipeline fails automatically and the code cannot be integrated.
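A quality gate like this is typically driven by a small project-level configuration that tells the scanner where sources, tests, and coverage reports live. A sketch of a sonar-project.properties file (the keys are standard scanner properties; the project key and paths are illustrative assumptions):

```properties
# sonar-project.properties — illustrative; project key and paths are assumptions
sonar.projectKey=julditec-example
sonar.sources=src
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts,**/*.test.ts
# Coverage report produced by the unit-test job
sonar.javascript.lcov.reportPaths=coverage/lcov.info
# Make the scanner wait for the Quality Gate result and fail the pipeline if it fails
sonar.qualitygate.wait=true
```

Setting `sonar.qualitygate.wait=true` is what turns the analysis into a hard gate: the CI job blocks until SonarQube evaluates the gate and exits non-zero if it does not pass.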


Tangible Benefits for Our Clients

This investment in quality translates into concrete and measurable benefits for our clients:

  • Fewer production errors: Bugs are detected and corrected before reaching end users.
  • Greater stability: Systems are predictable and reliable, even after major updates.
  • Long-term cost reduction: Fixing a bug in development costs a fraction of what it costs to fix it in production.
  • Confidence in every deployment: Our clients can launch new features without fear of breaking what already works.
  • Maintainability: Tests act as living documentation, facilitating the onboarding of new developers and system evolution.

Conclusion: Testing as a Competitive Advantage

At JULDITEC, testing is not a checklist to complete, but a philosophy that permeates every technical decision. Our comprehensive strategy—combining unit, integration, visual and accessibility tests, all automated and continuously monitored—allows us to consistently deliver exceptional quality software.

Over 80% coverage is not just a metric: it's a promise of quality, a commitment to technical excellence and a guarantee that every project we deliver is built on solid foundations.

If you're looking for a technology partner that not only develops software, but does so with the highest quality standards, let's talk. At JULDITEC, quality is non-negotiable.

Tags: automation, software quality, playwright, sonarqube, testing, ci-cd, code coverage