Thanks to Andrew Amesbury for all his help on this article.

Introduction

Stuff recently began its journey to move the back-end content store away from its legacy content management systems. The broad goal is to develop a content platform that is scalable and reliable, with tooling fit for fast iteration, so that we can build solutions more effectively.

It’s an exciting time - we’re innovating, designing a new domain model, developing tactical solutions to meet our current business needs and delivering toward longer-term strategic solutions. The result will be that we can move faster and deliver systems that evolve with changes in how New Zealanders want to see content.

Sounds big? It is. A decade ago the business had an outsourcing model that meant the product technology team was mostly here to conduct vendor management. The focus then moved to developing on a new-for-the-time CMS, which involved a close relationship with our sister company in Australia.

Unfortunately, over time it became apparent that Fairfax Australia and Fairfax New Zealand had significantly different requirements, which led to a divergence of focus. Since taking over system development and moving it to an in-house model based in New Zealand, we’ve been improving our approach and refining our agile processes for product delivery. That’s where this story begins: the agile test strategy.

Our agile test strategy

Working in a cross-functional team, we all share our areas of expertise. As one of New Zealand’s busiest websites, we need reliable, consistent systems that perform under pressure. We achieve this through collaboration (everyone is responsible for quality), and to enable fast-paced development iterations our strategy is automation first.

Test Strategy Goals

  • Deliver solutions fit for our business need
  • Deliver business value
  • Ensure our software is straightforward to maintain
  • Ensure our systems are scalable and can perform under load

Responsibilities

Quality is our joint responsibility as a team. There’s no fence here to separate the QA team from developers - we work together to find the best solution to our business’ requirements and deliver the tested solution to production.

Test Scope

Mike Cohn’s Test Pyramid, frequently cited as a good practice to follow, became the inspiration for our testing strategy. We used it as a guide for building our own automated test pyramid, which consists of the following four levels:

Testing Pyramid

Unit Testing Level

It’s not realistic to cover everything, so we don’t hold ourselves accountable to goals such as 100% test coverage across all source code; if we did, we’d spend more time writing tests for frameworks than for our own business logic. We don’t put much effort into trivial unit coverage of our source code, such as setters/getters and DTOs without logic. Instead, we aim for 100% non-trivial unit coverage.
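
To make this concrete, here’s a minimal sketch of the kind of non-trivial unit test we’re talking about, using JUnit, Mockito and Hamcrest. The service and repository below are hypothetical, purely for illustration:

    import static org.hamcrest.MatcherAssert.assertThat;
    import static org.hamcrest.Matchers.is;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Hypothetical collaborators, named purely for illustration.
    public class HeadlineServiceTest {

        interface ArticleRepository {
            String findHeadline(long articleId);
        }

        static class HeadlineService {
            private final ArticleRepository repository;

            HeadlineService(ArticleRepository repository) {
                this.repository = repository;
            }

            // Business rule under test: never return a null or untrimmed headline.
            String headlineFor(long articleId) {
                String headline = repository.findHeadline(articleId);
                return headline == null ? "Untitled" : headline.trim();
            }
        }

        @Test
        public void returnsTrimmedHeadlineFromRepository() {
            ArticleRepository repository = mock(ArticleRepository.class);
            when(repository.findHeadline(42L)).thenReturn("  Big news  ");

            HeadlineService service = new HeadlineService(repository);

            assertThat(service.headlineFor(42L), is("Big news"));
        }

        @Test
        public void fallsBackToUntitledWhenHeadlineIsMissing() {
            ArticleRepository repository = mock(ArticleRepository.class);
            when(repository.findHeadline(42L)).thenReturn(null);

            HeadlineService service = new HeadlineService(repository);

            assertThat(service.headlineFor(42L), is("Untitled"));
        }
    }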

Component Integration Testing Level

We aim to ensure that our component’s interface is covered for the possible positive and negative test cases and that it works as specified. Spring Boot Test gives us the power to build these tests with an emphasis on checking that the API inputs and outputs are correct. A component integration test exercises a single artefact in isolation, so where we need to stub calls to other services we use canned responses in WireMock. These tests are faster to write and execute, giving us quick feedback.
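
As a rough sketch of what one of these tests can look like (the endpoint, port and configuration property below are hypothetical, not taken from our actual services), the artefact is started with Spring Boot Test and its outbound calls are pointed at a WireMock server serving a canned response:

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
    import static org.hamcrest.MatcherAssert.assertThat;
    import static org.hamcrest.Matchers.containsString;
    import static org.hamcrest.Matchers.is;

    import com.github.tomakehurst.wiremock.junit.WireMockRule;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.boot.test.web.client.TestRestTemplate;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.test.context.junit4.SpringRunner;

    @RunWith(SpringRunner.class)
    @SpringBootTest(
            webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
            // Hypothetical property: point the artefact's outbound HTTP client at WireMock.
            properties = "content.api.base-url=http://localhost:8089")
    public class ContentApiComponentTest {

        // WireMock stands in for the downstream service on a fixed local port.
        @Rule
        public WireMockRule wireMock = new WireMockRule(8089);

        @Autowired
        private TestRestTemplate restTemplate;

        @Test
        public void returnsArticleAssembledFromDownstreamResponse() {
            // Canned response instead of a real call to the downstream service.
            wireMock.stubFor(get(urlEqualTo("/articles/42"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":42,\"headline\":\"Big news\"}")));

            ResponseEntity<String> response =
                    restTemplate.getForEntity("/api/articles/42", String.class);

            assertThat(response.getStatusCode(), is(HttpStatus.OK));
            assertThat(response.getBody(), containsString("Big news"));
        }
    }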

Acceptance Testing Level

The acceptance test suite as a whole both verifies that the application delivers the business value expected by its consumers and guards against regressions or defects that could break pre-existing functions of the application. These tests also pick up problems not found by unit or component integration tests. Automated acceptance tests give us and our consumers valuable feedback on the quality of key features.

System Integration Testing Level

Beyond component integration, our aim is to develop a system integration test. It’s well known that establishing system integration testing requires resources, such as building and supporting environments to integrate with, and running the tests is itself a time-consuming process. Our system integration test will deploy real components (services and databases) in a docker-compose grid and then execute a series of smoke tests to check that the components connect and integrate as expected. Because we ensure good coverage at the levels described above, we place system integration testing at the top of the pyramid, above the acceptance level. This remains a work in progress, as we can currently rely on strong integration tests, but it is certainly on our radar and the blueprint has been started.
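
As a sketch of the kind of smoke test we have in mind (the service names, ports and health endpoints below are hypothetical stand-ins for the containers in the docker-compose grid), each check simply confirms that a deployed component is up and responding:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    // Hypothetical smoke-test runner: checks that each container in the
    // docker-compose grid answers on its health endpoint before deeper tests run.
    public class SmokeTest {

        private static final List<String> HEALTH_ENDPOINTS = List.of(
                "http://localhost:8080/actuator/health",   // content service (hypothetical)
                "http://localhost:8081/actuator/health");  // asset service (hypothetical)

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            for (String endpoint : HEALTH_ENDPOINTS) {
                HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint)).GET().build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() != 200) {
                    throw new IllegalStateException(
                            "Smoke test failed for " + endpoint + ": HTTP " + response.statusCode());
                }
                System.out.println(endpoint + " is up: " + response.body());
            }
        }
    }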

Environment

Stuff has been hosted in Amazon Web Services for some time, so there was no challenge when we proposed using EC2 Container Service (ECS) to host a cluster in which our applications run as their own Docker containers. We already have an Elastic Container Registry (ECR) to host our Docker images, and we could easily use Relational Database Service (RDS) for our PostgreSQL databases.

In fact, as we already have a team that actively manages our cloud operations, we simply specified our application using CloudFormation, wrote a Dockerfile and began focusing on delivering features.

Being cloud forward gives us a huge amount of flexibility when we need to test our systems as we can develop, test and deploy everything we need from our workstation using Docker without affecting anyone else.

While we do have a staging environment for integrating with legacy systems, we have offset the challenge of handling inter-resource calls to it by defining standard message types for interfacing with our new services. We’re using Amazon SNS and SQS to ensure we have timely, reliable and scalable connectivity between our back-end systems in the cloud.
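
For illustration, publishing one of those standard message types to an SNS topic with the AWS SDK for Java v2 could look something like the sketch below; the topic ARN and the message envelope are hypothetical, not our actual message format:

    import software.amazon.awssdk.services.sns.SnsClient;
    import software.amazon.awssdk.services.sns.model.PublishRequest;

    public class ContentEventPublisher {

        // Hypothetical topic ARN; in practice this comes from configuration.
        private static final String TOPIC_ARN =
                "arn:aws:sns:ap-southeast-2:123456789012:content-events";

        public static void main(String[] args) {
            // Hypothetical standard message: a small JSON envelope with a type and a payload.
            String message = "{"
                    + "\"eventType\":\"ARTICLE_PUBLISHED\","
                    + "\"payload\":{\"articleId\":42}"
                    + "}";

            // Publish the event; subscribed SQS queues receive it for downstream services.
            try (SnsClient sns = SnsClient.create()) {
                sns.publish(PublishRequest.builder()
                        .topicArn(TOPIC_ARN)
                        .message(message)
                        .build());
            }
        }
    }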

Tools

Having reduced our test scope to focus on unit testing, component-level integration tests, acceptance tests and some performance benchmarking, we decided upon the following test tools:

Unit tests

JUnit for unit testing; Mockito for mocks and Hamcrest for assertions

Component Integration tests

Spring-Boot-Test integration test framework, WireMock

Acceptance Tests

Spring-Boot-Test integration test framework, WireMock, Spock (for BDD), Groovy

Performance benchmarking

Gatling for targeted performance benchmarking

Application blueprint

As we develop more services and expand our architecture, it’s important that we maintain a standard tech stack; this provides consistency between projects and reduces the learning curve, so that new projects can get up and running following good practice. To this end, we have developed our own template services with everything from development frameworks (we’re using Spring Boot for Java) to test frameworks, complete with examples.

When a new service is conceived, the process of generating a pipeline, developing a base project, initialising it in git and creating suitable branches is all taken care of for us. This means we can focus on developing a solution instead of composing frameworks from scratch.

An added benefit of developing on a standardised tech stack is that developers can join or move between teams, and the time to become productive is significantly lower.

Many services at scale

At the time of writing, we have not yet begun our foray into streaming technologies, so there remains a dependency on developing suitable point-to-point connectivity between some of our core services.

To combat the potential challenge of developing clients for each service, we’re using Swagger specifications so that internal users of our services can interact with our systems directly via the generated Swagger client.
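
As a rough illustration of the idea, assuming a Springfox-style setup (which may differ from our actual configuration) and a hypothetical controller and model, annotating a Spring endpoint is enough for the Swagger specification, and from it the client, to be generated:

    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical controller: the annotations feed the generated Swagger specification,
    // which internal consumers use to generate a client for this service.
    @Api(tags = "articles")
    @RestController
    public class ArticleController {

        @ApiOperation(value = "Fetch a single article by its id")
        @GetMapping("/api/articles/{id}")
        public Article article(@PathVariable("id") long id) {
            // Lookup omitted; returns a canned article for illustration.
            return new Article(id, "Big news");
        }

        // Minimal response model, purely for illustration.
        public static class Article {
            private final long id;
            private final String headline;

            public Article(long id, String headline) {
                this.id = id;
                this.headline = headline;
            }

            public long getId() {
                return id;
            }

            public String getHeadline() {
                return headline;
            }
        }
    }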

Looking to the future

We’re in a strong position, but we’re aware nothing is ever perfect; we regularly spend time reviewing new approaches, and we’re currently investigating whether we can make use of LocalStack so that we don’t need to rely on third-party tools such as Fakesqs. We have the following areas to work on:

  • Ability to A/B performance test services locally
  • Establishing a release verification process
  • Building out acceptance tests using a domain modeling approach
Yuliya Marholina