A Proposed Recipe for Designing, Building and Testing Microservices

Daniel Bryant

Here at SpectoLabs HQ we have been working alongside our friends at OpenCredo to help several organisations build and deploy microservice-based applications. Most of these organisations are looking to migrate away from (or augment) their current monolithic application, and we’ve also been involved with a few greenfield prototypes.

We’ve learned lots along the way, and today we are keen to share our findings on how to design, build and test microservice-based systems.

Our approach

Broadly speaking we take the following high-level approach to designing and implementing microservice-based systems:

1. Design the system: Determine service boundaries

a. Often the completion of elements from step 2c (“Three Amigos”) and step 6 (end-to-end acceptance tests based on our core user journeys) is needed to drive the overall system design, as developing an understanding of the application/system user journeys is essential for building something that actually delivers business value.

i. On a related note, if you are migrating from a monolith please do ensure that you have a specification (ideally acceptance tests) before you begin the migration, as it is very difficult to build (or re-architect) something that establishes parity with an existing system (the phrase “just make it do what the old system did” always makes us shudder)

b. If we are working with a current monolithic application, the first step is to identify the cohesive areas of business functionality within the existing system. Following domain-driven design (DDD) nomenclature, these areas of business functionality are called bounded contexts

c. When working with a greenfield application the process used to identify the bounded contexts is similar to that in (b), but with the added challenge that the business functionality/entities are not yet fully defined (or understood). Because of this, some people argue that you shouldn’t start building an application with microservices, but we’ll leave that argument for another day

d. Taking our cues from Simon Brown, we’re fans of just enough upfront design and therefore there is an argument that the entire system doesn’t need to be designed before the other steps below can begin. All of the steps presented here are typically worked on in an iterative fashion. For example, during step 2 we often discover that a proposed service’s initial scope is too big or too small, and so we change the system design accordingly

e. This step can take some time, but the output is typically a context map which represents the first pass at defining the application service boundaries

2. Design the service APIs: Determine service functionality

a. Once service boundaries have been defined, we can work with the relevant business owners, domain experts and the development team to define service functionality and the associated interfaces - the Application Programming Interfaces (APIs)

b. We can try to define all the service APIs upfront, but in reality the process of designing the services will often be undertaken in turn (with the associated development occurring after each service is designed), in groups of related functionality, or in parallel with other steps in this list (in particular steps 1 and 3)

c. We like using the behaviour-driven development (BDD) technique named “the Three Amigos”, and see a lot of value in ‘shifting left’ the QA team to work alongside business stakeholders and developers to define requirements

d. The typical outputs from this step include: a series of BDD-style acceptance tests that assert component-level (single microservice) requirements, for example acceptance test scripts written in Cucumber’s Gherkin syntax; and an API specification, for example a Swagger or RAML file, which the test scripts will operate against
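
To make this a little more concrete, below is a minimal (and entirely hypothetical) sketch of how a Gherkin scenario might map onto executable Cucumber-JVM step definitions that drive a service API via REST-assured. The booking endpoint, port and step wording are invented for illustration, and package names differ between Cucumber-JVM and REST-assured versions.

```java
// Hypothetical step definitions for a "retrieve a booking" scenario. The endpoint,
// port and step wording are invented for this sketch, and test-data seeding is elided.
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import io.restassured.response.Response;

import static io.restassured.RestAssured.given;

public class BookingStepDefinitions {

    private Response response;

    @Given("^a booking with id (\\d+) exists$")
    public void aBookingWithIdExists(int bookingId) {
        // Seed the service's test data store here (elided in this sketch)
    }

    @When("^the client requests booking (\\d+)$")
    public void theClientRequestsBooking(int bookingId) {
        response = given().baseUri("http://localhost:8080")
                          .when().get("/bookings/" + bookingId);
    }

    @Then("^the booking is returned with status (\\d+)$")
    public void theBookingIsReturnedWithStatus(int expectedStatus) {
        // Further assertions against fields defined in the API specification would go here
        response.then().statusCode(expectedStatus);
    }
}
```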

3. Build services outside-in

a. Now that we have our API specification and the associated (service-level) business requirements, we can begin building the service functionality outside-in!

b. Following Toby Clemson’s excellent article on microservice testing, this is where we use both integration testing and unit testing (both sociable and solitary), frequently using a double-loop TDD approach

c. Frequently, when building complex functionality you will have to integrate with other services, both internal (controlled by you) and external (owned by third parties), and for this we typically use tooling like Tom Akehurst’s WireMock or our open source Hoverfly service virtualisation tool to simulate the associated service interface (a brief sketch follows this step)

d. Steps 3 and 4 often occur iteratively, but the output from this step is an (increasing) series of services that provide well-tested functionality
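
As an illustration of the kind of simulation mentioned in 3c, the sketch below stubs a downstream ‘fares’ dependency with WireMock (2.x-style API). The endpoint, port and payload are invented; in a real integration test your own client code would make the request against the stub, whereas here we call it directly with REST-assured just to keep the sketch self-contained.

```java
// A minimal sketch of simulating a downstream dependency with WireMock (2.x style).
// The '/fares' endpoint, port and payload are invented for illustration.
import org.junit.Rule;
import org.junit.Test;
import com.github.tomakehurst.wiremock.junit.WireMockRule;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class FareServiceStubIntegrationTest {

    // Starts an in-process WireMock server on port 8089 around each test
    @Rule
    public WireMockRule fareService = new WireMockRule(8089);

    @Test
    public void stubbedFareServiceReturnsCannedResponse() {
        fareService.stubFor(get(urlEqualTo("/fares/LHR-JFK"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"price\": \"420.00\", \"currency\": \"GBP\"}")));

        // In a real test this request would be made by the service code under test
        given().when().get("http://localhost:8089/fares/LHR-JFK")
               .then().statusCode(200)
               .body("currency", equalTo("GBP"));
    }
}
```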

4. Component test

a. In combination with building a service outside-in, we also work on component-level testing. This differs from the integration testing mentioned in 3b in that component testing operates via the public API and tests an entire slice of business functionality. Typically the first wave of component tests utilises the acceptance test scripts we defined in 2c, and these assert that we have implemented the business functionality correctly within this service

b. We also like to test non-functional requirements (NFRs), which we prefer to call ‘cross-functional requirements’, within this step. Examples of these tests include:

i. Performance testing of a series of core happy paths offered by the service. We typically use JMeter (often triggered via the Jenkins Performance Plugin) or Gatling (often run via flood.io)

ii. Basic security testing using a framework like Continuum Security’s bdd-security, which includes the awesome OWASP ZAP

iii. Fault-tolerance testing, where we deterministically simulate failures and increased response latency from dependent internal/external services using Hoverfly and associated middleware (and in the past, Saboteur)

iv. Visibility testing, which asserts that the service offers the expected endpoints for metrics and health checks, and can also assert that logging and alerting have been configured correctly. We typically use tools like REST-assured to assert that these endpoints behave as expected (a short sketch follows at the end of this step)

c. Referencing Toby Clemson’s work again, we like to test at the component level using both “in-process” (for quick iterations) and “out-of-process” (for more realistic deployment-style tests)

i. “In-process” testing means that the entire test and the service under test run in a single process. In order to allow a complete slice of business functionality to be tested, a service will often rely on some other external component, be that another internal service, a third-party external service, a data store or a messaging solution.

1. For internal and external services we typically use our open source Hoverfly service virtualiser, executed via the Hoverfly JUnit rule

2. For data stores we typically use in-memory solutions like HSQLDB or Chris Batey’s Stubbed-Cassandra

3. For messaging solutions we use Apache Qpid as an embedded AMQP broker; many commercial offerings, such as AWS SQS, either offer a ‘mock’ mode or have open source stand-ins like FakeSQS

d. The output of this step is a series of services that have both their business functionality and cross-functional requirements validated via a robust continuous delivery build pipeline
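
To give a flavour of the component tests described above, here is a minimal in-process-style sketch that drives a (hypothetical) booking service through its public API with REST-assured, and also performs the ‘visibility’ checks from 4b. The base URI, endpoints and JSON fields are assumptions for illustration only, and the wiring of the in-memory data store and Hoverfly simulations is elided.

```java
// A minimal component-test sketch that exercises a service through its public API.
// The base URI, endpoints and JSON fields are assumptions for illustration only.
import org.junit.BeforeClass;
import org.junit.Test;
import io.restassured.RestAssured;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class BookingServiceComponentTest {

    @BeforeClass
    public static void configure() {
        // In an "in-process" run the service would be started in the same JVM,
        // with Hoverfly simulating collaborators and an in-memory store (e.g. HSQLDB)
        // standing in for the database; that wiring is elided here.
        RestAssured.baseURI = "http://localhost";
        RestAssured.port = 8080;
    }

    @Test
    public void createsAndRetrievesABooking() {
        // Exercise a full slice of business functionality via the public API
        String bookingId =
            given().contentType("application/json")
                   .body("{\"passenger\": \"Jane\", \"flight\": \"LHR-JFK\"}")
            .when().post("/bookings")
            .then().statusCode(201)
            .extract().path("id");

        given().when().get("/bookings/" + bookingId)
               .then().statusCode(200)
               .body("passenger", equalTo("Jane"));
    }

    @Test
    public void exposesHealthAndMetricsEndpoints() {
        // 'Visibility' checks: the operational endpoints are present and respond
        given().when().get("/health").then().statusCode(200);
        given().when().get("/metrics").then().statusCode(200);
    }
}
```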

5. Contract test: Verify the component interactions

a. At this step of the process we are looking to verify the proposed interactions between components. We assume that services are correctly providing the functionality they offer via their APIs (which was asserted in step 4)

b. A popular approach for this in the microservice world is consumer-driven contracts, which can be implemented using frameworks like the Ruby-based Pact (and the associated Pact-JVM and Pact-Go), Pacto, or the Node.js consumer-contracts (a consumer-side sketch follows this step)

c. There is no denying that contract testing is very valuable, but on some projects we have found the overhead of maintaining (and running) these tests too high in relation to the guarantees they provide over and above our E2E tests (defined in step 6).

d. Our suspicion is that contract testing will become more valuable as we deal with ever-more complex systems that have multiple cohesive/localised bundles of functionality, or as we experiment with multiple distributed teams implementing microservices working primarily from the API specification and acceptance tests

e. Outputs from this step include a series of provider and consumer contracts (‘pacts’) that can be cross-validated as part of a continuous delivery build pipeline run
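
For readers unfamiliar with consumer-driven contracts, the sketch below shows roughly what a consumer-side test looks like with the Pact-JVM JUnit consumer DSL. The provider/consumer names, endpoint and payload are invented, and the exact class and package names vary between Pact-JVM versions.

```java
// A consumer-driven contract sketch using the Pact-JVM JUnit consumer DSL.
// Names, paths and payloads are illustrative; class names vary between Pact-JVM versions.
import org.junit.Rule;
import org.junit.Test;
import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.PactProviderRuleMk2;
import au.com.dius.pact.consumer.PactVerification;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.model.RequestResponsePact;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class BookingConsumerPactTest {

    // Starts a mock provider that records and verifies the expected interactions
    @Rule
    public PactProviderRuleMk2 bookingProvider =
            new PactProviderRuleMk2("booking-service", "localhost", 8080, this);

    @Pact(provider = "booking-service", consumer = "payment-portal")
    public RequestResponsePact bookingExists(PactDslWithProvider builder) {
        return builder
                .given("a booking with id 1 exists")
                .uponReceiving("a request for booking 1")
                    .path("/bookings/1")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body("{\"id\": \"1\", \"passenger\": \"Jane\"}")
                .toPact();
    }

    @Test
    @PactVerification("booking-service")
    public void consumerCanProcessBookingRepresentation() {
        // In a real test the consumer's own HTTP client code would make this call
        given().when().get(bookingProvider.getUrl() + "/bookings/1")
               .then().statusCode(200)
               .body("id", equalTo("1"));
    }
}
```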

6. End-to-end (E2E) tests: Asserting system-level business functionality and NFRs

a. Work on this final step of the design and test process can be started even before step 1, as automated E2E tests essentially assert core user journeys and application functionality (and prevent regression)

b. We typically also test non-functional/cross-functional requirements, as defined in 4b above, on core ‘happy path’ user journeys through the system. For example, asserting that all critical business journeys are working, that they respond within a certain time, and that they are secure

c. When E2E tests touch external systems or internal systems that are not available or are unreliable (e.g. a mainframe-based service), we often use Hoverfly to simulate the API. This has the added benefit of additional control, in that we can simulate increased latency or deterministically simulate failure scenarios using Hoverfly middleware (a minimal sketch follows this step)

d. All the systems we have worked on also have some degree of manual testing, which ranges from verifying that new functionality works as expected and that the UX of new UI or API elements is acceptable, through to in-depth penetration/security testing

e. Outputs from this step of the process should include: a correctly functioning and robust system, the automated validation of the system, and happy customers!
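
As an example of the simulation described in 6c, the sketch below uses the Hoverfly Java JUnit rule to answer for an unreliable legacy dependency during an E2E run. The hostnames, paths, payloads and entry point are invented, the DSL class names may differ across Hoverfly Java versions, and an out-of-process system under test would need to be configured to route its HTTP traffic through Hoverfly as a proxy.

```java
// A minimal sketch of simulating an unreliable legacy dependency during an E2E run
// with the Hoverfly Java JUnit rule. Hostnames, paths and payloads are invented, and
// the system under test is assumed to use Hoverfly as its HTTP proxy.
import org.junit.ClassRule;
import org.junit.Test;
import io.specto.hoverfly.junit.rule.HoverflyRule;

import static io.specto.hoverfly.junit.core.SimulationSource.dsl;
import static io.specto.hoverfly.junit.dsl.HoverflyDsl.service;
import static io.specto.hoverfly.junit.dsl.ResponseCreators.success;
import static io.restassured.RestAssured.given;

public class JourneyWithSimulatedMainframeE2ETest {

    // Hoverfly answers for the legacy system so the E2E run is deterministic
    @ClassRule
    public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(
            service("legacy-mainframe.internal")
                    .get("/accounts/42")
                    .willReturn(success("{\"accountId\": \"42\", \"status\": \"ACTIVE\"}",
                                        "application/json"))));

    @Test
    public void coreJourneyStillWorksWhenLegacySystemIsSimulated() {
        // Drive the system under test through its public entry point as usual;
        // its calls to legacy-mainframe.internal are served by the Hoverfly simulation
        given().when().get("http://localhost:8080/customers/42/summary")
               .then().statusCode(200);
    }
}
```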

Key assumptions: You have a fully-functional build pipeline

Finally, we’re also keen to mention that core practices like continuous delivery, automated environment provisioning, and monitoring/alerting are essential for the successful delivery of microservices (as argued in Martin Fowler’s “Microservice Prerequisites”). We’ll also caution that handling data within a microservice-based application brings its own challenges, particularly when migrating from a monolith (see Christian Posta’s “The Hardest Part About Microservices: Your Data”).

Classic QA maxims like the test pyramid and agile testing quadrants should also be used to guide how many tests (and how much effort) should be put into each step, and we strongly recommend reading the great work of Lisa Crispin and Janet Gregory.

Parting thoughts

The approach documented above is very much work-in-progress, and was heavily influenced by Toby Clemson’s original microservice testing work. We wanted to publish this to share our ideas and start a conversation on whether this is the best approach.

At SpectoLabs we are working on creating both open source and commercial tooling to help organisations develop, test and manage microservice-based applications, and we would be very keen to hear about your current microservice testing challenges.

As mentioned in the article above, we have already released our open source Hoverfly service virtualisation/API simulation tool, and we are receiving some great feedback about how people are using this across the entire test life cycle. Please do drop us a line if you are using Hoverfly - we’re always interested in how people are using this application!