Is Service Virtualisation Useful for Simulating Data Stores and Messaging Middleware?
This is a good question, and the honest answer is that as a startup we have to choose carefully where to apply our development resources, and adding support for each additional protocol is non-trivial at the code level. However, this is only part of the story: in the microservice systems we have implemented alongside our early customers, we have often found that virtualising non-HTTP resources was not as vital (or as beneficial) as initially thought.
Antipattern: When service virtualisation becomes stubbing
Many of the commercial service virtualisation (SV) tools offer support for JDBC and other data store protocols, but we have often seen the virtual request/response pairs effectively become stub data, because a simple strict-matching policy is enforced on the associated binary data.
The SV tools then act as (potentially costly) stubbing tools, and much of the power of SV is lost: there is no partial request matching, no token replacement (e.g. for dates and times), and no dynamic response generation. This is rather like “using a Porsche to go grocery shopping”, when in reality it would be more practical to use a tool better suited (and more cost-effective) for the testing task at hand.
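To illustrate why strict matching reduces SV to brittle stubbing, here is a minimal sketch (the request data and matcher functions are entirely hypothetical, not any real SV tool’s API). A recorded request containing a volatile field such as a timestamp will never be matched again under exact matching, whereas a partial matcher that ignores volatile fields still finds the recorded response:

```python
# Sketch: why strict request matching turns SV into brittle stubbing.
# The recorded pair, matchers and fields below are illustrative only.

import json

# A "recorded" request/response pair, keyed on the exact body string.
recorded = {
    ("POST", "/orders",
     json.dumps({"item": "book", "requested_at": "2016-05-01T10:00:00Z"})):
        {"status": 201, "body": {"order_id": 42}},
}

def strict_match(method, path, body):
    """Exact-match lookup: any difference in the body misses."""
    return recorded.get((method, path, body))

def partial_match(method, path, body):
    """Match on method/path and the stable fields only, ignoring
    volatile tokens such as timestamps."""
    live = json.loads(body)
    for (m, p, stored_body), response in recorded.items():
        stored = json.loads(stored_body)
        if (m, p) == (method, path) and stored["item"] == live["item"]:
            return response
    return None

# A replayed request made later: only the timestamp differs.
live_body = json.dumps(
    {"item": "book", "requested_at": "2016-06-02T09:30:00Z"})

print(strict_match("POST", "/orders", live_body))   # None: the strict match misses
print(partial_match("POST", "/orders", live_body))  # the recorded response is found
```

When every response must be looked up by exact bytes, each new timestamp, ID or token requires re-recording the data, which is exactly the stub-maintenance burden SV is supposed to remove.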
Obviously the test data management features of SV tooling used in this way are still powerful, as we definitely need to store, manage and track data changes, but it can feel as though you are paying a lot for what is effectively a data management tool.
Antipattern: When ‘virtual’ becomes more costly than the real thing
Messaging technology based on protocols such as AMQP (e.g. RabbitMQ, Apache ActiveMQ, Apache Qpid) has been popular for many years, and has recently seen a resurgence due to the emergence of patterns like event sourcing (ES) and ‘command query responsibility segregation’ (CQRS), and the use of messaging for asynchronous, fault-tolerant communication within distributed cloud-based microservice architectures.
Virtualising these systems can be difficult. The Specto team has encountered at least one project where the team attempted to virtualise a messaging system, but found that deploying (and managing data for) the virtualisation solution was almost as complex as deploying an actual messaging broker. Another drawback is that SV tooling configuration can require specialist knowledge, which means the QA team typically ends up owning all of the virtual assets, rather than the development team being self-sufficient and able to create and run their own tests.
Best practice: The emergence of lightweight/in-memory datastores
As we mentioned in our previous post, within the microservice projects we have worked on we have used in-memory data stores for integration and in-process component testing rather than SV tooling (we do, of course, use SV for simulating internal services still under development and external services that are unavailable!).
Most of these data stores provide JUnit rules for testing within Java projects, and these can typically be adapted for other languages’ test runners. Because the stores can be spun up by a test framework, development teams can take full ownership of the implementation, and can also own all of the necessary data - which conveniently brings us to the next topic…
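The projects we describe are Java/JUnit based, but the idea translates directly to other ecosystems. As a minimal sketch, here is the same pattern using Python’s standard-library `sqlite3` in-memory database (standing in for an embedded store such as H2 or HSQLDB behind a JUnit rule); the table schema and test are illustrative only:

```python
# Sketch: the test framework spins up an in-memory datastore, so the
# team owns both the schema and the test data. Python's stdlib sqlite3
# stands in here for an embedded store like H2 in a Java/JUnit project.

import sqlite3
import unittest

class UserRepositoryTest(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database is created for every test...
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (name TEXT, email TEXT)")
        self.db.execute("INSERT INTO users VALUES ('Jane', 'jane@example.com')")

    def tearDown(self):
        # ...and disappears with the connection: no shared state to clean up.
        self.db.close()

    def test_lookup_by_name(self):
        row = self.db.execute(
            "SELECT email FROM users WHERE name = ?", ("Jane",)).fetchone()
        self.assertEqual(row[0], "jane@example.com")

# Run the test case programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserRepositoryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the datastore lives and dies with each test, there is nothing to provision centrally and no recorded data to keep in sync.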
Best practice: The power of ‘internal resource’ API endpoints
Many of us at SpectoLabs were creating service-based projects before we had heard the term ‘microservices’, and accordingly we have long used a technique that Toby Clemson and the Thoughtworks team refer to as ‘internal resource’ API endpoints. In a nutshell, this means that every service created must also expose internal endpoints that allow service-specific test data to be managed (e.g. setup and teardown). On a related note, these internal endpoints can also be used to expose service information and metrics to internal systems like metric collectors and load balancers.
We often find the internal resource endpoints we create for each service are simple CRUD-like APIs that expose the aggregates and entities within the internal domain model, which allows us to POST synthetic test data before beginning a test run. For example, when testing a user service using a Behaviour-driven Development (BDD)-style user journey:
- Given a standard user named ‘Jane’ exists in the system
  - (Setup) Issue a POST to the service internal resource endpoint that creates a synthetic system ‘user’ account with name ‘Jane’
- When the email address is changed to “email@example.com”
  - Conduct the test by calling the public service API
- Then the account username remains ‘oldaddrprefix’ (i.e. the prefix of the original email address)
  - Make the assertion by querying the public service API
- (Teardown) Issue a DELETE to the service internal resource endpoint that deletes the system ‘user’ account with name ‘Jane’
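The journey above can be sketched in-process. The service class, method names and email values below are all hypothetical; in a real system the `internal_*` calls would be HTTP requests to the service’s internal resource endpoints, and the rest would be calls to its public API:

```python
# Sketch of the BDD journey: internal endpoints manage test data,
# the public API is what the test actually exercises.

class UserService:
    def __init__(self):
        self._users = {}  # name -> {"username": ..., "email": ...}

    # -- internal resource endpoints (test data management) --
    def internal_create_user(self, name, email):
        # Username is derived from the email prefix at creation time.
        self._users[name] = {"username": email.split("@")[0], "email": email}

    def internal_delete_user(self, name):
        del self._users[name]

    # -- public API --
    def change_email(self, name, new_email):
        self._users[name]["email"] = new_email  # username deliberately untouched

    def get_username(self, name):
        return self._users[name]["username"]

service = UserService()

# Given a standard user named 'Jane' exists in the system (Setup)
service.internal_create_user("Jane", "oldaddrprefix@example.org")

# When the email address is changed (via the public API)
service.change_email("Jane", "email@example.com")

# Then the account username remains the original address prefix
assert service.get_username("Jane") == "oldaddrprefix"

# Teardown via the internal resource endpoint
service.internal_delete_user("Jane")
```

The point of the pattern is the separation: setup and teardown never go through the public API, so the test exercises only the behaviour under test.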
Because the in-memory data stores run within the same context as the tests operating on them, the data clashes and cross-test collisions often seen when using a centralised SV solution are avoided.
Finally, it is worth mentioning that these data manipulation internal resource APIs must be hidden (via a load balancer or proxy) or disabled (via feature toggles) for a production deployment!
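A minimal sketch of the feature-toggle approach follows; the environment names and route paths are illustrative, not from any specific framework:

```python
# Sketch: internal resource routes are only registered outside
# production, so a production build simply does not expose them.

def build_routes(environment):
    # Public API routes are always registered.
    routes = {
        ("GET", "/users"): "get_user_handler",
        ("PUT", "/users/email"): "change_email_handler",
    }
    if environment != "production":
        # Test-data management endpoints exist only in non-production builds.
        routes[("POST", "/internal/users")] = "create_synthetic_user_handler"
        routes[("DELETE", "/internal/users")] = "delete_synthetic_user_handler"
    return routes

print(("POST", "/internal/users") in build_routes("development"))  # True
print(("POST", "/internal/users") in build_routes("production"))   # False
```

Hiding the routes at a load balancer or proxy achieves the same effect without a conditional build, at the cost of trusting the network layer rather than the application itself.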
Parting thoughts - when all you have is a virtual hammer…
Hopefully this article has explained some of our rationale for not currently simulating protocols other than HTTP within our open source Hoverfly service virtualisation (SV) tool. We have also attempted to share what we have learned from real-world microservice projects, including patterns and best practices.
There is no denying that using SV to virtualise proprietary data stores within legacy projects can be very valuable. However, due to the issues identified above - the fragility of captured request/response data and the configuration and maintenance effort the SV tooling requires - combined with the emergence of highly functional in-memory data store equivalents and techniques like internal resource API endpoints, we recommend that new microservice-based projects take a different approach. This avoids the problem of “When all you have is a service virtualisation hammer, every database can look like a nail…”
Service virtualisation is an extremely powerful technique within modern microservice development, but here at SpectoLabs we’re keen to see it used where the greatest benefit can be found - we believe this is in generating API simulations as part of development scaffolding and functional tests, for virtualising unavailable or external services during component and E2E tests, and injecting latency and faults during non-functional testing.
At SpectoLabs we are working on creating both open source and commercial tooling to help organisations develop, test and manage microservice-based applications. Please contact us to learn more or discuss your challenges.