Different kinds of testing strategies with ASP.NET Core: Using Docker to write integration tests
Greetings once again, dear readers. It's been quite some time since my last entry, and I'm delighted to resume our discussion. In this ongoing series on ASP.NET Core testing strategies, today marks the unveiling of our third installment.
In our previous post, we ventured into the world of component tests, where we harnessed the power of TestDouble to replace HTTP calls and database interactions. If you happened to miss that piece, I highly recommend catching up, as it introduces a pivotal addition to your testing arsenal.
Now, let's turn our attention to the intriguing realm of integration tests. These tests serve as guardians, ensuring that your application interacts seamlessly with external components while upholding the integrity of your integration code. What sets this discussion apart is our incorporation of Docker, a tool that fortifies the robustness of the integration testing process.
But there's more in store for you. We'll also guide you through the seamless integration of these tests into your CI/CD pipeline, regardless of whether you prefer GitLab, GitHub Actions, or Azure Pipelines. So, stay with me as we embark on a comprehensive journey to integrate and test your ASP.NET Core applications with expertise and pragmatism.
Integrations with data stores and external components benefit from the fast feedback of integration tests. Tests in this style provide fast feedback when refactoring or extending the logic contained within integration modules. However they also have more than one reason to fail — if the logic in the integration module regresses or if the external component becomes unavailable or breaks its contract. - Toby Clemson
When to use integration tests
In the previous blog post, we showed how to use TestDoubles to test our components or microservices in isolation. We saw that you can simply replace your database with an in-memory provider, or your HTTP calls with a mock, in order to thoroughly acceptance test the behaviour. These component tests are low in build complexity and also execute relatively quickly. However, mocking an entire external application such as ElasticSearch with a TestDouble is clearly too much work, and these external applications generally come with the capability to run as containers. So, in our XUnit tests, we are going to write some C# code to start up Docker containers whenever we need one for an integration test. In some of my projects, we used the following applications for integration tests:
- ElasticSearch (or OpenSearch if you use AWS)
- AWS DynamoDb
- Microsoft SQL Server or any other equivalent database provider, such as Postgres, MongoDb, etc.
Our first integration test
So our requirements are simple. For each test we run, there are a couple of steps we need to perform.
- Pull container: First we need to pull the Docker image. This can be done manually, in C# code, or as part of a script in your CI/CD pipeline. In this blog I'll show you how to do it in code, but I would recommend the pipeline script, as this will help the execution speed of your builds and tests. Logically, once your test suite expands to hundreds or thousands of tests, you don't want multiple tests competing to pull the same image.
- Create container: Secondly, once we have pulled the image, we need to create the container with the required parameters before we can start it.
- Start container: Once created, we can start the container. Depending on the application you are running as a container, it might need some port binding configuration or environment variables.
- Run integration test: Then everything is prepared to run our XUnit test. We run our tests (now called integration tests), configuring additional settings and services for our ASP.NET Core application using the TestHost helpers we have seen in the previous blogs.
- Cleanup: Finally, we need to clean up the containers we used. (Or not: in one scenario we kept a container around to reproduce a bug we could not understand. By inspecting the logs of the external application, we were able to find the problem.)
To achieve all of this in code, Docker provides a .NET client as a NuGet package (Docker.DotNet) to interact with your Docker daemon programmatically. It would look something like this.
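A minimal sketch, assuming the Docker.DotNet package and a locally running Docker daemon; the Postgres image and its settings are illustrative stand-ins for whatever external application you need.

```csharp
using System;
using System.Collections.Generic;
using Docker.DotNet;
using Docker.DotNet.Models;

// Connect to the local Docker daemon (on Windows, use npipe://./pipe/docker_engine).
using var client = new DockerClientConfiguration(
    new Uri("unix:///var/run/docker.sock")).CreateClient();

// 1. Pull the image.
await client.Images.CreateImageAsync(
    new ImagesCreateParameters { FromImage = "postgres", Tag = "15-alpine" },
    null,
    new Progress<JSONMessage>());

// 2. Create the container with the required environment variables and port binding.
var container = await client.Containers.CreateContainerAsync(new CreateContainerParameters
{
    Image = "postgres:15-alpine",
    Env = new List<string> { "POSTGRES_PASSWORD=secret" },
    HostConfig = new HostConfig
    {
        PortBindings = new Dictionary<string, IList<PortBinding>>
        {
            ["5432/tcp"] = new List<PortBinding> { new() { HostPort = "5432" } }
        }
    }
});

// 3. Start the container.
await client.Containers.StartContainerAsync(container.ID, new ContainerStartParameters());

// 4. ...run the integration test against localhost:5432...

// 5. Clean up: stop and remove the container.
await client.Containers.StopContainerAsync(container.ID, new ContainerStopParameters());
await client.Containers.RemoveContainerAsync(container.ID, new ContainerRemoveParameters());
```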
But this looks like a lot of work, right? There is an easier way, but we'll get into that very soon; this code just demonstrates all the steps we need to perform. In the next paragraphs we'll deep dive into making it more reusable by utilizing XUnit's IClassFixture, and we'll look at some more advanced scenarios.
XUnit and Docker Containers
While the .NET client works great, it requires a lot of boilerplate code to write an integration test. Furthermore, in today's fast-paced software development landscape, efficiency and reproducibility are key. What we can do is extract some of this code into an XUnit class fixture. With Docker containers as XUnit class fixtures, you can effortlessly set up and tear down isolated environments for your tests, ensuring consistency across different test runs and environments. This approach not only simplifies the management of dependencies but also enhances the portability of your test suite, making it easier to share and collaborate with team members.
While developing this approach in our team, we found two significant challenges.
- Firstly, in a larger parallel test suite, tests that require the same dockerized application can conflict with each other, either by querying the same database or by trying to reserve the same host port.
- Secondly, some containers have extended startup times of 20-30 seconds before they can be invoked from C# code. A rudimentary solution such as Thread.Sleep won't work reliably. We need to implement container readiness checks, either custom or with Docker Compose health checks, ensuring that the container's services are operational before running the integration test.
To make all these problems a bit more manageable, we are going to use a different NuGet package called TestContainers. With it, we can ensure smooth testing while harnessing the capabilities of Docker.
Our revised integration test
Considering the above, your code could look something like this. We have made an abstract class called ContainerFixture, whose responsibility is to make sure our container is automatically started, stopped, and cleaned up. Then we can make multiple implementations that focus on just specifying the container. As you can see, we make a PostgresContainerFixture which starts a Postgres database for our orders API, the same API we have been using in the last two posts.
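A sketch of that setup, assuming the TestContainers NuGet package with its PostgreSql module; the image tag and database name are illustrative.

```csharp
using System.Threading.Tasks;
using DotNet.Testcontainers.Containers;
using Testcontainers.PostgreSql;
using Xunit;

// XUnit runs InitializeAsync before the first test in a class using the
// fixture, and DisposeAsync after the last one.
public abstract class ContainerFixture : IAsyncLifetime
{
    // Implementations only have to specify the container.
    protected abstract IContainer Container { get; }

    public Task InitializeAsync() => Container.StartAsync();

    // Disposing also removes the container, so nothing is left behind.
    public Task DisposeAsync() => Container.DisposeAsync().AsTask();
}

public sealed class PostgresContainerFixture : ContainerFixture
{
    private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
        .WithImage("postgres:15-alpine")
        .WithDatabase("orders")
        .Build();

    protected override IContainer Container => _postgres;

    // The connection string contains the randomly assigned host port.
    public string ConnectionString => _postgres.GetConnectionString();
}
```

Because the container only lives for the duration of the test class, parallel test classes each get their own isolated database.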
The second thing we need to do is update our integration test to configure either a URL or a connection string pointing to the started container. This is quite similar to how we did this in the component tests from our previous blog, when we were using the Entity Framework in-memory solution.
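A sketch of that wiring, assuming the orders API exposes a Program class for WebApplicationFactory and reads a hypothetical "ConnectionStrings:Orders" configuration key; adjust both names to your application.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class GetOrdersTests : IClassFixture<PostgresContainerFixture>
{
    private readonly HttpClient _client;

    public GetOrdersTests(PostgresContainerFixture fixture)
    {
        // Point the application at the containerized database instead of the real one.
        var factory = new WebApplicationFactory<Program>()
            .WithWebHostBuilder(builder => builder.UseSetting(
                "ConnectionStrings:Orders", fixture.ConnectionString));
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task Get_orders_returns_a_successful_response()
    {
        var response = await _client.GetAsync("/orders");
        response.EnsureSuccessStatusCode();
    }
}
```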
And thus we can run our tests and inspect what is happening in the background during an integration test. In the screenshot below you can see three containers running. One is the TestContainers "sidecar", which manages the containers during your tests. The two other containers are the Postgres databases used by our integration tests, which run in parallel. Here we can see and confirm that the Postgres port 5432 is mapped to a randomly assigned host port to prevent conflicts.
Waiting for ElasticSearch to be ready
Wonderful! Our tests are running, but there are more problems we need to address. As mentioned before, some containers take a while to start up (I'm looking at you, ElasticSearch 👀). In our case, we were doing even more complicated stuff: we had two applications which integrated with ElasticSearch. To make a long story short, one application wrote the data, and another one was responsible for reading and searching. While setting this up, we of course ran into the aforementioned conflicts. But we were able to write a big chunk of reusable code to load the data and prepare the integration test.
However, the main problem was waiting for the container to fully start. Since it was taking at least 20-30 seconds, we didn't want to start a new container for every test class. This would add a lot of overhead, in terms of compute resources but also execution time. One thing we did was to ping the ElasticSearch health check endpoint every 5 seconds, until we got back a green status. While rudimentary, it worked for us. Fortunately, TestContainers provides a number of prebuilt containers, and they offer a much easier way to validate whether ElasticSearch is ready to receive requests. Look at this list of prebuilt container builders from TestContainers:
Azure Cosmos DB, Azure SQL Edge, Azurite, ClickHouse, Couchbase, CouchDb, DynamoDB, ElasticSearch, EventStoreDb, InfluxDB, K3s, Kafka, Keycloak, LocalStack, MariaDB, MinIO, MongoDB, MySQL, Neo4j, Oracle, PostgreSQL, RabbitMQ, RavenDB, Redis, Redpanda, SQL Server, WebDriver
So plenty to choose from! If you want a bit more control over your container, I would recommend writing some code of your own, as shown below.
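For example, with TestContainers' generic builder you can express the readiness check as a wait strategy instead of a Thread.Sleep. A sketch, with an illustrative ElasticSearch image and the cluster health endpoint as the probe:

```csharp
using DotNet.Testcontainers.Builders;

// StartAsync only completes once the HTTP probe against the cluster
// health endpoint succeeds.
var elastic = new ContainerBuilder()
    .WithImage("elasticsearch:8.11.1")
    .WithEnvironment("discovery.type", "single-node")
    .WithEnvironment("xpack.security.enabled", "false")
    .WithPortBinding(9200, assignRandomHostPort: true)
    .WithWaitStrategy(Wait.ForUnixContainer()
        .UntilHttpRequestIsSucceeded(request => request
            .ForPort(9200)
            .ForPath("/_cluster/health")))
    .Build();

await elastic.StartAsync();
```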
Continuous Integration
Last but not least, we want to run all of this in our continuous integration/deployment tools in order to verify the behaviour of our applications before we deploy them. To use TestContainers in your CI/CD environment, you only require Docker installed on your preferred agent. A local installation of Docker is not mandatory; you can also use a remote Docker installation. Below you'll find a couple of examples for three different tools (or two, depending on how you count). TestContainers also prints some logs about what is happening.
GitHub
Microsoft-hosted agents come with Docker pre-installed, so there is no need for any additional configuration. It is important to note that Windows agents use the Docker Windows engine and cannot run Linux containers. If you are using Windows agents, ensure that the image you are using matches the agent's architecture and operating system version.
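A minimal workflow sketch (the action versions are illustrative):

```yaml
name: integration-tests
on: push
jobs:
  test:
    runs-on: ubuntu-latest # Docker is pre-installed on this image
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: 8.0.x
      - run: dotnet test
```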
Azure Pipelines
Both GitHub and Azure Pipelines use the same hosted build agents, so the same remarks apply.
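A minimal azure-pipelines.yml sketch (the task version is illustrative):

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest # ships with Docker

steps:
  - task: UseDotNet@2
    inputs:
      version: 8.x
  - script: dotnet test
    displayName: Run integration tests
```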
GitLab
And finally, to configure the Docker service in GitLab CI (Docker-in-Docker), you need to define the service in your .gitlab-ci.yml file and expose the Docker host address docker:2375 by setting the DOCKER_HOST environment variable. Below you'll find the code.
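A sketch of the relevant .gitlab-ci.yml parts (the SDK image tag is illustrative):

```yaml
integration-tests:
  image: mcr.microsoft.com/dotnet/sdk:8.0
  services:
    - docker:dind                      # the Docker-in-Docker service
  variables:
    DOCKER_HOST: tcp://docker:2375     # point TestContainers at the service
    DOCKER_TLS_CERTDIR: ""             # disable TLS so port 2375 is used
  script:
    - dotnet test
```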
Conclusion
In conclusion, our exploration of integration testing in ASP.NET Core has been enlightening. We've seen how Docker can enhance these tests by simplifying dependency management and improving portability.
We've also tackled common integration testing challenges, such as conflicts in parallel test suites and dealing with slow-starting containers, using the TestContainers NuGet package.
Additionally, we've discussed how to integrate these tests into your CI/CD pipeline, ensuring your applications are thoroughly tested before deployment, whether you're using GitHub, Azure Pipelines, or other tools.
In the world of software development, mastering integration testing with Docker and TestContainers is essential for efficient and reliable application development. Armed with these insights, may your tests run smoothly, and your applications thrive. Happy coding!