Testing

This project uses pytest along with testcontainers. The latter is used to test the low-level client implementation against a recent version of factcast.

This is done to ensure compatibility, but it requires some additional setup on the part of the developer.

The project enforces a minimum coverage for contributions. The minimum coverage for new code is the same as the minimum coverage of the project as a whole. You can look this up in the pyproject.toml file; just look for cov-fail-under.
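
For orientation, the setting usually lives in the pytest options, along these lines (the threshold shown here is a placeholder, not the project's actual value):

    [tool.pytest.ini_options]
    # Placeholder threshold; look up the real value in pyproject.toml.
    addopts = "--cov --cov-fail-under=80"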

Testcontainers

To set this up you basically need Docker on your machine. You might also want to install docker-compose, but poetry will actually take care of that for you.

After you have installed Docker, all you need to do is run the tests; no further setup is required.
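
Under the hood, the container-backed tests start factcast (and its postgres) via testcontainers from a compose file. A minimal sketch of the mechanism follows; the compose file name, service name, and port are assumptions for illustration, not the project's actual values:

    from testcontainers.compose import DockerCompose

    # Minimal sketch, not the project's actual fixture code: start the stack
    # from a compose file and ask testcontainers where factcast is reachable.
    with DockerCompose(".", compose_file_name="docker-compose.yml") as compose:
        host = compose.get_service_host("factcast", 9090)
        port = compose.get_service_port("factcast", 9090)
        # ... point the client under test at host:port ...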

All tests that use containers are marked with containers. If you want to skip them locally, for example because your machine takes a long time to spin up the containers, you can filter them out of your test run with pytest's -m "not containers" option. Keep in mind that this might drop you below the minimum coverage needed to merge. In CI these tests always run, so the coverage measured there is correct, and that is what counts towards the merge.
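
Concretely, a container-backed test carries that marker; the test and fixture names below are made up for illustration:

    import pytest

    @pytest.mark.containers  # filtered out by: pytest -m "not containers"
    def test_publish_roundtrip(factcast_stack):  # hypothetical test and fixture
        ...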

Tip

If you run into startup issues with factcast on older machines, it might be that postgres is not yet up. To work around this, you can set CI_JOB_STAGE in your environment; this forces factcast to wait 30 seconds after the postgres container has started.
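
The check presumably keys on the variable merely being set, not on any particular value. As a sketch of the pattern, not the project's actual code:

    import os
    import time

    # Sketch of the presumed mechanism: any value of CI_JOB_STAGE triggers
    # a fixed grace period so postgres can finish starting up.
    if os.environ.get("CI_JOB_STAGE") is not None:
        time.sleep(30)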

The Issues with that Approach

It is obnoxiously difficult to get dind (Docker-in-Docker) to run together with Python's testcontainers implementation while simultaneously having everything run smoothly in the local development environment. To achieve both, some decisions were made:

  • Maintain two compose files. This decreases maintainability a bit, but it makes waiting for postgres possible without having to build and maintain a custom container with factcast as a base.

  • Keep some decision logic in the fixtures to determine whether the tests are running locally or in CI (see the sketch after this list).

  • Stick with the Python base images and install Docker at build time. A dedicated image would be better, but this approach only wastes a little compute time while reducing the maintenance burden for the underlying image.
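
A fixture with such decision logic might look roughly like the sketch below. CI_JOB_STAGE is one of GitLab CI's predefined variables, which is presumably why it doubles as the CI switch here; the compose file names and the fixture name are assumptions made for illustration:

    import os

    import pytest
    from testcontainers.compose import DockerCompose

    def running_in_ci() -> bool:
        # GitLab CI sets CI_JOB_STAGE for every job; locally it is absent
        # unless you set it yourself to force the postgres wait (see Tip above).
        return os.environ.get("CI_JOB_STAGE") is not None

    @pytest.fixture(scope="session")
    def factcast_stack():
        # Hypothetical file names: one compose file per environment, as
        # described in the first bullet point above.
        compose_file = "docker-compose.ci.yml" if running_in_ci() else "docker-compose.yml"
        with DockerCompose(".", compose_file_name=compose_file) as compose:
            yield compose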

These difficulties stem mostly from networking issues in GitLab CI and its dind implementation. For those not familiar with these issues: feel blessed. If you are now thinking "but hey, why not use the official Docker images and just do everything in there?": they are based on Alpine, and until we get Python wheels for Alpine, the build will take orders of magnitude longer. I tried.

For reference, and a potential future switch, see this great article and also PEP-656.

This ticket from GitLab might also help in the future, should they implement something sane here.