🚀 Executive Summary
TL;DR: The article addresses common `connection refused` errors in CI/CD pipelines when Dockerized API tests attempt to connect to a containerized database. It clarifies that `localhost` is isolated within each container and provides robust solutions like user-defined Docker networks or CI-native service containers for reliable inter-container communication.
🎯 Key Takeaways
- The `localhost` inside a Docker container is isolated from the CI runner’s host or other containers, causing `connection refused` errors if not properly addressed.
- Hardcoding `localhost` in application database connection strings is an anti-pattern; always use environment variables for connection details.
- User-defined Docker bridge networks are the robust solution for multi-container communication, enabling services to discover each other by their service names via built-in DNS.
- Implementing a `healthcheck` on the database service and `depends_on` with `condition: service_healthy` on dependent services prevents race conditions, ensuring the database is ready before tests begin.
A senior DevOps engineer breaks down how to reliably run API tests against a containerized database in your CI/CD pipeline, avoiding common networking pitfalls and flaky builds.
From My Desk to Yours: Solving the CI/CD, Docker, and API Testing Nightmare
I remember it like it was yesterday. It was a Tuesday, a junior engineer—let’s call him Alex—was on his third day trying to get a simple CI pipeline to pass. The Go API tests kept failing with a dreaded connection refused. He swore the code was fine; it ran perfectly on his laptop. He showed me his Dockerfile, his Go test suite, everything. On the surface, it looked solid. But the pipeline was a sea of red. Everyone was getting frustrated, and we were about to miss a sprint goal over what looked like a trivial task. The problem wasn’t his code, his tests, or even his Dockerfile. It was a fundamental misunderstanding of how containers talk to each other inside a CI runner, a ghost in the machine that haunts almost every developer when they first step into this world.
The “Why”: You’re Not on `localhost` Anymore
Let’s get this straight. The root of 90% of these problems is a simple networking misconception. When you run `docker-compose up` on your local machine, things often “just work” because Docker does some magic with port mapping to your actual machine’s `localhost`. You can hit the API in your browser, and it can talk to the database, everything feels local.
But a CI runner is a different beast. Inside that pipeline, you typically have:
- The CI runner itself (the host environment).
- A Docker container for your database (e.g., `postgres-db`).
- Another Docker container for your API, where the tests are run.
When your API container tries to connect to `localhost:5432`, it’s not looking for the `postgres-db` container. It’s looking for a Postgres server running inside its own container. Each container has its own isolated network stack, its own `localhost`. Of course, the connection is refused—nobody’s home.
Warning: Never, ever hardcode `localhost` in your application’s database connection string. Your future self, and your friendly DevOps team, will thank you. Always use environment variables for connection details.
Solution 1: The Quick (and Dirty) Fix – Host Networking
Sometimes you just need to get the pipeline green. I get it. The quickest way to solve this is to tear down the walls between your containers’ networks and have them all share the CI runner’s network stack.
In your `docker-compose.yml` file, you can set `network_mode: "host"`. This tells Docker, “Hey, don’t bother with network isolation for these containers. Just latch them directly onto the host’s network.” Now, when your API container looks for `localhost:5432`, it’s looking at the CI runner’s `localhost`, where the Postgres container is also listening.
```yaml
# docker-compose.ci.yml
version: '3.8'
services:
  postgres-db:
    image: postgres:14-alpine
    network_mode: "host" # The magic wand
    environment:
      - POSTGRES_USER=testuser
      - POSTGRES_PASSWORD=testpass
      - POSTGRES_DB=testdb
  api-tests:
    build: .
    network_mode: "host" # This one too
    depends_on:
      - postgres-db
    environment:
      - DATABASE_HOST=localhost # Now this works
      - DATABASE_PORT=5432
    command: ["go", "test", "./..."]
```
Why it’s “dirty”: This is a hack. It throws away a key benefit of Docker (network isolation), can lead to port conflicts on the runner, and doesn’t work on all Docker platforms (like Docker Desktop for Mac/Windows). Use it to unblock yourself, but plan to replace it.
Solution 2: The Right Way – User-Defined Networks
The proper, permanent fix is to create a virtual network for your containers to live in. Inside this private network, Docker provides a beautiful thing: built-in DNS. Each container can find the others simply by using their service name as a hostname.
Here, we define a network called `app-network`. Both the `postgres-db` service and the `api-tests` service are attached to it. Now, from inside the `api-tests` container, the hostname `postgres-db` resolves directly to the private IP address of the database container.
```yaml
# docker-compose.ci.yml
version: '3.8'
services:
  postgres-db:
    image: postgres:14-alpine
    hostname: postgres-db
    networks:
      - app-network
    environment:
      - POSTGRES_USER=testuser
      - POSTGRES_PASSWORD=testpass
      - POSTGRES_DB=testdb
    healthcheck: # Pro-tip!
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 5s
      retries: 5
  api-tests:
    build: .
    networks:
      - app-network
    depends_on:
      postgres-db:
        condition: service_healthy # Wait for the DB to be ready
    environment:
      # We connect to the service name, not localhost!
      - DATABASE_HOST=postgres-db
      - DATABASE_PORT=5432
      - DATABASE_USER=testuser
      - DATABASE_PASSWORD=testpass
      - DATABASE_NAME=testdb
    command: ["go", "test", "./..."]

networks:
  app-network:
    driver: bridge
```
This is the robust, scalable, and correct way to manage multi-container setups. It mimics how you’d deploy in production (e.g., in Kubernetes or ECS) where services discover each other by name.
Pro Tip: Notice the `healthcheck` and the `depends_on` condition. This prevents a classic race condition where your tests start before the database is actually ready to accept connections. It makes your pipeline way less flaky.
Solution 3: The “CI-Native” Approach – Service Containers
If you’re using a modern CI/CD platform like GitHub Actions or GitLab CI, you might not even need Docker Compose for your tests. These platforms have a first-class concept of “service containers”.
You declare your primary job container (where your code runs) and then list any dependent services, like a database. The CI platform automatically handles creating a network and attaching them. It then provides a predictable hostname for you to use.
Here’s what it looks like in a GitHub Actions workflow:
```yaml
# .github/workflows/ci.yml
name: CI Pipeline
on: [push]
jobs:
  test-api:
    runs-on: ubuntu-latest
    services:
      # This creates a postgres container and attaches it
      postgres:
        image: postgres:14-alpine
        env:
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
          POSTGRES_DB: testdb
        ports:
          # Required here: the job's steps run directly on the runner,
          # so the service's port must be mapped to the runner's localhost.
          - 5432:5432
        options: >-
          --health-cmd="pg_isready"
          --health-interval=5s
          --health-timeout=5s
          --health-retries=5
    steps:
      - uses: actions/checkout@v3
      - name: Run API Tests
        run: go test ./...
        env:
          # GitHub Actions makes the service available via 'localhost'
          # because it maps the service's port to the runner's localhost.
          # Other systems (like GitLab) use the service name 'postgres'.
          # Always check your provider's documentation!
          DATABASE_HOST: localhost
          DATABASE_PORT: 5432 # The port we mapped above
          DATABASE_USER: testuser
          DATABASE_PASSWORD: testpass
          DATABASE_NAME: testdb
```
This approach is often the cleanest because you’re using the patterns intended by your CI provider. It keeps your CI configuration self-contained and removes the dependency on a `docker-compose.yml` file just for testing.
At the end of the day, Alex and I switched his pipeline to use a user-defined network (Solution 2). The sea of red turned to a beautiful green. He learned a valuable lesson about container networking, and I got a decent cup of coffee. Don’t let the ghost in the machine win. Understand the network, and you’ll own your pipeline.
🤖 Frequently Asked Questions
❓ Why do Dockerized API tests fail with ‘connection refused’ when connecting to a database in a CI/CD pipeline?
This typically occurs because each Docker container has its own isolated `localhost`. When an API container tries to connect to `localhost:5432`, it’s looking for a database *inside its own container*, not the separate database container.
❓ How do user-defined Docker networks compare to host networking for Dockerized CI/CD tests?
User-defined networks are the recommended approach, offering network isolation and built-in DNS resolution by service name, mimicking production environments. Host networking is a quick, dirty fix that sacrifices isolation, can lead to port conflicts, and is not universally supported across Docker platforms.
❓ What is a common implementation pitfall when running API tests against a containerized database in CI/CD, and how can it be avoided?
A common pitfall is a race condition where API tests start before the database container is fully initialized and ready to accept connections. This can be avoided by implementing a Docker Compose `healthcheck` on the database service and using `depends_on` with `condition: service_healthy` for the API test service.