SonarQube Docker Compose: Production-Ready Configuration
Production-ready SonarQube Docker Compose setup with PostgreSQL, HTTPS, backups, resource limits, and multi-instance deployment.
Running SonarQube in production requires more than pulling an image and starting a container. A production-ready SonarQube Docker Compose configuration needs a properly configured PostgreSQL database, persistent volume mounts, resource limits, secure environment variables, HTTPS termination, automated backups, and a clear upgrade path. This guide provides a complete, battle-tested docker-compose.yml that you can deploy with confidence.
Whether you are setting up SonarQube for a small team or managing multiple instances across environments, this guide covers every aspect of the sonarqube docker compose setup process - from the initial docker-compose.yml file through reverse proxy configuration, backup strategies, and troubleshooting common production issues.
If you are looking for a broader overview of SonarQube installation options beyond Docker Compose, see our guide on how to setup SonarQube.
Prerequisites
Before deploying SonarQube with Docker Compose in a production environment, ensure your host machine meets the following requirements:
- Docker Engine 20.10 or later and Docker Compose v2 (the `docker compose` plugin, not the legacy `docker-compose` binary)
- At least 4 GB of RAM available to Docker - SonarQube's embedded Elasticsearch is memory-intensive and will crash without sufficient allocation
- SSD storage - spinning disks cause severe performance degradation for Elasticsearch indexing operations
- Linux host recommended - while Docker Desktop on macOS and Windows works for development, production deployments should run on Linux for kernel parameter control and native performance
- vm.max_map_count set to 524288 - required by Elasticsearch; set it with `sysctl -w vm.max_map_count=524288` on the host
To make the kernel parameter persistent across reboots, add the following to /etc/sysctl.conf:
vm.max_map_count=524288
fs.file-max=131072
Then apply with sysctl -p.
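To confirm both kernel settings before the first `docker compose up`, you can run a small preflight check. This is a sketch that reads the values from `/proc/sys` directly (so it works even without the `sysctl` binary); the thresholds match the settings above.

```shell
#!/bin/sh
# Preflight check: verify kernel limits before starting SonarQube.

meets_min() {
  # usage: meets_min <current> <required> -> exit 0 if current >= required
  [ "$1" -ge "$2" ]
}

check() {
  file=$1; required=$2
  current=$(cat "$file" 2>/dev/null || echo 0)
  if meets_min "$current" "$required"; then
    echo "OK   $file = $current (>= $required)"
  else
    echo "FAIL $file = $current (need >= $required)"
  fi
}

check /proc/sys/vm/max_map_count 524288
check /proc/sys/fs/file-max     131072
```

Run it on the Docker host; any `FAIL` line means Elasticsearch will refuse to start or run degraded.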
Production Docker Compose Configuration
The following sonarqube docker compose example provides a complete, production-ready configuration with SonarQube and PostgreSQL. This is the foundation that the rest of the guide builds upon.
services:
  sonarqube:
    image: sonarqube:2025.1-community
    container_name: sonarqube
    depends_on:
      db:
        condition: service_healthy
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
      SONAR_JDBC_USERNAME: ${SONAR_DB_USER:-sonar}
      SONAR_JDBC_PASSWORD: ${SONAR_DB_PASSWORD:?Database password is required}
      SONAR_WEB_JAVAOPTS: >-
        -Xmx512m -Xms128m
        -XX:+HeapDumpOnOutOfMemoryError
      SONAR_CE_JAVAOPTS: >-
        -Xmx1024m -Xms128m
        -XX:+HeapDumpOnOutOfMemoryError
      SONAR_SEARCH_JAVAOPTS: >-
        -Xmx512m -Xms512m
        -XX:MaxDirectMemorySize=256m
        -XX:+HeapDumpOnOutOfMemoryError
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
      - sonarqube_temp:/opt/sonarqube/temp
    ports:
      - "127.0.0.1:9000:9000"
    deploy:
      resources:
        limits:
          memory: 4G
          cpus: "2.0"
        reservations:
          memory: 2G
          cpus: "1.0"
    healthcheck:
      # /api/system/status is unauthenticated; /api/system/health requires admin credentials
      test: ["CMD-SHELL", "curl -sf http://localhost:9000/api/system/status | grep -q '\"status\":\"UP\"' || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 120s
    networks:
      - sonarnet
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 131072
        hard: 131072
      nproc:
        soft: 8192
        hard: 8192

  db:
    image: postgres:16-alpine
    container_name: sonarqube-db
    environment:
      POSTGRES_USER: ${SONAR_DB_USER:-sonar}
      POSTGRES_PASSWORD: ${SONAR_DB_PASSWORD:?Database password is required}
      POSTGRES_DB: sonarqube
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${SONAR_DB_USER:-sonar} -d sonarqube"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "1.0"
        reservations:
          memory: 256M
    networks:
      - sonarnet
    restart: unless-stopped

volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
  sonarqube_temp:
  postgresql_data:

networks:
  sonarnet:
    driver: bridge
Create a .env file alongside your docker-compose.yml to store sensitive values:
SONAR_DB_USER=sonar
SONAR_DB_PASSWORD=your_strong_password_here
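To avoid weak or copy-pasted passwords, you can generate the `.env` file with a random credential and restrictive permissions. A minimal sketch - the password here is 32 hex characters derived from `/dev/urandom`; substitute `openssl rand -hex 16` or your password manager if you prefer:

```shell
# Generate a strong random password and write a locked-down .env file.
DB_PASS=$(head -c 64 /dev/urandom | sha256sum | cut -c1-32)

cat > .env <<EOF
SONAR_DB_USER=sonar
SONAR_DB_PASSWORD=$DB_PASS
EOF

# .env holds credentials - make sure only the owner can read it
chmod 600 .env
```

Also add `.env` to your `.gitignore` so the credentials never land in version control.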
This configuration differs from a basic development setup in several important ways. The port binding uses 127.0.0.1:9000:9000 instead of 0.0.0.0:9000:9000 to prevent direct external access - all external traffic should route through a reverse proxy. Resource limits prevent SonarQube from consuming all available host memory. Health checks ensure that dependent services are fully ready before SonarQube attempts to connect, and the start_period gives SonarQube enough time to perform database migrations during upgrades.
Understanding the Environment Variables
The sonarqube docker compose postgresql connection relies on three core environment variables. Getting these right is essential for a stable deployment.
Database connection variables control how SonarQube connects to PostgreSQL:
- `SONAR_JDBC_URL` - the JDBC connection string pointing to your PostgreSQL container. The hostname `db` matches the service name in the compose file.
- `SONAR_JDBC_USERNAME` and `SONAR_JDBC_PASSWORD` - credentials that must match the `POSTGRES_USER` and `POSTGRES_PASSWORD` values on the database service.
JVM tuning variables control memory allocation for SonarQube’s three internal processes:
- `SONAR_WEB_JAVAOPTS` - controls the web server heap. 512 MB is sufficient for most deployments. The web server handles the UI and API requests.
- `SONAR_CE_JAVAOPTS` - controls the Compute Engine heap. This process runs code analysis and is the most memory-intensive. Set it higher for large projects - 1024 MB handles projects up to 500K lines of code comfortably.
- `SONAR_SEARCH_JAVAOPTS` - controls the Elasticsearch heap. 512 MB is the minimum for production. Increase it for instances with more than 50 projects.
Never set JVM heap sizes larger than half the container memory limit. The remaining memory is needed for Elasticsearch off-heap buffers, thread stacks, and the operating system.
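That rule is easy to sanity-check with quick arithmetic. The numbers below are the ones used in the compose file above (a 4 GB limit against 512 MB web, 1024 MB CE, and 512 MB search heaps):

```shell
# Sanity-check JVM heap allocation against the container memory limit.
# Rule of thumb: total JVM heap should not exceed half the limit.

heap_ok() {
  # usage: heap_ok <limit_mb> <web_mb> <ce_mb> <search_mb>
  limit_mb=$1
  total=$(( $2 + $3 + $4 ))
  [ "$total" -le $(( limit_mb / 2 )) ]
}

# Values from the compose file: 4 GB limit, 512m web, 1024m CE, 512m search.
if heap_ok 4096 512 1024 512; then
  echo "heap allocation fits: 2048 MB of 4096 MB limit"
fi
```

If you raise any `-Xmx` value, re-run the check against the new container limit before deploying.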
Volume Mounts and Data Persistence
Proper volume configuration is the difference between a production deployment and a disposable development instance. The sonarqube docker compose production setup uses four named volumes for SonarQube and one for PostgreSQL.
SonarQube volumes:
- `sonarqube_data` - stores analysis results, project metadata, and Elasticsearch indices. This is the largest volume and grows with the number of projects and history retention.
- `sonarqube_extensions` - contains installed plugins. Survives container restarts so you do not need to reinstall plugins after updates.
- `sonarqube_logs` - SonarQube application logs. Useful for troubleshooting but can be safely cleared if disk space is tight.
- `sonarqube_temp` - temporary files used during analysis. Cleared on restart but should be a named volume to avoid writing to the container's writable layer.
PostgreSQL volume:
- `postgresql_data` - the database files. This is the most critical volume. Losing this volume means losing all project configuration, quality profiles, quality gates, and historical analysis data.
Named volumes are managed by Docker and stored in /var/lib/docker/volumes/ on Linux. For production deployments on cloud infrastructure, consider backing these volumes with network-attached storage that supports snapshots, such as AWS EBS or GCP Persistent Disks.
To inspect volume disk usage:
docker system df -v | grep sonarqube
Resource Limits and Performance Tuning
Setting explicit resource limits prevents SonarQube from monopolizing your host’s CPU and memory. The deploy.resources section in the compose file defines both hard limits and soft reservations.
For a team of 20-50 developers with 10-30 active projects, the following allocation works well:
| Component | Memory Limit | CPU Limit | Notes |
|---|---|---|---|
| SonarQube | 4 GB | 2.0 cores | Handles web, CE, and Elasticsearch |
| PostgreSQL | 1 GB | 1.0 core | Sufficient for most workloads |
| Total | 5 GB | 3.0 cores | Minimum host requirement |
For larger teams analyzing 50 or more projects, increase SonarQube memory to 8 GB and PostgreSQL to 2 GB. Adjust the JVM options proportionally - set SONAR_CE_JAVAOPTS heap to 2048m and SONAR_SEARCH_JAVAOPTS to 1024m.
The ulimits section sets file descriptor and process limits inside the container. Elasticsearch requires a high nofile limit to manage its index segments efficiently. Without these settings, you may see too many open files errors under heavy analysis load.
For more details on SonarQube’s resource requirements across different editions, see our SonarQube review.
Reverse Proxy Setup with Nginx
Exposing SonarQube directly to the internet is not recommended. A reverse proxy provides TLS termination, request buffering, and an additional security layer. Add an Nginx service to your Docker Compose stack:
nginx:
  image: nginx:alpine
  container_name: sonarqube-proxy
  ports:
    - "443:443"
    - "80:80"
  volumes:
    - ./nginx/conf.d:/etc/nginx/conf.d:ro
    - ./nginx/ssl:/etc/nginx/ssl:ro
  depends_on:
    sonarqube:
      condition: service_healthy
  networks:
    - sonarnet
  restart: unless-stopped
Create the Nginx configuration file at ./nginx/conf.d/sonarqube.conf:
upstream sonarqube_backend {
    server sonarqube:9000;
}

server {
    listen 80;
    server_name sonar.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name sonar.example.com;

    ssl_certificate /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    client_max_body_size 50m;

    location / {
        proxy_pass http://sonarqube_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
    }
}
After deploying with the reverse proxy, configure SonarQube to use the external URL. Navigate to Administration, then General Settings, then Server Base URL, and set it to https://sonar.example.com. This ensures that webhook callbacks, email notifications, and badge URLs use the correct address.
Configuring HTTPS with Let’s Encrypt
For automated certificate management, replace the static SSL certificate approach with Certbot. You can either run Certbot on the host and mount the certificates into the Nginx container, or use a Certbot sidecar container.
The host-based approach is simpler and more reliable:
# Install Certbot on the host
sudo apt install certbot
# Obtain certificate (stop Nginx first if port 80 is in use)
sudo certbot certonly --standalone -d sonar.example.com
# Certificates are stored at:
# /etc/letsencrypt/live/sonar.example.com/fullchain.pem
# /etc/letsencrypt/live/sonar.example.com/privkey.pem
Update the Nginx volume mounts to use the Let's Encrypt directory. Because the files under live/ are symlinks into archive/, mount the whole /etc/letsencrypt tree rather than the live directory alone - otherwise the symlinks break inside the container:

volumes:
  - ./nginx/conf.d:/etc/nginx/conf.d:ro
  - /etc/letsencrypt:/etc/letsencrypt:ro

Then point the certificate directives in sonarqube.conf at the mounted paths: `ssl_certificate /etc/letsencrypt/live/sonar.example.com/fullchain.pem;` and `ssl_certificate_key /etc/letsencrypt/live/sonar.example.com/privkey.pem;`.
Set up automatic renewal with a cron job that also reloads Nginx. Because `docker compose exec` resolves the project from the working directory, change into your compose directory first (adjust the path to your deployment):
0 3 * * * certbot renew --quiet && cd /path/to/compose && docker compose exec nginx nginx -s reload
For details on integrating SonarQube with CI/CD pipelines after setting up HTTPS, see our guides on SonarQube Jenkins and SonarQube GitHub Actions.
Backup Strategies
A production SonarQube deployment needs both database and volume backups. The database is the most critical component - it stores all project configurations, quality profiles, quality gates, user accounts, and analysis history.
Database Backup
Create a backup script at ./scripts/backup.sh:
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/backups/sonarqube"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"
# Dump the PostgreSQL database
docker compose exec -T db pg_dump \
-U "${SONAR_DB_USER:-sonar}" \
-Fc \
--no-owner \
sonarqube > "$BACKUP_DIR/sonarqube_db_$TIMESTAMP.dump"
# Back up SonarQube extensions (plugins)
docker run --rm \
-v sonarqube_extensions:/source:ro \
-v "$BACKUP_DIR":/backup \
alpine tar czf "/backup/sonarqube_extensions_$TIMESTAMP.tar.gz" -C /source .
# Remove backups older than retention period
find "$BACKUP_DIR" -type f -mtime +$RETENTION_DAYS -delete
echo "Backup completed: $TIMESTAMP"
Schedule this script with cron to run daily during low-usage hours:
0 2 * * * /path/to/scripts/backup.sh >> /var/log/sonarqube-backup.log 2>&1
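Cron jobs can fail silently, so it is worth verifying that dumps are actually being produced. A minimal freshness check, assuming the same `BACKUP_DIR` and file naming scheme as `backup.sh` above:

```shell
#!/bin/sh
# Check that the newest database dump is non-empty and recent enough.
# The directory and naming scheme must match backup.sh.

check_latest_backup() {
  dir=$1; max_age_hours=$2
  latest=$(find "$dir" -name 'sonarqube_db_*.dump' -type f | sort | tail -n 1)
  [ -n "$latest" ] || { echo "FAIL: no dumps found in $dir"; return 1; }
  [ -s "$latest" ] || { echo "FAIL: $latest is empty"; return 1; }
  # -mmin with a negative value matches files modified within the window
  recent=$(find "$latest" -mmin "-$(( max_age_hours * 60 ))")
  [ -n "$recent" ] || { echo "FAIL: $latest is older than ${max_age_hours}h"; return 1; }
  echo "OK: $latest"
}

# 26 hours gives a daily cron job a little slack
[ -d /backups/sonarqube ] && check_latest_backup /backups/sonarqube 26 || true
```

Wire the `FAIL` output into your monitoring or a notification hook so a broken backup job gets noticed before you need a restore.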
Restore Procedure
To restore from a backup, stop SonarQube, restore the database, restore volumes, and restart:
# Stop SonarQube but keep the database running
docker compose stop sonarqube
# Restore the database
docker compose exec -T db pg_restore \
-U sonar \
-d sonarqube \
--clean \
--if-exists \
< /backups/sonarqube/sonarqube_db_20260313_020000.dump
# Restore extensions
docker run --rm \
-v sonarqube_extensions:/target \
-v /backups/sonarqube:/backup:ro \
alpine sh -c "rm -rf /target/* && tar xzf /backup/sonarqube_extensions_20260313_020000.tar.gz -C /target"
# Restart everything
docker compose up -d
Test your backup and restore procedure regularly. An untested backup is not a backup.
Upgrade Process
Upgrading SonarQube in a Docker Compose deployment follows a predictable pattern. The key rules are to always back up first, never skip major versions, and verify the upgrade path against the SonarQube documentation.
Step-by-step upgrade procedure
1. Check the upgrade path - SonarQube requires sequential upgrades through each LTS release. You cannot jump directly from version 9.x to 2025.x.

2. Back up everything - run your backup script before making any changes.

3. Update the image tag in docker-compose.yml:

   image: sonarqube:2025.1-community  # was sonarqube:10.8-community

4. Pull the new image and restart:

   docker compose down
   docker compose pull
   docker compose up -d

5. Monitor the migration:

   docker compose logs -f sonarqube

   SonarQube automatically detects that the database schema needs updating and runs migrations. This can take several minutes for major version upgrades. Do not interrupt this process.

6. Verify the upgrade by navigating to the web interface and checking Administration, then System, where the version number should reflect the update.
If the upgrade fails and you need to roll back, restore from your backup, revert the image tag in docker-compose.yml, and restart.
For a comprehensive look at running SonarQube in Docker beyond the compose context, see our SonarQube Docker guide.
Multi-Instance Setup
Running separate SonarQube instances for different environments - production, staging, and development - is a common pattern in enterprise deployments. Each instance needs its own database and volume set.
Create a docker-compose.staging.yml override file:
services:
  sonarqube:
    container_name: sonarqube-staging
    ports:
      - "127.0.0.1:9001:9000"
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube_staging
    volumes:
      - sonarqube_staging_data:/opt/sonarqube/data
      - sonarqube_staging_extensions:/opt/sonarqube/extensions
      - sonarqube_staging_logs:/opt/sonarqube/logs
      - sonarqube_staging_temp:/opt/sonarqube/temp

  db:
    container_name: sonarqube-db-staging
    environment:
      POSTGRES_DB: sonarqube_staging
    volumes:
      - postgresql_staging_data:/var/lib/postgresql/data

volumes:
  sonarqube_staging_data:
  sonarqube_staging_extensions:
  sonarqube_staging_logs:
  sonarqube_staging_temp:
  postgresql_staging_data:
Run the staging instance using both compose files, and give it its own project name with `-p` - without it, both environments share the default project name and `docker compose up` would replace the production containers:
docker compose -p sonarqube-staging -f docker-compose.yml -f docker-compose.staging.yml --env-file .env.staging up -d
For the reverse proxy, add a separate server block in Nginx for the staging subdomain that routes to sonarqube-staging:9000.
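A sketch of that staging server block is below. The `sonar-staging.example.com` hostname and certificate paths are placeholders, and it assumes the proxy and the staging instance share a Docker network (for example, an external network attached to both projects) - otherwise point the upstream at the host port 9001 instead:

```nginx
# ./nginx/conf.d/sonarqube-staging.conf - hypothetical staging vhost
upstream sonarqube_staging_backend {
    server sonarqube-staging:9000;
}

server {
    listen 443 ssl http2;
    server_name sonar-staging.example.com;

    ssl_certificate /etc/nginx/ssl/staging-fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/staging-privkey.pem;

    location / {
        proxy_pass http://sonarqube_staging_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```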
When managing multiple instances, ensure your host has sufficient resources. Each SonarQube instance requires its own 4-5 GB of memory. A host running production and staging instances simultaneously needs at least 12 GB of RAM dedicated to Docker.
Cloud Alternative: CodeAnt AI
If managing Docker Compose configurations, database backups, reverse proxies, and kernel parameters sounds like more infrastructure overhead than your team wants to handle, a cloud-based code analysis platform eliminates this operational burden entirely.
CodeAnt AI provides automated code review and static analysis as a managed service at $24-40/user/month. It connects directly to your GitHub, GitLab, or Bitbucket repositories and runs analysis on every pull request without requiring any self-hosted infrastructure. For teams that want code quality enforcement without the DevOps investment of running SonarQube in Docker Compose, CodeAnt AI is worth evaluating.
That said, self-hosted SonarQube gives you full control over your data, supports air-gapped environments, and has no per-user licensing costs for the Community Edition. The right choice depends on your team’s priorities around data sovereignty, budget, and willingness to manage infrastructure. For a broader comparison, see our list of SonarQube alternatives.
Troubleshooting Common Issues
SonarQube container exits immediately
Check the logs with docker compose logs sonarqube. The most common causes are:
- vm.max_map_count too low - Elasticsearch fails its bootstrap check. Set `vm.max_map_count=524288` on the host.
- Insufficient memory - the container is killed by the OOM killer. Increase the memory limit in the compose file and ensure your Docker host has enough RAM.
- Database not ready - if the health check on the PostgreSQL service is not configured correctly, SonarQube may try to connect before the database is accepting connections. The `depends_on` with `condition: service_healthy` in the compose file above prevents this.
Database connection refused
Verify that the PostgreSQL container is running and healthy:
docker compose ps db
docker compose exec db pg_isready -U sonar
Confirm the JDBC URL hostname matches your database service name. If you renamed the service from db to something else, update SONAR_JDBC_URL accordingly.
Analysis is slow or times out
Slow analysis performance usually indicates resource starvation. Check container resource usage:
docker stats sonarqube sonarqube-db
If the SonarQube container is consistently hitting its memory limit, increase both the container memory limit and the JVM heap sizes. If CPU usage is maxed out, increase the CPU limit or reduce concurrent analysis by configuring sonar.ce.workerCount=1 in the SonarQube administration settings.
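That spot check can be scripted using the `--format` flag of `docker stats`. The following sketch flags any container above a memory threshold - useful as a cron job or a quick diagnostic:

```shell
# Flag containers that are close to their memory limit. Parses the output of:
#   docker stats --no-stream --format '{{.Name}} {{.MemPerc}}'

flag_high_mem() {
  threshold=$1   # percentage, e.g. 80
  while read -r name perc; do
    value=${perc%\%}    # strip the trailing % sign
    value=${value%.*}   # drop the fractional part
    if [ "${value:-0}" -ge "$threshold" ]; then
      echo "WARN $name is at $perc of its memory limit"
    fi
  done
}

if command -v docker >/dev/null 2>&1; then
  docker stats --no-stream --format '{{.Name}} {{.MemPerc}}' | flag_high_mem 80 || true
fi
```

A container that sits persistently above 80% of its limit is a candidate for a larger memory limit and proportionally larger JVM heaps.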
Plugins disappear after restart
This happens when the sonarqube_extensions volume is not properly mounted. Verify the volume exists and is correctly mapped:
docker volume inspect sonarqube_extensions
If you are using bind mounts instead of named volumes, check file permissions. The SonarQube container runs as UID 1000 by default, and the mounted directory must be writable by that user.
Port conflict on 9000
If another service is already using port 9000 on the host, change the host port mapping in the compose file:
ports:
  - "127.0.0.1:9100:9000"
Update your reverse proxy configuration to point to the new port.
Security Hardening
A production SonarQube Docker Compose deployment should follow these security practices:
- Never use default credentials - change the admin password immediately after first login. Better yet, configure SAML or LDAP authentication.
- Use Docker secrets or an .env file for database passwords. Never hardcode credentials in the docker-compose.yml file.
- Bind SonarQube's port to localhost (`127.0.0.1:9000:9000`) so it is only accessible through the reverse proxy.
- Enable force authentication - in SonarQube Administration, go to Security and enable "Force user authentication" to prevent anonymous access.
- Keep images updated - regularly pull the latest patch versions of both SonarQube and PostgreSQL to get security fixes.
- Run SonarQube on an isolated Docker network - the `sonarnet` bridge network in the compose file ensures SonarQube and PostgreSQL can communicate without exposing the database to other containers or the host network.
- Configure Content Security Policy headers in your Nginx reverse proxy to prevent XSS and clickjacking attacks.
Monitoring and Health Checks
SonarQube exposes two system endpoints useful for monitoring. /api/system/status is unauthenticated and reports UP once the server is operational, which makes it suitable for container health checks. /api/system/health provides component-level detail but requires administrator credentials; it returns one of three statuses:
- GREEN - all components are operational
- YELLOW - at least one component is in a degraded state
- RED - at least one component is non-functional
For proactive monitoring, set up external health check monitoring that alerts your team when the endpoint returns anything other than GREEN. Tools like Uptime Kuma, Healthchecks.io, or your existing monitoring stack can poll this endpoint.
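A minimal external poller can use the unauthenticated /api/system/status endpoint, which returns a small JSON document with a `status` field (`UP` when the server is operational). A sketch that extracts the field without needing `jq`:

```shell
# Minimal poller for SonarQube's unauthenticated status endpoint.

parse_status() {
  # pull the value of the "status" field out of a JSON string
  sed -n 's/.*"status":"\([A-Z_]*\)".*/\1/p'
}

status=$(curl -sf http://localhost:9000/api/system/status | parse_status || true)
case "$status" in
  UP) echo "sonarqube is up" ;;
  *)  echo "sonarqube not ready: ${status:-no response}" ;;
esac
```

Swap the `echo` lines for your alerting hook (Slack webhook, pager, etc.) to get notified when the instance is down or stuck in a migration state.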
To expose SonarQube metrics to Prometheus, install the Prometheus Exporter plugin through the SonarQube marketplace. Add a Prometheus and Grafana service to your compose stack for a complete monitoring solution. This gives you dashboards for analysis queue depth, response times, JVM metrics, and database connection pool utilization.
Summary
A production-ready sonarqube docker compose setup requires attention to several areas that a basic development configuration ignores - proper PostgreSQL configuration, resource limits, persistent volumes, HTTPS termination through a reverse proxy, automated backups, and a tested upgrade procedure. The configuration provided in this guide addresses all of these concerns and scales comfortably for teams of up to 200 developers.
For teams just getting started with SonarQube, this Docker Compose approach offers the best balance of simplicity and production readiness. As your organization grows and requires high availability or multi-node deployment, you can migrate to Kubernetes using the official SonarQube Helm chart without losing your data - simply export the PostgreSQL database and import it into your Kubernetes-hosted database.
To continue building out your SonarQube deployment, explore our guides on integrating with SonarQube Jenkins and SonarQube GitHub Actions for complete CI/CD pipeline coverage.
Frequently Asked Questions
What is the best database for SonarQube Docker Compose?
PostgreSQL is the recommended and best-supported database for SonarQube in a Docker Compose setup. SonarSource officially recommends PostgreSQL versions 13 through 16. While SonarQube also supports Microsoft SQL Server and Oracle, PostgreSQL offers the best performance characteristics for SonarQube workloads and is the easiest to run as a container alongside SonarQube. The embedded H2 database that ships with SonarQube should never be used in production because it does not support upgrades and is not designed for concurrent access.
How much memory does SonarQube need in Docker?
SonarQube Community Edition requires a minimum of 2 GB of RAM, but 4 GB is recommended for production use. Developer and Enterprise editions need at least 4 GB, with 8 GB recommended for large codebases. The embedded Elasticsearch instance is the primary memory consumer. In your Docker Compose file, set memory limits using the deploy.resources.limits.memory directive and configure the SONAR_CE_JAVAOPTS and SONAR_WEB_JAVAOPTS environment variables to control JVM heap sizes. A general rule is to allocate half of the container memory to JVM heap and leave the rest for Elasticsearch and OS overhead.
How do I persist SonarQube data with Docker Compose volumes?
Define named volumes in your docker-compose.yml for four critical paths - /opt/sonarqube/data for analysis data, /opt/sonarqube/extensions for installed plugins, /opt/sonarqube/logs for log files, and /var/lib/postgresql/data for the PostgreSQL database. Named volumes survive container restarts and recreation. Avoid using bind mounts for SonarQube data directories on Linux because SonarQube runs as a non-root user inside the container and file permission mismatches can cause startup failures.
Why does SonarQube fail to start with max virtual memory areas error?
SonarQube uses Elasticsearch internally, which requires the Linux kernel parameter vm.max_map_count to be set to at least 524288. On the Docker host, run sysctl -w vm.max_map_count=524288 to set it temporarily, or add vm.max_map_count=524288 to /etc/sysctl.conf for a permanent fix. On Docker Desktop for Mac and Windows, this setting is handled automatically. Without this setting, Elasticsearch crashes on startup and SonarQube logs will show a bootstrap check failure.
Can I run SonarQube Docker Compose with HTTPS?
Yes. The recommended approach is to place an Nginx or Traefik reverse proxy in front of SonarQube. Add an Nginx service to your Docker Compose file that terminates TLS using your SSL certificates or Let's Encrypt, and proxy requests to the SonarQube container on port 9000. Do not expose SonarQube's port 9000 directly to the internet. Set the sonar.core.serverBaseURL property in SonarQube to your HTTPS URL so that generated links, webhook callbacks, and email notifications use the correct protocol.
How do I back up SonarQube running in Docker Compose?
Back up the PostgreSQL database using pg_dump by running docker compose exec db pg_dump -U sonar sonarqube and redirecting the output to a SQL file. Also back up the named volumes for SonarQube data and extensions using docker run with a temporary container that tars the volume contents. Schedule these backups with cron. For restoration, load the SQL dump with psql and restore volumes by extracting the tar archives. Always stop SonarQube before restoring to avoid data corruption.
How do I upgrade SonarQube in a Docker Compose setup?
First, back up your PostgreSQL database and SonarQube volumes. Then update the image tag in your docker-compose.yml to the new version. Run docker compose down to stop the current instance, pull the new image with docker compose pull, and start with docker compose up -d. SonarQube automatically runs database migrations on first startup with the new version. Check the logs with docker compose logs -f sonarqube to confirm the migration completes. Never skip major versions - upgrade through each LTS release sequentially.
What environment variables does SonarQube accept in Docker?
The most important environment variables are SONAR_JDBC_URL, SONAR_JDBC_USERNAME, and SONAR_JDBC_PASSWORD for database connection. SONAR_WEB_JAVAOPTS and SONAR_CE_JAVAOPTS control JVM heap for the web server and compute engine. SONAR_SEARCH_JAVAOPTS controls Elasticsearch JVM settings. SONAR_WEB_PORT sets the internal port (default 9000). You can also pass any sonar.properties setting as an environment variable by converting dots to underscores and uppercasing - for example, sonar.core.serverBaseURL becomes SONAR_CORE_SERVERBASEURL.
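The dots-to-underscores conversion is mechanical, so it can be scripted. A small helper with two keys from this guide as examples:

```shell
# Convert a sonar.properties key to its environment-variable form:
# dots become underscores and everything is uppercased.

to_env_var() {
  echo "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]'
}

to_env_var sonar.core.serverBaseURL   # SONAR_CORE_SERVERBASEURL
to_env_var sonar.ce.workerCount       # SONAR_CE_WORKERCOUNT
```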
How do I run multiple SonarQube instances with Docker Compose?
Create separate Docker Compose files or use multiple service definitions with distinct container names, ports, volumes, and network names. Each instance needs its own PostgreSQL database and its own set of named volumes. Map different host ports to each instance - for example, 9000 for production and 9001 for staging. Use a shared reverse proxy to route traffic based on subdomain or path. Ensure each instance has sufficient memory allocation and that the host has enough resources to support all instances.
How do I configure SonarQube behind Nginx reverse proxy in Docker Compose?
Add an Nginx service to your docker-compose.yml on the same Docker network as SonarQube. Create an nginx.conf that sets proxy_pass to http://sonarqube:9000, includes proxy_set_header directives for Host, X-Real-IP, and X-Forwarded-Proto, and configures SSL with your certificate files. Mount the Nginx config and SSL certificates as volumes. Remove the ports mapping from the SonarQube service so it is only accessible through Nginx. Set sonar.core.serverBaseURL in SonarQube to your external HTTPS URL.
Is Docker Compose suitable for production SonarQube deployments?
Docker Compose is suitable for small to medium production deployments serving teams of up to 200 developers. It provides reproducible configuration, easy upgrades, and straightforward backup procedures. For larger organizations with high availability requirements, Kubernetes with Helm charts is more appropriate because it supports horizontal scaling of compute engines, automated failover, and rolling updates. SonarQube Data Center Edition is designed for high availability but requires Enterprise licensing.
Why is SonarQube slow when running in Docker Compose?
The most common causes of slow SonarQube performance in Docker are insufficient memory allocation (especially for Elasticsearch), running on spinning disk storage instead of SSD, and not setting proper JVM heap sizes. Ensure your container has at least 4 GB of memory, use SSD-backed volumes, set SONAR_CE_JAVAOPTS and SONAR_WEB_JAVAOPTS to allocate appropriate heap sizes, and verify that vm.max_map_count is set correctly. On Docker Desktop, ensure the VM is allocated enough CPU cores and memory in the Docker Desktop settings.