9 Docker tips you didn't know you needed
Richard Coker
October 31, 2024 · 1 min read
Docker is pretty much the go-to tool in modern app development, powering everything from testing to CI/CD pipelines and microservices, and Docker itself is easy to grasp, at least on the surface. In this guide, aimed at junior and mid-level engineers with basic Docker knowledge, I'll share 9 tips to help you "dock" into deeper waters (sorry, I just had to).
1. Keep an Eye on Container Activity
Keeping an eye on your container's logs is essential for debugging and understanding what's happening inside your app. Instead of running docker logs over and over (and over again), use the --follow flag to tail the logs in real time, much like tail -f works for log files.
docker logs --follow <container_name_or_id>
This lets you see the latest log entries as they happen, which is perfect for monitoring long-running processes or catching issues on the fly. It's such a simple yet powerful command, and it saves a whole lot of time when troubleshooting.
(Trust me, this will come in so handy.)
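If you only want the recent entries, --follow pairs nicely with the --tail and --since flags of docker logs; for example:
# Follow only the last 50 lines, starting from the past 10 minutes
docker logs --follow --tail 50 --since 10m <container_name_or_id>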
2. Secure Sensitive Data with Docker Secrets
When building Docker images, it's necessary to handle sensitive data like API keys, tokens, and passwords securely. If you pass sensitive data in plainly as environment variables or build arguments, it can be recovered from your built image, which is a no-no in production: anyone with access to the image has access to the credentials as well. So how do you solve this problem? I'm glad you asked! Docker BuildKit provides a streamlined way to manage secrets during the build process, ensuring they don't end up in your image layers. It is best practice to use secrets when passing sensitive data into builds.
# Enable Docker BuildKit
export DOCKER_BUILDKIT=1

# Build with a secret
docker build --secret id=my_secret,src=/path/to/secret/file.txt -t app:latest .
In your Dockerfile, you can reference it as follows:
# syntax=docker/dockerfile:1.2
FROM alpine:latest
# The secret is mounted at /run/secrets/<id> for this RUN step only,
# so it never persists in the image layers. (Avoid copying it into the
# filesystem, which would bake it into a layer.)
RUN --mount=type=secret,id=my_secret \
    cat /run/secrets/my_secret
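As a quick sanity check (using the image tag from the build command above), you can inspect the layer history; the secret value shouldn't appear in any layer or build instruction:
# Show the full build instructions for every layer
docker history --no-trunc app:latest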
3. Never Run Your Container as Root
One common pattern you’ll notice in Docker tutorials or sample Dockerfiles is that containers often run as the root user. But how can you tell if your container is running as root? It’s easy — just run docker exec -it your_container_name whoami, and it will confirm the user inside the container.
Why avoid running as root? Using root in your Docker image increases security risks, as vulnerabilities in your app could allow attackers to gain root access on the host machine. Running as a non-root user reduces the risk of privilege escalation and protects both your container and the host. The best practice is to create a non-root user in your Dockerfile, assigning only the permissions needed to run the application securely.
# Create a non-root user and group
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next

# Automatically leverage output traces to reduce image size
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

# Start the Next.js application
CMD ["node", "server.js"]
This example demonstrates creating a user (nextjs) and group (nodejs) and assigning ownership of the .next directory to the nextjs user. This is useful when you want to limit the privileges of the user running the container.
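To confirm it worked, the whoami check from the start of this tip applies here too:
docker exec -it <container_name> whoami
# nextjs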
4. Boost Build Speed with Container Registry Caching
You might already be familiar with Docker's built-in image layer caching, but did you know you can also cache from a container registry? This technique is particularly useful in CI/CD pipelines. Instead of pulling base images or dependencies every time, you can cache these layers in your container registry, giving your builds a serious speed boost, granted you've organised your Dockerfile effectively.
docker build --cache-from your-container-registry/your-base-image:tag -t your-new-image:latest .
When incorporating this into your pipeline, make sure the pipeline has the necessary access to your container registry, e.g. Docker Hub, AWS ECR, Google Container Registry, or Azure Container Registry.
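As a rough sketch of how this looks in a pipeline (the image name is a placeholder; note that when building with BuildKit, the image needs inline cache metadata for --cache-from to find it):
# Pull the previous image to seed the cache; don't fail on the first run
docker pull your-container-registry/app:latest || true

# Build using the pulled image as a cache source; BUILDKIT_INLINE_CACHE=1
# embeds cache metadata so future builds can reuse these layers
docker build \
  --cache-from your-container-registry/app:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t your-container-registry/app:latest .

# Push the image (and its cache metadata) back for the next run
docker push your-container-registry/app:latest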
5. Maintain a Tidy Docker Environment
Maintaining a tidy Docker environment is crucial for performance and storage management, especially when you're frequently building and running containers, which can quietly pile up gigabytes of wasted storage. The most effective tool for this task is the family of docker ... prune commands. They help you free up disk space by removing unused data like stopped containers, dangling images, and unused networks, each with a single command.
# Remove all stopped containers
docker container prune

# Remove dangling images
docker image prune

# Remove all unused images
docker image prune -a

# Remove all unused networks
docker network prune

# Comprehensive cleanup
docker system prune

# Remove all unused resources without prompting
docker system prune -af
Pairing docker system prune -af with the registry caching from tip 4 makes a powerful combo that can significantly improve the efficiency of your pipeline. But as with great power comes great responsibility: be careful before implementing this, since the -af flags delete without asking.
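Before letting a pipeline loose with the aggressive variant, it's worth checking how much space is actually reclaimable:
# Show disk usage per resource type, including reclaimable space
docker system df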
6. Manage Container Resources Wisely
Have you ever had your application or service slow down or crash unexpectedly because one of your containers was hogging all the resources? Managing resources in Docker is crucial to prevent such scenarios. By limiting CPU and memory usage, you can ensure that your containers play nicely with others on the host.
# Restrict CPU
docker run --name my_app --cpus="1.5" my_image

# Restrict memory
docker run --name my_app --memory="512m" my_image

# Restrict swap (--memory-swap is memory + swap combined)
docker run --name my_app --memory="512m" --memory-swap="1g" my_image
By setting resource limits, you ensure your containers run smoothly without overwhelming the host, enhancing performance and reliability. However, don't go overboard: overly strict limits can get containers killed repeatedly (for example by the OOM killer), leading to a never-ending crash loop. As with everything, balance is key!
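To verify your limits took effect and to watch live usage against them:
# Live CPU/memory usage per container, including the memory limit
docker stats my_app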
7. Enhance Reliability with Health Checks
Docker’s health checks are your secret weapon for maintaining reliable applications. By implementing health checks in your Dockerfile, you can automatically monitor the status of your running containers to ensure they’re functioning as expected.
You can define a health check using the HEALTHCHECK instruction, specifying a command that Docker will run to assess the container's health.
# ... your preceding Dockerfile configuration
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
In this example, Docker attempts to access a web service every 30 seconds. If it fails three consecutive times, the container is marked as unhealthy. This information is invaluable for orchestrators like Docker Swarm, which can take action, such as replacing the container, if it becomes unhealthy (note that Kubernetes ignores the Dockerfile HEALTHCHECK and uses its own liveness and readiness probes instead). By incorporating health checks into your Docker setup, you enhance application reliability and improve your deployment strategy.
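You can also query the health state of a running container directly:
# Prints "healthy", "unhealthy", or "starting"
docker inspect --format '{{.State.Health.Status}}' <container_name_or_id>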
8. Make the Most of Docker Restart Policies
Speaking of deployment strategy, one way to enhance the resilience of your Docker containers is by utilizing restart policies. These policies allow you to define how Docker should handle container failures, ensuring your applications remain available even after unexpected crashes.
Here’s a brief overview of the different Docker restart policies you can use, along with their descriptions:
- No Restart (no): This is the default policy. The container will not restart automatically if it stops.
- Always (always): The container will restart indefinitely, regardless of the exit status.
- Unless Stopped (unless-stopped): Similar to the `always` policy, but it won't restart the container if it was manually stopped.
- On Failure (on-failure): The container will restart only if it exits with a non-zero exit status, indicating an error.
# No restart
docker run --restart no my-container

# Always
docker run --restart always my-container

# Unless stopped
docker run --restart unless-stopped my-container

# On failure (retry up to 5 times)
docker run --restart on-failure:5 my-container
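A related trick: you can change the policy of an existing container with docker update, no recreation needed:
# Switch a running container to restart unless manually stopped
docker update --restart unless-stopped my-container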
9. Simplify Your Dockerfile with External Scripts
When building Docker images, it’s often necessary to execute scripts or commands as part of the container’s startup process. Instead of cluttering your Dockerfile with long commands, consider using external scripts for better organization and maintainability.
Take, for example, a script entrypoint.sh.
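The script itself isn't reproduced here; given the Django migration mentioned below, here's a minimal sketch of what such an entrypoint.sh might contain (the project name and the gunicorn server are assumptions for illustration):
#!/bin/sh
set -e

# Apply any pending Django migrations before the server starts
python manage.py migrate --noinput

# Hand off to the main process (hypothetical project name)
exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8000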
With the Dockerfile config:
# ... preceding Dockerfile config

# Copy the entrypoint script into the image
COPY entrypoint.sh .

# Make the entrypoint script executable
RUN chmod +x entrypoint.sh

# Set the entrypoint to the script
ENTRYPOINT ["./entrypoint.sh"]
This setup ensures that your Django application is correctly migrated before the server starts, enhancing reliability in your Dockerized environment. Now think of the possibilities: any startup logic you need, kept out of the Dockerfile and easy to maintain.
As a bonus tip for making it to the end: simply take some time to read Docker's documentation, as you can find a whole lot of handy things that aren't listed here.
As the lead DevSecOps engineer at Tanta Innovative Limited, I handle all things in between development, security, and operations. With over 3 years of expertise in software development, Linux administration, and cloud engineering, I specialize in DevOps engineering. I hold a BSc in Computer Science from the University of Lagos, and I'm passionate about bringing 'cool' ideas to life!