Container Image Scanning: Catching Vulnerabilities Before They Ship
Container images carry hundreds of dependencies you didn't write. Learn how to scan them with Trivy, Grype, Snyk, and Docker Scout, manage false positives, choose minimal base images, and automate dependency updates.

Your Container Is Only as Secure as Its Dependencies
Every Docker image you ship to production carries hundreds of packages you didn't write -- OS libraries, language runtimes, transitive dependencies. Any one of them could have a known vulnerability. Container image scanning analyzes your images for known CVEs (Common Vulnerabilities and Exposures) before they reach production, catching security issues in CI instead of in a breach notification.
The tooling has improved dramatically. Modern scanners check OS packages, language-specific dependencies, misconfigurations, and even embedded secrets -- all in seconds. The harder problem isn't running the scan. It's handling the results without drowning in false positives or blocking every deployment.
What Is Container Image Scanning?
Definition: Container image scanning is the automated process of analyzing container images for known security vulnerabilities, misconfigurations, and compliance violations. Scanners inspect OS packages, application dependencies, and image configuration against vulnerability databases like the National Vulnerability Database (NVD) to identify risks before deployment.
How Container Scanning Works: Step by Step
- Extract image layers -- the scanner unpacks the image filesystem, including all layers from the base image and your application layers.
- Identify packages -- scan OS package managers (apt, apk, yum) and language-specific manifests (package-lock.json, go.sum, requirements.txt, Gemfile.lock).
- Match against vulnerability databases -- each package version is checked against CVE databases (NVD, GitHub Advisory Database, vendor-specific advisories).
- Assess severity -- vulnerabilities are rated using CVSS scores: Critical (9.0-10.0), High (7.0-8.9), Medium (4.0-6.9), Low (0.1-3.9).
- Report results -- output a list of vulnerabilities with affected packages, fixed versions (if available), and remediation guidance.
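The severity bands in the last two steps can be sketched in a few lines of Python. The mapping below follows the CVSS score ranges listed above; the CVE IDs and scores are made-up examples, not real findings:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score to the standard severity band."""
    if score >= 9.0:
        return "CRITICAL"
    if score >= 7.0:
        return "HIGH"
    if score >= 4.0:
        return "MEDIUM"
    if score > 0.0:
        return "LOW"
    return "NONE"

# Hypothetical findings: package CVEs paired with their CVSS base scores
findings = {"CVE-2024-0001": 9.8, "CVE-2024-0002": 5.3, "CVE-2024-0003": 2.1}
buckets = {cve: cvss_severity(score) for cve, score in findings.items()}
```

Real scanners also honor vendor-assigned severities, which can differ from the raw CVSS band, but the score ranges are the baseline.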
Scanner Comparison: Trivy, Grype, Snyk, Docker Scout
Four scanners dominate the ecosystem, each with different strengths:
| Feature | Trivy | Grype | Snyk Container | Docker Scout |
|---|---|---|---|---|
| Pricing | Free (open source) | Free (open source) | Free tier / Team from $25/dev/month | Free tier / Pro from $9/month |
| Scan speed | Fast (5-15s) | Very fast (3-10s) | Medium (10-30s) | Fast (5-15s) |
| OS packages | Excellent | Excellent | Excellent | Excellent |
| Language deps | Excellent (20+ languages) | Good (15+ languages) | Excellent | Good |
| Misconfiguration | Yes (Dockerfile, K8s, Terraform) | No | Yes (IaC scanning) | No |
| Secret detection | Yes | No | No (separate tool) | No |
| SBOM generation | Yes (SPDX, CycloneDX) | Yes (via Syft) | Yes | Yes |
| CI integration | Native GitHub Action | Native GitHub Action | Native GitHub Action | Docker CLI plugin |
| Fix suggestions | Shows fixed versions | Shows fixed versions | Upgrade paths + PRs | Base image recommendations |
My recommendation: Start with Trivy. It's free, fast, covers the widest range of scan targets (images, filesystems, repos, Kubernetes, IaC), and has the richest feature set of any open-source scanner. Use Snyk if you want automated fix PRs and your organization is willing to pay for commercial tooling.
Integrating Trivy Into GitHub Actions
Here's a practical CI pipeline that builds a Docker image and scans it before pushing to a registry:
```yaml
name: Build and Scan
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write  # for uploading SARIF results
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Run Trivy scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'  # fail the build on critical/high vulns

      - name: Upload SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()  # upload even if scan found vulns
        with:
          sarif_file: 'trivy-results.sarif'

      - name: Push to registry (only if scan passes)
        if: github.ref == 'refs/heads/main'
        run: |
          docker tag myapp:${{ github.sha }} ghcr.io/myorg/myapp:${{ github.sha }}
          docker push ghcr.io/myorg/myapp:${{ github.sha }}
```
Pro tip: Upload scan results as SARIF to GitHub's Security tab. This integrates vulnerability findings directly into your pull request reviews, so developers see security issues alongside code review comments. It also provides a historical view of your security posture across the repository.
Integrating Grype Into GitLab CI
Grype is Anchore's open-source scanner and pairs with Syft for SBOM generation. It's typically the fastest of the four in scan-time benchmarks and works well in GitLab CI:
```yaml
# .gitlab-ci.yml
stages:
  - build
  - scan
  - push

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker save $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA > image.tar
  artifacts:
    paths:
      - image.tar

scan:
  stage: scan
  image:
    name: anchore/grype:latest
    entrypoint: [""]  # override the image entrypoint so GitLab can run script commands
  script:
    - grype docker-archive:image.tar --fail-on critical
    - grype docker-archive:image.tar -o json > grype-report.json
  artifacts:
    paths:
      - grype-report.json
    when: always

push:
  stage: push
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"  # authenticate before pushing
    - docker load < image.tar
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  only:
    - main
```
The False Positive Problem
The biggest complaint about container scanning isn't the scanning itself -- it's the noise. A typical base image scan returns 50-200 vulnerabilities, most of which are in packages your application never actually calls. Here's how to deal with it:
Strategy 1: Use Minimal Base Images
The fastest way to reduce vulnerabilities is to ship fewer packages:
| Base Image | Typical CVE Count | Size | Use Case |
|---|---|---|---|
| ubuntu:24.04 | 50-150 | ~78 MB | When you need apt and a full OS |
| node:22-slim | 20-60 | ~220 MB | Node.js apps needing some OS tools |
| node:22-alpine | 5-15 | ~130 MB | Node.js apps, minimal attack surface |
| gcr.io/distroless/nodejs22 | 0-5 | ~130 MB | Node.js apps, maximum security |
| scratch | 0 | 0 MB | Statically compiled binaries (Go, Rust) |
Pro tip: Google's distroless images contain only the application runtime and its dependencies -- no shell, no package manager, no utilities. This dramatically reduces the attack surface and CVE count. The tradeoff is that you can't shell into a distroless container for debugging. Use a debug variant (distroless/nodejs22:debug) in development and the standard variant in production.
Strategy 2: Ignore Unfixable Vulnerabilities
Many CVEs in base images have no available fix. Trivy and Grype let you filter these out:
```bash
# Only show vulnerabilities that have a fix available
trivy image --ignore-unfixed myapp:latest

# Create a .trivyignore file for known acceptable risks
echo "CVE-2023-12345" >> .trivyignore
echo "CVE-2023-67890 # accepted risk: not reachable in our code" >> .trivyignore
trivy image myapp:latest
```
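To make the effect of these flags concrete, here is a small Python sketch of the same filtering applied to a hand-written slice of Trivy's JSON output. The field names follow Trivy's report schema, but the CVE IDs and versions are placeholders:

```python
# A simplified slice of one Results[].Vulnerabilities array from a Trivy JSON report
report = [
    {"VulnerabilityID": "CVE-2023-12345", "FixedVersion": "1.2.4"},
    {"VulnerabilityID": "CVE-2023-67890", "FixedVersion": ""},  # no fix published yet
    {"VulnerabilityID": "CVE-2023-11111", "FixedVersion": "2.0.1"},
]

# Contents of the .trivyignore file: CVEs accepted as known risks
ignored = {"CVE-2023-12345"}

# Keep only findings that have a fix available AND aren't an accepted risk,
# which is what --ignore-unfixed plus .trivyignore do together
actionable = [
    v for v in report
    if v["FixedVersion"] and v["VulnerabilityID"] not in ignored
]
```

After both filters, only CVE-2023-11111 remains actionable: it has a fix and hasn't been accepted as a risk.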
Strategy 3: VEX (Vulnerability Exploitability Exchange)
VEX statements let you formally document that a vulnerability doesn't affect your application because the vulnerable code path isn't reachable. This is more rigorous than a .trivyignore file and can be shared across teams:
```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "statements": [
    {
      "vulnerability": { "name": "CVE-2023-12345" },
      "products": [{ "name": "myapp" }],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```
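A consumer of that document effectively filters findings the way this Python sketch does. The structures are heavily simplified; real tooling that understands OpenVEX handles full documents, product identifiers, and the other status values:

```python
# Flattened view of the VEX statements above (hypothetical CVE IDs)
vex_statements = [
    {"vulnerability": "CVE-2023-12345", "product": "myapp", "status": "not_affected"},
]

# Raw scanner findings for the "myapp" image
findings = ["CVE-2023-12345", "CVE-2023-99999"]

# Suppress any finding that a VEX statement marks not_affected for this product
suppressed = {
    s["vulnerability"] for s in vex_statements
    if s["product"] == "myapp" and s["status"] == "not_affected"
}
remaining = [cve for cve in findings if cve not in suppressed]
```

The payoff over a plain ignore file is that each suppression carries a machine-readable justification that other teams can audit.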
Keeping Images Updated With Dependabot and Renovate
Scanning catches existing vulnerabilities; automated dependency updates prevent them from accumulating. Both Dependabot and Renovate can update base images and application dependencies automatically.
Dependabot for Docker Base Images
```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: docker
    directory: "/"
    schedule:
      interval: weekly
    reviewers:
      - "platform-team"

  - package-ecosystem: npm
    directory: "/"
    schedule:
      interval: weekly
    open-pull-requests-limit: 10
```
Renovate for More Control
```json
{
  "extends": ["config:recommended"],
  "docker-compose": { "enabled": true },
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchPackageNames": ["node"],
      "allowedVersions": "/^22-/",
      "automerge": true
    },
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true,
      "automergeType": "branch"
    }
  ]
}
```
Renovate's package rules give you fine-grained control: auto-merge patch updates, restrict major version bumps, and group related updates into a single PR. For container security, the most impactful setting is auto-merging base image updates -- a weekly node:22-alpine rebuild picks up OS-level security patches automatically.
Multi-Stage Builds for Smaller, Safer Images
Multi-stage builds are the single most effective technique for reducing image size and vulnerability count. Build in one stage with all dev dependencies, then copy only the production artifacts to a minimal final stage:
```dockerfile
# Stage 1: Build
FROM node:22-alpine AS build
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile
COPY . .
RUN pnpm build
RUN pnpm prune --prod  # drop dev dependencies so they don't ship in the final image

# Stage 2: Production (minimal image)
FROM gcr.io/distroless/nodejs22-debian12
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./
EXPOSE 3000
CMD ["dist/main.js"]
```
The build stage contains compilers, dev dependencies, and build tools -- none of which ship in the final image. The production stage contains only the runtime and your compiled application.
Watch out: When using multi-stage builds, scan the final stage, not the build stage. The build stage will have far more vulnerabilities because it includes dev dependencies, but those packages never reach production. Configure your CI scanner to target the final image tag.
Frequently Asked Questions
What is container image scanning?
Container image scanning is the automated analysis of Docker and OCI container images for known security vulnerabilities, misconfigurations, and compliance violations. Scanners inspect operating system packages and application dependencies against vulnerability databases like the NVD, rating each finding by severity (Critical, High, Medium, Low) so teams can prioritize remediation.
Which container scanner should I use?
Start with Trivy if you want a free, comprehensive scanner that covers OS packages, language dependencies, IaC misconfigurations, and secrets in a single tool. Use Grype if scan speed is your top priority. Choose Snyk Container if you want automated fix PRs and your organization has budget for commercial tooling. Docker Scout integrates tightly with Docker Desktop for local development scanning.
How do I reduce false positives in container scanning?
Three approaches: use minimal base images (Alpine, distroless, scratch) to eliminate unnecessary packages, filter unfixable vulnerabilities with --ignore-unfixed, and use VEX statements to formally document that specific CVEs don't affect your application because the vulnerable code path isn't reachable. A .trivyignore or .grype.yaml file suppresses accepted risks.
What is a distroless container image?
Distroless images, maintained by Google, contain only the application runtime and its essential dependencies -- no shell, no package manager, no OS utilities. This minimizes the attack surface and dramatically reduces CVE count. The tradeoff is that you cannot shell into the container for debugging. Use the debug variant during development and the standard variant in production.
How often should I scan container images?
Scan on every build in CI to catch new vulnerabilities introduced by code or dependency changes. Additionally, run scheduled scans (daily or weekly) against images already deployed to your registry, since new CVEs are published constantly against existing packages. Most registries (ECR, GCR, Docker Hub) support automated scanning on push.
What is an SBOM and why does it matter for container security?
An SBOM (Software Bill of Materials) is a complete inventory of every package and dependency in your container image. It matters because you can't secure what you can't see. When a new zero-day vulnerability is disclosed, an SBOM lets you instantly determine which of your images are affected. Trivy and Syft generate SBOMs in SPDX and CycloneDX formats.
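The zero-day lookup works roughly like this Python sketch, which models each SBOM as a simple name/version component list (loosely following CycloneDX's component shape; the image names and versions are invented):

```python
# One simplified SBOM per image in the registry: just component name/version pairs
sboms = {
    "myapp:1.0": [
        {"name": "openssl", "version": "3.0.7"},
        {"name": "zlib", "version": "1.2.13"},
    ],
    "worker:2.1": [
        {"name": "openssl", "version": "3.1.4"},
    ],
}

def images_with(package: str, version: str) -> list[str]:
    """Return the images whose SBOM contains this exact package version."""
    return [
        image for image, components in sboms.items()
        if any(c["name"] == package and c["version"] == version for c in components)
    ]
```

When a zero-day lands against, say, openssl 3.0.7, this query tells you immediately which images to rebuild, with no re-scanning required.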
Should I block deployments on any vulnerability finding?
Block on Critical and High severity vulnerabilities that have available fixes. Don't block on Medium/Low or unfixable vulnerabilities -- you'll grind deployments to a halt. Use a policy like "fail CI on Critical/High with fix available, warn on everything else." Review and triage warnings weekly. The goal is actionable security gates, not zero-vulnerability perfection.
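That policy is simple enough to express directly. A Python sketch, assuming a simplified findings structure (the CVE IDs and versions are invented); the 0/1 return value mirrors the exit-code convention CI systems use:

```python
def gate(findings: list[dict]) -> int:
    """Return 1 (fail CI) if any Critical/High finding has a fix available, else 0."""
    blocking = [
        f for f in findings
        if f["severity"] in ("CRITICAL", "HIGH") and f.get("fixed_version")
    ]
    # Everything else is a warning, not a blocker: surface it for weekly triage
    for f in findings:
        if f not in blocking:
            print(f"WARN: {f['id']} ({f['severity']}) - triage weekly")
    return 1 if blocking else 0

findings = [
    {"id": "CVE-2024-1111", "severity": "CRITICAL", "fixed_version": "1.2.4"},
    {"id": "CVE-2024-2222", "severity": "MEDIUM", "fixed_version": None},
    {"id": "CVE-2024-3333", "severity": "HIGH", "fixed_version": None},  # no fix: warn only
]
exit_code = gate(findings)
```

Note that the unfixed HIGH finding only warns: failing the build on a vulnerability nobody can fix yet just trains the team to ignore the gate.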
Conclusion
Container image scanning is table stakes for production security. The scanners are free, fast, and integrate into any CI pipeline in under 30 minutes. The real work is managing the results: choosing minimal base images to reduce noise, filtering false positives, and keeping dependencies updated automatically.
Start by adding Trivy to your CI pipeline with --severity CRITICAL,HIGH --exit-code 1. Switch your base images to Alpine or distroless. Set up Dependabot or Renovate for automated base image updates. These three changes will catch the majority of vulnerabilities before they ship and keep your images lean and current without manual effort.
Written by
Abhishek Patel
Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.