CI/CD Pipeline Explained: From Code to Production (Step-by-Step)

A beginner-to-advanced guide explaining CI/CD pipelines, tools involved, automation strategies, and real-world workflows.

Abhishek Patel · 11 min read


Every Manual Deploy Is a Risk You Chose to Take

I've watched a senior engineer fat-finger a production deploy at 4 PM on a Friday, taking down a payment service for 45 minutes. The fix? A CI/CD pipeline that would've caught the broken test, blocked the merge, and deployed automatically after passing every check. CI/CD isn't about automation for automation's sake -- it's about removing humans from the parts of the process where humans consistently make mistakes.

If you're still SSH-ing into servers to deploy, or running a bash script from your laptop, or manually clicking "merge" without a green build, this guide will walk you through building a pipeline that handles everything from commit to production. I'll cover the concepts, the tools, the patterns that work at scale, and the ones that don't.

What Is a CI/CD Pipeline?

Definition: A CI/CD pipeline is an automated workflow that takes code from a developer's commit through building, testing, and deploying to production. CI (Continuous Integration) automatically builds and tests every code change. CD (Continuous Delivery) automatically prepares releases for deployment. Continuous Deployment goes further by automatically deploying every change that passes all stages.

The pipeline metaphor is literal: code flows through stages, each one validating a different aspect. If any stage fails, the pipeline stops and the team is notified. No broken code reaches production. No untested changes slip through.

CI vs CD vs Continuous Deployment

| Concept | What It Does | Trigger | Human Approval |
|---|---|---|---|
| Continuous Integration | Build + test on every commit/PR | Push or PR event | None required |
| Continuous Delivery | Automatically prepare a deployable artifact | Merge to main | Manual deploy trigger |
| Continuous Deployment | Automatically deploy to production | Merge to main | None -- fully automated |

Most teams start with CI, graduate to Continuous Delivery, and some eventually reach Continuous Deployment. The jump from Delivery to Deployment requires high test coverage, feature flags, and robust monitoring -- otherwise you're just automatically shipping bugs faster.

Anatomy of a Production Pipeline

Here's the stage-by-stage breakdown of a real-world pipeline. Each stage should fail fast -- put the quickest checks first.

Stage 1: Source (Trigger)

The pipeline triggers on a git event: push to a branch, PR opened, or merge to main. Configure branch protection rules to require a passing pipeline before merging. This is non-negotiable -- if developers can bypass the pipeline, they will.

Stage 2: Build

Compile the code, resolve dependencies, and produce a build artifact. For compiled languages (Go, Java, Rust), this means producing a binary. For interpreted languages (Python, Node.js), this means installing dependencies and bundling. Docker builds happen here too -- produce a tagged container image.

Stage 3: Test

Run your test suite in order of speed and coverage:

  1. Linting and static analysis (5-30 seconds) -- ESLint, Prettier, Ruff, golangci-lint
  2. Unit tests (30-120 seconds) -- isolated, fast, high coverage
  3. Integration tests (2-10 minutes) -- database connections, API contracts, service interactions
  4. End-to-end tests (5-20 minutes) -- browser automation, full workflow validation

Pro tip: Parallelize test stages. Run lint, unit tests, and security scans simultaneously. Most CI platforms support parallel jobs -- use them. A 20-minute sequential pipeline often becomes 8 minutes when parallelized properly.
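
To make the parallelism concrete, here is a sketch of splitting a slow unit-test job into shards with a GitHub Actions matrix. It assumes a Jest-style runner that supports a `--shard` flag; the job name and scripts are illustrative.

```yaml
# Run the test suite as four parallel shards, one job per shard
test:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      shard: [1, 2, 3, 4]
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with: { node-version: 20 }
    - run: npm ci
    # Jest 28+ supports --shard; each job runs a quarter of the suite
    - run: npx jest --shard=${{ matrix.shard }}/4
```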

Stage 4: Security Scanning

Scan dependencies for known vulnerabilities (Snyk, Trivy, Dependabot), run SAST (static application security testing) on your code, and scan container images for CVEs. Block merges on critical/high vulnerabilities. This stage catches most known-vulnerability issues before they ever reach production.

Stage 5: Artifact Storage

Push the build artifact to a registry: Docker images to ECR/Docker Hub/GHCR, npm packages to npm/Artifactory, binaries to S3. Tag artifacts with the git commit SHA for traceability. Never deploy from source -- always deploy a pre-built, tested artifact.

Stage 6: Deploy to Staging

Automatically deploy to a staging environment that mirrors production. Run smoke tests against staging to verify the deployment works. This catches configuration issues, environment variable problems, and integration failures that don't show up in CI tests.
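
A smoke test can be as simple as polling a health endpoint after the rollout finishes; a minimal sketch, where the `/healthz` endpoint and staging hostname are illustrative:

```yaml
- name: Smoke test staging
  run: |
    # Poll the health endpoint for up to ~50 seconds before declaring failure
    for i in $(seq 1 10); do
      curl -fsS https://staging.myapp.com/healthz && exit 0
      sleep 5
    done
    echo "staging never became healthy" >&2
    exit 1
```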

Stage 7: Deploy to Production

For Continuous Delivery, this is a manual approval step. For Continuous Deployment, it's automatic. Either way, use a deployment strategy (rolling, blue-green, or canary) to minimize risk. Monitor error rates and latency for 10-15 minutes after deploy -- automated rollback on anomaly detection is the gold standard.
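
A rough sketch of what automated rollback can look like as pipeline steps, assuming a hypothetical check-error-rates.sh script that queries your monitoring system:

```yaml
- name: Verify deployment health
  id: verify
  continue-on-error: true
  run: ./scripts/check-error-rates.sh   # hypothetical: exits non-zero on anomalies
- name: Roll back on failure
  if: steps.verify.outcome == 'failure'
  run: |
    kubectl rollout undo deployment/myapp
    exit 1   # mark the pipeline run as failed
```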

GitHub Actions: A Complete Pipeline Example

```yaml
name: CI/CD Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
        ports: ['5432:5432']
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm test
      - run: npm run test:integration

  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          severity: CRITICAL,HIGH
          exit-code: 1   # fail the job (and block the merge) on findings

  build-and-push:
    needs: [lint, test, security]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # required to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/myorg/myapp:${{ github.sha }}

  deploy-staging:
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: staging
    steps:
      # Assumes kubectl on the runner is already configured for the target cluster
      - run: >-
          kubectl set image deployment/myapp
          myapp=ghcr.io/myorg/myapp:${{ github.sha }}
      - run: kubectl rollout status deployment/myapp

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://myapp.com
    steps:
      - run: >-
          kubectl set image deployment/myapp
          myapp=ghcr.io/myorg/myapp:${{ github.sha }}
      - run: kubectl rollout status deployment/myapp
```

CI/CD Tools Comparison

| Tool | Type | Pricing (Team) | Best For | Learning Curve |
|---|---|---|---|---|
| GitHub Actions | Cloud-hosted | Free (2,000 min/mo), $4/user after | GitHub-native projects | Low |
| GitLab CI | Cloud or self-hosted | Free (400 min/mo), $29/user/mo | GitLab-native, self-hosted | Low-Medium |
| Jenkins | Self-hosted | Free (open source) | Complex, customizable pipelines | High |
| CircleCI | Cloud-hosted | Free (6,000 min/mo), $15/user | Fast builds, Docker-native | Low |
| AWS CodePipeline | Cloud-hosted | $1/pipeline/mo + build minutes | AWS-heavy infrastructure | Medium |
| Buildkite | Hybrid | $15/user/mo | Self-hosted agents, scale | Medium |
| Dagger | Portable engine | Free (open source) | CI-agnostic pipelines | Medium |
| Argo CD | GitOps (CD only) | Free (open source) | Kubernetes GitOps | Medium-High |

Note: Jenkins is still the most deployed CI/CD tool globally, but its market share is declining. New projects should strongly consider GitHub Actions or GitLab CI unless you need Jenkins' plugin ecosystem or have existing Jenkinsfile infrastructure. The maintenance overhead of a Jenkins server is substantial.

Pipeline Optimization: Speed Matters

A slow pipeline is a pipeline developers circumvent. Target under 10 minutes for the full CI check and under 20 minutes for deploy-to-production. Here's how:

  1. Cache dependencies aggressively -- cache node_modules, pip packages, Go modules, Docker layers. A cold npm install takes 45 seconds; a cached one takes 3 seconds.
  2. Parallelize everything possible -- lint, unit tests, security scans, and type checking can all run simultaneously.
  3. Use incremental builds -- tools like Turborepo, Nx, and Bazel only rebuild what changed. In a monorepo, this can reduce build times by 80%.
  4. Run tests selectively -- only run tests affected by changed files. Jest's --changedSince flag and similar tools in other frameworks enable this.
  5. Use larger runners -- GitHub Actions' 4-core runners are 2x faster than default runners for CPU-bound builds. The cost difference is often worth the developer time saved.
  6. Optimize Docker builds -- multi-stage builds, proper layer ordering (dependencies before source code), and BuildKit caching can cut build times from 5 minutes to 30 seconds.
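
Item 1 in practice: most setup actions have built-in dependency caching keyed on the lockfile hash, and actions/cache covers everything else. A sketch (the Go cache path is illustrative):

```yaml
# Built-in caching: setup-node caches the npm cache dir keyed on package-lock.json
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: npm
- run: npm ci

# For toolchains without built-in support, actions/cache works directly
- uses: actions/cache@v4
  with:
    path: ~/.cache/go-build
    key: go-build-${{ runner.os }}-${{ hashFiles('**/go.sum') }}
```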

Pipeline Security: Don't Ship Your Secrets

Your CI/CD pipeline has access to production credentials, deploy keys, and API tokens. It's a high-value target.

Essential Security Practices

  1. Use environment-scoped secrets -- production secrets should only be accessible to the production deployment job, not to PR builds
  2. Never echo secrets in logs -- most CI tools mask secrets automatically, but custom scripts can leak them. Audit your logs.
  3. Pin action versions by SHA -- uses: actions/checkout@b4ffde65 not uses: actions/checkout@v4. Tag references can be hijacked.
  4. Restrict who can approve production deploys -- use GitHub environment protection rules or similar features
  5. Rotate secrets regularly -- automate rotation with tools like Vault or AWS Secrets Manager

Watch out: Pull requests from forks can access your CI environment. By default, GitHub Actions doesn't pass secrets to fork PRs, but some CI tools do. Always verify your CI tool's behavior with fork PRs. A malicious fork PR that exfiltrates secrets is a common attack vector for open-source projects.

GitOps: Declarative Deployments

GitOps extends CI/CD by making git the single source of truth for infrastructure and application state. Tools like Argo CD and Flux watch a git repository and automatically reconcile the cluster state to match the declared configuration.

Instead of kubectl apply in your pipeline, you update a YAML file in a config repo, and the GitOps controller deploys it. This gives you a complete audit trail (git history), easy rollbacks (git revert), and drift detection (the controller alerts when actual state diverges from declared state).
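
With Argo CD, the declared state is typically an Application resource; a minimal sketch, where the repo URL, path, and namespaces are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/myapp-config
    targetRevision: main
    path: overlays/production      # the YAML you update instead of running kubectl
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual drift back to the declared state
```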

Metrics That Matter

Track these DORA metrics to measure your pipeline's effectiveness:

| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment Frequency | Multiple/day | Weekly-Daily | Monthly-Weekly | Monthly+ |
| Lead Time for Changes | <1 hour | 1 day-1 week | 1 week-1 month | 1 month+ |
| Change Failure Rate | 0-15% | 16-30% | 16-30% | 46-60% |
| Time to Restore | <1 hour | <1 day | 1 day-1 week | 1 week+ |

Frequently Asked Questions

How long should a CI/CD pipeline take?

Target under 10 minutes for CI checks on pull requests and under 20 minutes for the full deploy-to-production flow. The DORA research shows elite teams have lead times under 1 hour from commit to production. If your pipeline takes 30+ minutes, developers will batch changes, reducing deploy frequency and increasing risk per deploy. Invest in caching, parallelization, and selective testing to hit the 10-minute target.

Do I need separate CI and CD tools?

Not usually. GitHub Actions, GitLab CI, and CircleCI handle both CI and CD in a single platform. The exception is Kubernetes-heavy environments where a dedicated GitOps tool like Argo CD handles the deployment side. In that case, your CI tool builds and tests, pushes an artifact, and updates a config repo -- Argo CD handles the actual deployment and reconciliation loop.

What test coverage percentage should I require?

Don't gate on a specific coverage number. 80% coverage with thoughtful tests beats 95% coverage with tests that just exercise code paths without meaningful assertions. Instead, require coverage on changed files -- if you modify a function, that function should have tests. Tools like Codecov and Coveralls can enforce diff coverage thresholds (e.g., 90% of new/changed lines must be covered).
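
With Codecov, diff-coverage gating looks roughly like this codecov.yml sketch (the 90% target is an example threshold, not a recommendation):

```yaml
coverage:
  status:
    patch:
      default:
        target: 90%          # 90% of new/changed lines must be covered
    project:
      default:
        informational: true  # report total coverage without blocking the PR
```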

Should I use monorepo or polyrepo CI/CD?

Monorepos simplify dependency management and cross-service changes but complicate CI -- you need tools like Turborepo or Nx to avoid rebuilding everything on every commit. Polyrepos have simpler per-repo pipelines but make cross-service changes harder (multiple PRs, coordinated deploys). Teams under 50 engineers usually benefit from a monorepo. Larger organizations often split into domain-specific repos.

How do I handle database migrations in CI/CD?

Run migrations as a separate pipeline step before deploying application code. Use tools like Flyway, Alembic, or Prisma Migrate. Always make migrations backward-compatible so the old application version works with the new schema during rolling deploys. Test migrations against a copy of production data in staging. Never run destructive migrations (dropping columns) in the same deploy as the code change -- separate them by at least one release cycle.
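
In pipeline terms, that means a dedicated migration job that gates the deploy. A sketch assuming a Prisma-style CLI and an illustrative secret name; swap in `flyway migrate` or `alembic upgrade head` for your tool:

```yaml
migrate:
  needs: build-and-push
  runs-on: ubuntu-latest
  environment: staging
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with: { node-version: 20 }
    - run: npm ci
    # Applies pending, backward-compatible migrations before any code deploys
    - run: npx prisma migrate deploy
      env:
        DATABASE_URL: ${{ secrets.STAGING_DATABASE_URL }}

# The deploy job then declares `needs: migrate`, so code ships only after the schema is ready
```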

What's the difference between blue-green and canary deployments?

Blue-green maintains two identical environments and switches traffic instantly from old (blue) to new (green). Rollback is instant -- just switch back. Canary gradually routes a small percentage of traffic (1-5%) to the new version, monitors for errors, and slowly increases the percentage. Canary is safer for detecting subtle bugs but slower to complete. Use blue-green for simple applications and canary for high-traffic services where gradual rollout catches issues before they affect all users.
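
One way to implement the canary steps declaratively is Argo Rollouts; a trimmed sketch (the pod selector and template are omitted for brevity):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 5                 # route 5% of traffic to the new version
        - pause: { duration: 10m }     # watch error rates before continuing
        - setWeight: 25
        - pause: { duration: 10m }
        - setWeight: 100               # full rollout
```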

How do I handle secrets and environment variables?

Never store secrets in your repository or pipeline configuration files. Use your CI platform's built-in secret management (GitHub Encrypted Secrets, GitLab CI Variables) for pipeline secrets. For application secrets in production, use a dedicated secrets manager like HashiCorp Vault, AWS Secrets Manager, or Doppler. Inject secrets as environment variables at runtime, never bake them into container images or build artifacts.
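
In GitHub Actions, that runtime injection is a per-step env block; DATABASE_URL and the deploy script here are illustrative:

```yaml
- name: Deploy
  env:
    # Pulled from GitHub Encrypted Secrets at runtime; values are masked in logs
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
  run: ./deploy.sh   # hypothetical script that reads configuration from env vars
```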

Build Your Pipeline Incrementally

Don't try to build the perfect pipeline on day one. Start with lint and unit tests on PRs. Add integration tests once your test infrastructure is stable. Add container scanning when you move to Docker. Add staging deploys when your team is comfortable with the workflow. Add production auto-deploy when your test coverage and monitoring give you confidence.

Every stage you add reduces risk and increases velocity -- but only if it's reliable. A flaky pipeline that fails randomly is worse than no pipeline at all, because developers learn to ignore failures. Fix flakiness immediately. Your pipeline is only as trustworthy as its least reliable stage.

Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
