Screenshot API in CI/CD: Automated Visual Testing and Deployment Verification

2026-05-10 | Tags: [screenshot-api, cicd, github-actions, visual-testing, devops, tutorial]

Your deployment pipeline probably verifies that unit tests pass, that the build compiles, and that health checks return 200. It probably doesn't verify that the homepage still looks right after a CSS change merged at 11pm. Screenshot APIs close that gap — they give you visual verification as a first-class CI/CD step.

This post covers three patterns: deployment verification (did the deploy break the UI?), visual regression testing (did this PR change something visible?), and staging comparison (does staging match production before you push?).

GitHub Actions: Post-Deployment Visual Verification

The simplest starting point — after each deployment, screenshot key pages and fail the workflow if a known-bad visual state is detected:

# .github/workflows/visual-verify.yml
name: Visual Verification

on:
  deployment_status:

jobs:
  visual-check:
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: pip install requests Pillow

      - name: Run visual verification
        env:
          SCREENSHOT_API_KEY: ${{ secrets.SCREENSHOT_API_KEY }}
          DEPLOY_URL: ${{ github.event.deployment_status.environment_url }}
        run: python scripts/visual_verify.py

      - name: Upload screenshots
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: visual-verification-screenshots
          path: screenshots/
          retention-days: 30

# scripts/visual_verify.py
import os
import sys
import requests
from pathlib import Path
from datetime import datetime

API_KEY = os.environ["SCREENSHOT_API_KEY"]
DEPLOY_URL = os.environ.get("DEPLOY_URL", "https://example.com")
SCREENSHOT_API = "https://hermesforge.dev/api/screenshot"

# Pages to verify after every deployment
VERIFY_PAGES = [
    {"path": "/", "name": "homepage", "min_bytes": 50_000},
    {"path": "/pricing", "name": "pricing", "min_bytes": 30_000},
    {"path": "/docs", "name": "docs", "min_bytes": 20_000},
]

output_dir = Path("screenshots")
output_dir.mkdir(exist_ok=True)

failures = []

for page in VERIFY_PAGES:
    url = DEPLOY_URL.rstrip("/") + page["path"]
    print(f"Checking {url}...")

    try:
        response = requests.get(
            SCREENSHOT_API,
            params={"url": url, "width": 1440, "format": "png", "delay": 3000},
            headers={"X-API-Key": API_KEY},
            timeout=60
        )
        response.raise_for_status()

        image_bytes = response.content
        filename = output_dir / f"{page['name']}.png"
        filename.write_bytes(image_bytes)

        # Minimum size check — catches blank/error pages
        if len(image_bytes) < page["min_bytes"]:
            failures.append(
                f"{page['name']}: image too small ({len(image_bytes)} bytes, "
                f"expected >= {page['min_bytes']}). Possible blank or error page."
            )
            print(f"  FAIL: image too small")
        else:
            print(f"  OK: {len(image_bytes):,} bytes → {filename}")

    except requests.HTTPError as e:
        failures.append(f"{page['name']}: HTTP {e.response.status_code}")
        print(f"  FAIL: {e}")
    except Exception as e:
        failures.append(f"{page['name']}: {e}")
        print(f"  FAIL: {e}")

if failures:
    print("\n❌ Visual verification failed:")
    for f in failures:
        print(f"  - {f}")
    sys.exit(1)

print(f"\n✅ All {len(VERIFY_PAGES)} pages verified successfully")
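
The byte-size heuristic catches fully blank pages, but a compressed screenshot can clear the threshold and still be visually empty. Since the workflow already installs Pillow, a pixel-variance check is a sharper blank-page detector. A sketch, with the threshold value as an assumption to tune against real captures:

```python
# A sketch: flag near-blank screenshots by pixel variance.
# The 5.0 threshold is an assumption — tune it against your pages.
from io import BytesIO

from PIL import Image, ImageStat


def looks_blank(image_bytes: bytes, stddev_threshold: float = 5.0) -> bool:
    """Return True if the screenshot is (nearly) a solid color.

    A blank or minimal error page renders as a near-uniform image, so the
    standard deviation of its grayscale pixel values stays tiny.
    """
    img = Image.open(BytesIO(image_bytes)).convert("L")  # grayscale
    return ImageStat.Stat(img).stddev[0] < stddev_threshold
```

In visual_verify.py this would sit alongside the min_bytes check, each catching failure modes the other misses.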

Visual Regression Testing with Baseline Comparison

A more sophisticated pattern: maintain screenshot baselines in your repo, and fail the CI run when a PR introduces unexpected visual changes:

# scripts/visual_regression.py
"""
Visual regression test runner.

Usage:
  Update baselines: python visual_regression.py --update-baselines
  Run regression:  python visual_regression.py
"""
import argparse
import hashlib
import json
import os
import sys
import requests
from pathlib import Path
from datetime import datetime, timezone

API_KEY = os.environ["SCREENSHOT_API_KEY"]
BASE_URL = os.environ.get("TEST_URL", "https://staging.example.com")
SCREENSHOT_API = "https://hermesforge.dev/api/screenshot"

PAGES = [
    {"path": "/", "name": "homepage", "width": 1440},
    {"path": "/pricing", "name": "pricing", "width": 1440},
    {"path": "/login", "name": "login", "width": 1440},
    {"path": "/", "name": "homepage-mobile", "width": 390},
]

BASELINES_DIR = Path("test/visual-baselines")
DIFF_DIR = Path("test/visual-diffs")
RESULTS_FILE = Path("test/visual-regression-results.json")


def capture(url: str, width: int) -> bytes:
    response = requests.get(
        SCREENSHOT_API,
        params={"url": url, "width": width, "format": "png", "delay": 2000},
        headers={"X-API-Key": API_KEY},
        timeout=60
    )
    response.raise_for_status()
    return response.content


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def update_baselines():
    BASELINES_DIR.mkdir(parents=True, exist_ok=True)
    manifest = {}

    for page in PAGES:
        url = BASE_URL.rstrip("/") + page["path"]
        print(f"Capturing baseline: {url} @ {page['width']}px...")
        data = capture(url, page["width"])
        path = BASELINES_DIR / f"{page['name']}.png"
        path.write_bytes(data)
        manifest[page["name"]] = {
            "sha256": sha256(data),
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "url": url,
            "width": page["width"],
            "size_bytes": len(data),
        }
        print(f"  Saved {len(data):,} bytes → {path}")

    manifest_path = BASELINES_DIR / "manifest.json"
    manifest_path.write_text(json.dumps(manifest, indent=2))
    print(f"\nBaselines updated. Commit {BASELINES_DIR}/ to version control.")


def run_regression():
    DIFF_DIR.mkdir(parents=True, exist_ok=True)
    manifest_path = BASELINES_DIR / "manifest.json"

    if not manifest_path.exists():
        print("No baselines found. Run with --update-baselines first.")
        sys.exit(1)

    manifest = json.loads(manifest_path.read_text())
    results = []
    changes_detected = 0

    for page in PAGES:
        url = BASE_URL.rstrip("/") + page["path"]
        name = page["name"]
        print(f"Testing: {url} @ {page['width']}px...")

        baseline_path = BASELINES_DIR / f"{name}.png"
        if not baseline_path.exists():
            print(f"  SKIP: no baseline for {name}")
            continue

        current = capture(url, page["width"])
        current_hash = sha256(current)
        baseline_hash = manifest.get(name, {}).get("sha256", "")

        result = {
            "name": name,
            "url": url,
            "width": page["width"],
            "baseline_hash": baseline_hash,
            "current_hash": current_hash,
            "changed": current_hash != baseline_hash,
            "current_size_bytes": len(current),
        }

        if current_hash != baseline_hash:
            changes_detected += 1
            diff_path = DIFF_DIR / f"{name}-current.png"
            diff_path.write_bytes(current)
            result["diff_file"] = str(diff_path)
            print(f"  CHANGED: hash {baseline_hash[:8]} → {current_hash[:8]}")
        else:
            print(f"  OK: unchanged")

        results.append(result)

    RESULTS_FILE.write_text(json.dumps(results, indent=2))

    if changes_detected > 0:
        print(f"\n⚠️  {changes_detected} page(s) have visual changes.")
        print(f"Review diffs in {DIFF_DIR}/")
        print(f"If changes are expected, run --update-baselines and commit.")
        sys.exit(1)
    else:
        print(f"\n✅ No visual regressions detected ({len(results)} pages checked)")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--update-baselines", action="store_true")
    args = parser.parse_args()

    if args.update_baselines:
        update_baselines()
    else:
        run_regression()
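
One caveat with the hash comparison in run_regression(): SHA-256 equality flags any byte-level difference, so pages with timestamps, carousels, or ads will churn constantly. If a small tolerance is acceptable, a pixel-diff ratio can replace the hash check. A sketch using Pillow (not installed by the workflows above; the per-pixel threshold of 10 and the 1% gate are assumptions):

```python
# A sketch: tolerance-based comparison instead of exact hash equality.
# The per-pixel threshold (10) and tolerance (1%) are assumptions to tune.
from io import BytesIO

from PIL import Image, ImageChops


def diff_ratio(baseline: bytes, current: bytes) -> float:
    """Fraction of pixels that differ between two screenshots (0.0-1.0)."""
    a = Image.open(BytesIO(baseline)).convert("RGB")
    b = Image.open(BytesIO(current)).convert("RGB")
    if a.size != b.size:
        return 1.0  # different dimensions: treat as fully changed
    diff = ImageChops.difference(a, b).convert("L")
    changed = sum(1 for px in diff.getdata() if px > 10)  # ignore tiny noise
    return changed / (a.size[0] * a.size[1])


def is_regression(baseline: bytes, current: bytes, tolerance: float = 0.01) -> bool:
    """Fail only when more than `tolerance` of the pixels changed."""
    return diff_ratio(baseline, current) > tolerance
```

The trade-off: tolerance hides genuinely tiny regressions (a shifted border, a changed icon), so keep the gate low and review diffs rather than raising it.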

# .github/workflows/visual-regression.yml
name: Visual Regression

on:
  pull_request:
    branches: [main]

jobs:
  visual-regression:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: pip install requests

      - name: Deploy preview
        id: deploy
        # Your deployment step here — sets TEST_URL output
        run: echo "TEST_URL=https://preview-${{ github.sha }}.example.com" >> $GITHUB_OUTPUT

      - name: Run visual regression
        env:
          SCREENSHOT_API_KEY: ${{ secrets.SCREENSHOT_API_KEY }}
          TEST_URL: ${{ steps.deploy.outputs.TEST_URL }}
        run: python scripts/visual_regression.py

      - name: Upload diff screenshots on failure
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: visual-regression-diffs-${{ github.sha }}
          path: test/visual-diffs/
          retention-days: 7

      - name: Comment PR with diff summary
        if: failure()
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const results = JSON.parse(fs.readFileSync('test/visual-regression-results.json', 'utf8'));
            const changes = results.filter(r => r.changed);
            const body = `## Visual Regression Detected\n\n${changes.length} page(s) changed:\n\n` +
              changes.map(r => `- **${r.name}** (${r.url})`).join('\n') +
              `\n\nDownload diff screenshots from the workflow artifacts.`;
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body
            });

Staging vs. Production Comparison

Before a production deploy, verify staging visually matches what you expect:

# scripts/staging_compare.py
"""Compare staging visually against production before deploy."""
import os
import sys
import requests
import hashlib

API_KEY = os.environ["SCREENSHOT_API_KEY"]
PROD_URL = os.environ.get("PROD_URL", "https://example.com")
STAGING_URL = os.environ.get("STAGING_URL", "https://staging.example.com")
SCREENSHOT_API = "https://hermesforge.dev/api/screenshot"

COMPARE_PAGES = ["/", "/pricing", "/docs", "/login"]
ALLOWED_DIFF_PAGES = 0  # Set to 1 or 2 if minor differences are acceptable

def capture(base_url: str, path: str) -> bytes:
    url = base_url.rstrip("/") + path
    response = requests.get(
        SCREENSHOT_API,
        params={"url": url, "width": 1440, "format": "png", "delay": 3000},
        headers={"X-API-Key": API_KEY},
        timeout=60
    )
    response.raise_for_status()
    return response.content

differences = []

for path in COMPARE_PAGES:
    print(f"Comparing {path}...")
    prod = capture(PROD_URL, path)
    staging = capture(STAGING_URL, path)

    prod_hash = hashlib.sha256(prod).hexdigest()
    staging_hash = hashlib.sha256(staging).hexdigest()

    if prod_hash != staging_hash:
        differences.append(path)
        print(f"  DIFF: prod={prod_hash[:8]} staging={staging_hash[:8]}")
        # Save both for manual review
        name = path.strip("/") or "home"
        with open(f"compare-prod-{name}.png", "wb") as f:
            f.write(prod)
        with open(f"compare-staging-{name}.png", "wb") as f:
            f.write(staging)
    else:
        print(f"  MATCH")

if len(differences) > ALLOWED_DIFF_PAGES:
    print(f"\n❌ {len(differences)} visual differences found. Review before deploying.")
    sys.exit(1)

print(f"\n✅ Staging matches production. Safe to deploy.")
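
Screenshot services occasionally time out, and a single transient failure shouldn't block a deploy. A small retry wrapper around capture() avoids spurious pipeline failures. A sketch; the attempt count and backoff values are arbitrary:

```python
# A sketch: retry transient screenshot failures with linear backoff.
# In real use, narrow the except clause to requests.Timeout /
# requests.ConnectionError so genuine HTTP errors still fail fast.
import time


def with_retries(fn, attempts: int = 3, backoff_s: float = 2.0):
    """Call fn(), retrying on exception; re-raise the last error if all fail."""
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as err:
            last_err = err
            if attempt < attempts:
                time.sleep(backoff_s * attempt)
    raise last_err

# In staging_compare.py:
#   prod = with_retries(lambda: capture(PROD_URL, path))
```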

GitLab CI Configuration

The same patterns apply to GitLab CI:

# .gitlab-ci.yml (relevant section)
visual-verify:
  stage: verify
  image: python:3.12-slim
  variables:
    # $CI_ENVIRONMENT_URL is only set when the job declares an
    # `environment:` with a `url:` — otherwise the script falls back
    # to its default URL
    DEPLOY_URL: $CI_ENVIRONMENT_URL
  before_script:
    - pip install requests
  script:
    - python scripts/visual_verify.py
  artifacts:
    when: always
    paths:
      - screenshots/
    expire_in: 1 week
  rules:
    - if: $CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "main"

When Visual CI Makes Sense

Visual regression testing adds value when:

- Your frontend has complex CSS or JavaScript rendering that unit tests don't cover
- You deploy frequently and want automated verification rather than manual QA
- You have a small QA team and need automated visual coverage
- You've been surprised by visual regressions in production before

It's overkill when:

- Your UI changes rarely and manual verification is fast
- Your pages are fully server-rendered with minimal dynamic content
- You already have comprehensive end-to-end tests that cover UI states

The implementation above is designed to fail CI on unexpected changes — which means keeping baselines up to date when intentional changes land. The workflow is: PR changes UI → visual regression fails → developer runs --update-baselines → commits updated baselines alongside the change → CI passes. This creates a visual change log as a side effect of the baseline commit history.


The screenshot API works with any CI/CD platform. The free tier covers most small-team usage, and the first 100 requests are free without an account.