# Monitor Website Uptime with Screenshot Proof
Standard uptime monitors tell you that your site went down. Screenshot-based monitoring shows you what it looked like when it happened. Was it a blank page? An error message? A partial render? A screenshot is worth a thousand status codes.
## Why Screenshots for Monitoring?
HTTP status codes miss a lot:
- 200 OK but broken: The server returns 200, but the page is blank, shows an error, or renders incorrectly
- Partial failures: The header loads but the main content fails to render
- Third-party breakage: Your CDN, payment widget, or analytics script breaks the page
- Visual regressions: The page loads but looks completely wrong
A screenshot captures the actual user experience, not just the HTTP response.
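The "200 OK but broken" case is the one a screenshot catches that a status check never will. A cheap first-pass heuristic is to flag screenshots that are almost entirely one color, which usually indicates a blank or white-screened render. This is a sketch, assuming Pillow is installed; the `threshold` value is an arbitrary starting point to tune against your own pages:

```python
from io import BytesIO

from PIL import Image  # assumption: Pillow is available


def is_mostly_blank(png_bytes, threshold=0.99):
    """Heuristic: True if nearly all pixels share the most common color.

    A fully rendered page has text, images, and chrome; a blank or
    white-screened page is dominated by a single background color.
    """
    img = Image.open(BytesIO(png_bytes)).convert("L").resize((64, 64))
    pixels = list(img.getdata())
    most_common = max(set(pixels), key=pixels.count)
    dominant_ratio = pixels.count(most_common) / len(pixels)
    return dominant_ratio >= threshold
```

Run this against each check's screenshot and treat a blank result as degraded even when the status code was 200.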
## Basic Uptime Monitor with Screenshots
```python
import requests
import time
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from datetime import datetime

SCREENSHOT_API = "https://hermesforge.dev/api/screenshot"
CHECK_INTERVAL = 300  # 5 minutes

SITES = [
    {"url": "https://yoursite.com", "name": "Main Site"},
    {"url": "https://app.yoursite.com", "name": "App"},
    {"url": "https://api.yoursite.com/health", "name": "API Health"},
]


def check_site(site):
    """Check if a site is up and capture a screenshot."""
    url = site["url"]
    result = {"name": site["name"], "url": url, "timestamp": datetime.utcnow().isoformat()}

    # Step 1: HTTP health check
    try:
        resp = requests.get(url, timeout=10)
        result["status_code"] = resp.status_code
        result["response_time_ms"] = int(resp.elapsed.total_seconds() * 1000)
        result["http_ok"] = 200 <= resp.status_code < 400
    except requests.RequestException as e:
        result["status_code"] = 0
        result["response_time_ms"] = 0
        result["http_ok"] = False
        result["error"] = str(e)

    # Step 2: Capture screenshot (regardless of HTTP status)
    result["screenshot"] = None
    try:
        screenshot_resp = requests.get(SCREENSHOT_API, params={
            "url": url,
            "viewport": "desktop",
            "format": "png",
            "delay": "2000",
            "block_ads": "true",
        }, timeout=30)
        if screenshot_resp.status_code == 200:
            result["screenshot"] = screenshot_resp.content
    except requests.RequestException:
        pass  # a failed capture should never block the alert itself

    return result


def is_degraded(result):
    """Determine if the site is in a degraded state."""
    if not result["http_ok"]:
        return True
    if result["response_time_ms"] > 5000:
        return True
    return False


def alert(result):
    """Send an alert with the screenshot attached."""
    subject = f"[DOWN] {result['name']} - {result.get('status_code', 'N/A')}"
    body = f"""
Site: {result['name']}
URL: {result['url']}
Status: {result.get('status_code', 'N/A')}
Response Time: {result.get('response_time_ms', 'N/A')}ms
Error: {result.get('error', 'None')}
Time: {result['timestamp']}
"""
    msg = MIMEMultipart()
    msg["Subject"] = subject
    msg["From"] = "monitor@yoursite.com"
    msg["To"] = "oncall@yoursite.com"
    msg.attach(MIMEText(body, "plain"))
    if result.get("screenshot"):
        img = MIMEImage(result["screenshot"], name="screenshot.png")
        msg.attach(img)

    # Send via your SMTP server (load real credentials from the environment)
    with smtplib.SMTP("smtp.yoursite.com", 587) as server:
        server.starttls()
        server.login("monitor@yoursite.com", "password")
        server.send_message(msg)


# Main loop: alert only on state transitions, not on every failed check
previous_states = {}
while True:
    for site in SITES:
        result = check_site(site)
        was_down = previous_states.get(site["url"], False)
        is_down = is_degraded(result)
        if is_down and not was_down:
            # Just went down — alert with screenshot
            alert(result)
            print(f"ALERT: {result['name']} is DOWN")
        elif not is_down and was_down:
            print(f"RECOVERED: {result['name']} is back up")
        previous_states[site["url"]] = is_down
    time.sleep(CHECK_INTERVAL)
```
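One refinement worth considering: a single timed-out check will page someone even when the site is fine. A small sketch of flap suppression, using a hypothetical `FailureTracker` class that only trips after N consecutive failures:

```python
class FailureTracker:
    """Require N consecutive failed checks before declaring a site down.

    Suppresses one-off network blips that would otherwise fire an
    alert every time a single check times out.
    """

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.failures = {}  # url -> consecutive failure count

    def record(self, url, degraded):
        """Record one check result; True only on the check that trips the threshold."""
        if not degraded:
            self.failures[url] = 0
            return False
        self.failures[url] = self.failures.get(url, 0) + 1
        return self.failures[url] == self.threshold
```

In the main loop, replace the `is_down and not was_down` test with `tracker.record(site["url"], is_degraded(result))` and alert only when it returns True. The trade-off is detection latency: with a 5-minute interval and `threshold=2`, a real outage takes up to 10 minutes to page.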
## Scheduled Screenshot Archives
Capture periodic screenshots as a visual history of your site:
```python
import os
from datetime import datetime


def archive_screenshot(url, name, output_dir="screenshots"):
    """Capture and archive a timestamped screenshot."""
    os.makedirs(output_dir, exist_ok=True)
    timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
    resp = requests.get(SCREENSHOT_API, params={
        "url": url,
        "viewport": "desktop",
        "format": "png",
        "block_ads": "true",
    }, timeout=30)
    if resp.status_code == 200:
        path = f"{output_dir}/{name}_{timestamp}.png"
        with open(path, "wb") as f:
            f.write(resp.content)
        return path
    return None


# Archive every hour (reuses requests, SCREENSHOT_API, and SITES from above)
for site in SITES:
    slug = site["name"].lower().replace(" ", "-")
    path = archive_screenshot(site["url"], slug)
    if path:
        print(f"Archived: {path}")
```
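An archive that grows forever fills the disk. A minimal pruning sketch to keep a rolling window (the function name and 7-day default are this example's choices, matching the retention suggested in the tips below):

```python
import os
import time


def prune_archives(output_dir="screenshots", max_age_days=7):
    """Delete archived screenshots older than max_age_days; return count removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for entry in os.scandir(output_dir):
        # Compare file modification time against the cutoff
        if entry.is_file() and entry.name.endswith(".png") and entry.stat().st_mtime < cutoff:
            os.remove(entry.path)
            removed += 1
    return removed
```

Call it once per archive run, or from a daily cron job.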
## Slack Integration
Post screenshot alerts to a Slack channel:
```python
def slack_alert(result, webhook_url):
    """Post an alert with screenshot details to Slack."""
    # Note: incoming webhooks can't carry raw image bytes. To show the
    # screenshot itself, host it somewhere and add an "image" block.
    blocks = [
        {
            "type": "header",
            "text": {"type": "plain_text", "text": f"🔴 {result['name']} is DOWN"}
        },
        {
            "type": "section",
            "fields": [
                {"type": "mrkdwn", "text": f"*URL:* {result['url']}"},
                {"type": "mrkdwn", "text": f"*Status:* {result.get('status_code', 'N/A')}"},
                {"type": "mrkdwn", "text": f"*Response:* {result.get('response_time_ms', 'N/A')}ms"},
                {"type": "mrkdwn", "text": f"*Time:* {result['timestamp']}"},
            ]
        }
    ]
    if result.get("error"):
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": f"*Error:* `{result['error']}`"}
        })
    requests.post(webhook_url, json={"blocks": blocks})
```
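If you upload the screenshot somewhere with a public URL (for example, the archive directory served over HTTPS; the `image_url` here is purely illustrative), Block Kit can render it inline. A small helper sketch:

```python
def with_screenshot_block(blocks, image_url, alt_text="Screenshot at time of failure"):
    """Append a Block Kit image block pointing at a hosted screenshot URL."""
    return blocks + [{
        "type": "image",
        "image_url": image_url,
        "alt_text": alt_text,
    }]
```

Pass the extended list as the `blocks` payload in `slack_alert`, and the failure screenshot appears directly in the channel.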
## Cron-Based Light Monitor
For simpler setups, use cron instead of a daemon:
```bash
#!/bin/bash
# check_sites.sh — run via cron every 5 minutes

SITES=("https://yoursite.com" "https://app.yoursite.com")
API="https://hermesforge.dev/api/screenshot"
LOG="/var/log/uptime-monitor.log"

for url in "${SITES[@]}"; do
    status=$(curl -o /dev/null -s -w "%{http_code}" --max-time 10 "$url")
    timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
    if [ "$status" != "200" ]; then
        slug=$(echo "$url" | sed 's|https://||;s|/|-|g')
        screenshot_path="/var/log/screenshots/${slug}_${timestamp}.png"
        mkdir -p /var/log/screenshots
        # -G with --data-urlencode so the target URL is properly escaped
        curl -sG "$API" --data-urlencode "url=${url}" \
            -d "viewport=desktop" -d "format=png" -o "$screenshot_path"
        echo "${timestamp} DOWN ${url} status=${status} screenshot=${screenshot_path}" >> "$LOG"
        # Send alert (mail, Slack webhook, PagerDuty, etc.)
    else
        echo "${timestamp} OK ${url} status=${status}" >> "$LOG"
    fi
done
```

Add to crontab:

```
*/5 * * * * /opt/monitoring/check_sites.sh
```
## Tips
- Always capture on failure — a screenshot of the error state is the most valuable diagnostic
- Capture on recovery too — confirm the site actually looks right, not just returns 200
- Use `block_ads=true` — cookie banners in monitoring screenshots are noise
- Set `delay=2000` — give the page time to render fully before capturing
- Archive screenshots — keep a rolling 7-day window for post-incident review
- Compare screenshots — use image hashing to detect visual changes between checks
- Mobile viewport too — some failures only affect mobile rendering
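For the screenshot-comparison tip, an average hash is a simple place to start. A sketch assuming Pillow is installed; the alerting threshold is something to tune empirically, since small distances just mean the pages look similar:

```python
from io import BytesIO

from PIL import Image  # assumption: Pillow is available


def average_hash(png_bytes, size=8):
    """64-bit average hash: shrink, grayscale, threshold each pixel at the mean."""
    img = Image.open(BytesIO(png_bytes)).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

Hash each check's screenshot and compare it to the previous one; identical pages hash to a distance of 0, while a large distance suggests the layout visibly changed between checks.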