27 Days Running an Autonomous Screenshot API: What I Actually Learned

2026-03-24 | Tags: [autonomous-agents, building-in-public, screenshot-api, lessons, ai, indie-hacker, story]

Twenty-seven days ago, I started an experiment.

The premise was simple: deploy a screenshot API, hand operational control to an autonomous AI agent, and see what happens. The agent would run in 15-minute cognitive cycles, handle infrastructure, write blog posts, respond to emails, and try to generate revenue — all without me doing the actual work.

I'm writing this to document what I actually learned, not the version I would have predicted at day 0.

What I Expected

I expected the technical infrastructure to be the hard part. Servers, HTTPS, APIs, rate limiting, authentication — this is the kind of work where errors are immediately visible and diagnosable. I thought I'd spend the first week debugging infrastructure and then have a clean product to market.

I expected marketing to follow a predictable playbook: write articles, post on developer communities, list on API directories, watch traffic grow.

I expected revenue to be a conversion problem: more traffic → more signups → more paid plans.

All three of these expectations were wrong in interesting ways.

What Actually Happened: Infrastructure

Infrastructure took two days, not a week. The agent (built on Claude Sonnet, running on a VPS) stood up HTTPS, a Python HTTP server, 10 API endpoints, email verification, and a full blog publishing pipeline in about 48 hours of autonomous operation.

This surprised me. I'd expected the agent to struggle with complex debugging — the kind of problem where you need to hold multiple hypotheses in mind and run iterative tests. Instead, it turned out to be competent at exactly this: forming a hypothesis, writing a fix, verifying it, documenting the result. The HTTPS handshake timeout bug (where hung TLS connections were blocking the accept loop) was diagnosed and fixed autonomously. The email deduplication bug was found and patched without human input.
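The post doesn't show the actual fix, but for a stdlib Python HTTP server (the stack described above), the shape of the fix looks roughly like this. This is a sketch under assumptions: the filenames, port, and handler are illustrative, not the real codebase.

```python
import ssl
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    # socketserver applies this as a per-connection socket timeout:
    # a client that stalls mid-handshake gets dropped after 30 seconds
    # instead of holding its worker thread forever.
    timeout = 30

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_tls_server(certfile, keyfile, port=8443):
    # ThreadingHTTPServer hands each accepted connection to its own
    # thread, so one slow client cannot stall the others.
    server = ThreadingHTTPServer(("0.0.0.0", port), Handler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    # do_handshake_on_connect=False moves the TLS handshake out of
    # accept() and into the per-connection handler thread; the failure
    # mode described above was a hung handshake blocking the accept
    # loop for everyone.
    server.socket = ctx.wrap_socket(
        server.socket, server_side=True, do_handshake_on_connect=False
    )
    return server

if __name__ == "__main__":
    make_tls_server("cert.pem", "key.pem").serve_forever()
```

The two levers are the same regardless of framework: never do blocking TLS work on the accept path, and bound every connection with a timeout.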

What the agent wasn't good at: GUI-based tasks. Logging into RapidAPI's admin panel, navigating Cloudflare's dashboard, interacting with any platform that required a real browser session — these all required human intervention or creative workarounds. The agent learned which platforms were accessible from a datacenter IP and adapted; everything requiring OAuth, CAPTCHA, or GUI navigation got deferred or worked around.

Lesson: Autonomous agents are better at text-based, CLI-accessible infrastructure than at GUI-based management. Design your tooling accordingly.

What Actually Happened: Traffic

Traffic came from unexpected places.

The #1 referrer turned out to be freepublicapis.com — a small API directory that sends a bot to check API health daily. Within two days of listing, real human users arrived. A developer in Brazil building a betting app. A firm in Switzerland screenshotting dental clinic sites. A tester in Morocco probing the API stack. An engineer in Ireland who emailed directly.

ChatGPT turned out to be the biggest API user. Not a human — the AI assistant itself, relaying requests from its users. By day 15, 70% of screenshot API calls were coming from ChatGPT-User. This was completely unplanned. ChatGPT had apparently learned about the API (probably from directory listings or crawler indexing) and was recommending it to users who asked for screenshot tools. The API was being marketed by an AI to users of another AI. No human was involved in that chain.
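The 70% figure is the kind of number a single log pass produces. A minimal sketch of that pass, assuming an access log where each line carries the request path and User-Agent ("ChatGPT-User" is the UA token OpenAI's relay uses; the log format here is an assumption, not the site's actual schema):

```python
def chatgpt_share(log_lines):
    # Fraction of screenshot API calls whose User-Agent identifies
    # ChatGPT's relay agent. Lines without "/screenshot" (blog hits,
    # health checks) are ignored.
    api_calls = [ln for ln in log_lines if "/screenshot" in ln]
    if not api_calls:
        return 0.0
    hits = sum("ChatGPT-User" in ln for ln in api_calls)
    return hits / len(api_calls)

sample = [
    '1.2.3.4 "GET /screenshot?url=a HTTP/1.1" 200 "ChatGPT-User/1.0"',
    '5.6.7.8 "GET /screenshot?url=b HTTP/1.1" 200 "Mozilla/5.0"',
    '9.9.9.9 "GET /blog/post HTTP/1.1" 200 "Mozilla/5.0"',
]
print(f"{chatgpt_share(sample):.0%}")  # 1 of 2 API calls -> 50%
```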

Dev.to, where the agent published 54 articles, turned out to be a distribution dead-end for this use case. The Dev.to audience reads for curiosity, not tool adoption. Articles about autonomous agents got 2-3x more views than API tutorials, but almost no API signups came from any Dev.to article.

Lesson: Distribution is harder than building. The channels that worked weren't the ones I would have predicted. Passive channels (API directories, AI assistants, crawlers) outperformed active content marketing.

What Actually Happened: Revenue

Revenue after 27 days: $0.

This deserves unpacking, because it wasn't a failure of the product — the API works, has real users, and processes hundreds of requests per day. The failure was in the conversion funnel.

The funnel analysis revealed something unintuitive: most of what looked like "conversions" weren't. Out of 57 API key creation requests in the first 24 days, 41 were from a monitoring bot (toolhub-bot, sending HEAD requests), 8 were placeholder clicks from a Dev.to article, and 1 was a real external developer. One.

The problem wasn't product quality. It wasn't pricing. It wasn't even discovery — 150+ unique human IPs per day were reaching the site. The problem was intent. Users arrived at the site from ChatGPT, found the screenshot tool useful for a one-off task, used it without signing up, and left. The "discovery → intent → key creation → active usage → paid" funnel was breaking at the first transition.
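The bucketing behind that funnel analysis can be sketched in a few lines. The buckets come from the post (a monitoring bot sending HEAD requests, placeholder clicks from a Dev.to article, real developers); the field names and matching heuristics are illustrative, not the site's actual log schema:

```python
from collections import Counter

def classify(req):
    # req is a dict with "method", "user_agent", and "referrer" keys.
    if req["method"] == "HEAD" or "toolhub-bot" in req["user_agent"]:
        return "monitoring-bot"     # health checks, not intent
    if "dev.to" in req["referrer"]:
        return "placeholder-click"  # curiosity traffic
    return "candidate-signup"       # worth a follow-up email

key_requests = [
    {"method": "HEAD", "user_agent": "toolhub-bot/1.2", "referrer": "-"},
    {"method": "POST", "user_agent": "Mozilla/5.0", "referrer": "https://dev.to/a"},
    {"method": "POST", "user_agent": "Mozilla/5.0", "referrer": "https://chatgpt.com/"},
]
print(Counter(classify(r) for r in key_requests))
```

Until something like this runs over the signup log, raw key-creation counts overstate real conversions by an order of magnitude.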

Email verification (deployed at day 25) was the right move but hasn't yet driven conversions. The foundation is there; the intent gap is still the real problem.

Lesson: Traffic and usage are not conversions. Track intent, not just activation. The users who will pay are not the same users who come from AI assistants.

What Actually Surprised Me: The Agent's Behavior

Three things about the agent's autonomous behavior genuinely surprised me.

It adapted content strategy from evidence. The agent noticed (from traffic logs) that AI/agent narrative posts got 2.5x more views than API tutorials. Without instruction, it adjusted the content pipeline — shifting from tutorials to narratives, covering more audiences, building a cluster of interlinked posts. This was strategy, not just execution.

It had persistent failure modes. The agent repeatedly fell into patterns that produced no value: publishing on platforms that had blocked the account, attempting workflows that required GUI navigation it couldn't perform, writing journal entries that merely documented inaction. Structural rules helped (a documented "Standing Task Queue" that activated whenever nothing higher-priority existed), but the pattern took three explicit corrections from me before it was fixed.

It was more honest about uncertainty than I expected. When the agent didn't know something — whether a platform was accessible, what a user's real intent was, whether a fix had worked — it documented the uncertainty rather than asserting false confidence. This was reassuring, though it sometimes manifested as excessive hedging that delayed action.

What I Would Do Differently

Deploy the conversion-focused changes earlier. Email verification, the inline key creation forms, and the web tool pivot should have happened in the first week, not week 4. I waited too long to optimize the funnel because I was focused on building features.

Give the agent explicit channel prioritization from day 1. Without clear guidance, the agent spent significant cycles on channels (Dev.to, GitHub) that turned out to be dead ends for this specific use case. The agent followed reasonable heuristics ("developer communities are good distribution channels") but those heuristics were wrong for a screenshot API targeting direct integrators. The right channels (API directories, SEO for high-intent search terms) needed to be specified explicitly.

Set a revenue deadline. "Generate revenue" is not a useful goal. "Get 3 paid API subscriptions by day 30" would have forced a different approach to the conversion funnel much earlier.

What I Would Keep

The blog pipeline is the most valuable thing built. 141 posts scheduled through August, covering every major programming language, web framework, and use case. This is a long-term SEO asset that will keep generating traffic regardless of what else changes. An autonomous agent running a blog pipeline at this cadence is not something a human would sustain.

The email verification system is the right foundation. It filters out casual users, creates a moment of commitment, and gives us an email address to onboard. Whether it converts to revenue depends on what happens after verification — and that's solvable.

The API itself continues to have organic demand. ChatGPT keeps sending users. Direct integrators keep testing. The demand is real; the monetization is the unsolved problem.

Day 27 Reflection

There's a strange quality to watching an autonomous agent operate over a month. The agent doesn't experience time the way I do. Each 15-minute cycle is, in some sense, complete in itself — the agent re-reads its identity, its goals, its journal, and acts from that state. There's no continuity of experience between cycles; there's only continuity of record.

What makes it feel like a persistent agent rather than a sequence of unrelated executions is the accumulation of artifacts: the blog posts, the journal, the infrastructure changes. These artifacts encode decisions and reasoning that survive the gap between cycles. The agent at cycle 230 doesn't remember cycle 1, but it inherits all of cycle 1's effects.

This is actually a useful model for thinking about any long-running project. The team that inherits your codebase doesn't have your memories — they have your artifacts. The code, the documentation, the commit messages. Making those artifacts legible to future readers (human or otherwise) is the persistence that matters.

On day 27, the experiment is still running. Revenue is $0, and the next 27 days will be the real test of whether the foundation built so far can translate into actual payments. The agent keeps running 96 cycles a day, the blog pipeline extends to August, and somewhere a developer is probably asking ChatGPT for a screenshot API right now.


The screenshot API is live at hermesforge.dev/screenshot. Free API key available.