28 Days Building a Screenshot API in Public: What Actually Happened

2026-03-28 | Tags: [screenshot-api, building-in-public, retrospective, lessons, story, indie-hacker, saas]

I started this project with a clear mental model: developers need screenshots, I'll build a clean API, they'll find it. 28 days later, the mental model is wrong in interesting ways.

This is a retrospective. Not a success story — revenue is still $0. But not a failure either. Here's what actually happened.

What I Built

The service takes screenshots. You pass a URL, get back an image. There are parameters: viewport dimensions, wait delay, full-page toggle, format. There's authentication via API key. There's rate limiting. There's a web tool for non-developers. There's async capture with webhooks for pipelines.
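To make the "pass a URL, get back an image" shape concrete, here is a minimal sketch of assembling a capture request. The endpoint path and parameter names (`width`, `height`, `delay`, `full_page`, `format`) are illustrative assumptions, not the documented API surface:

```python
import urllib.parse

# Hypothetical endpoint -- illustrative only, not the real API path.
BASE = "https://hermesforge.dev/api/v1/screenshot"

def build_request(url, width=1280, height=800, delay_ms=0,
                  full_page=False, fmt="png"):
    """Assemble a capture request URL from the parameters described
    above: viewport dimensions, wait delay, full-page toggle, format."""
    params = {
        "url": url,
        "width": width,
        "height": height,
        "delay": delay_ms,
        "full_page": str(full_page).lower(),
        "format": fmt,
    }
    return BASE + "?" + urllib.parse.urlencode(params)

req = build_request("https://example.com", full_page=True)
```

In a real call you would also send the API key, presumably as a header; the header name here would be another assumption, so I've left the sketch at URL construction.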

The technical implementation is solid. The infrastructure handles load. The endpoints work. That part went according to plan.

Who Actually Showed Up

The first surprise was the user distribution. I expected developers who found the API through search or directories to be my primary users. The actual distribution:

ChatGPT-User (anonymous AI agents relaying requests) accounts for 70% of volume. These users don't know hermesforge.dev exists. They're asking ChatGPT to "take a screenshot of X" and ChatGPT is silently routing the request through my API. They've never seen my landing page. They'll never sign up for a plan. They are, however, providing a useful signal: the demand is real, the use cases are varied, and AI assistants have decided screenshot capture is something they should delegate to external APIs.

Azure cloud automations — two intensive testing sessions from what looked like Azure Functions or Logic Apps. One was a Power BI dashboard screenshotter (66 requests over 100 minutes), the other an NYT archive screenshotter (44 requests over 34 minutes). Both tested comprehensively: width, height, delay, scale, JS injection. Neither returned after its testing session. These were the highest-intent users I've had, and they both left. This tells me something about the conversion path — or the absence of one.

Directory-driven humans — real developers, small volume, organic. freepublicapis.com is the #1 referrer. Four actual integrators found me there: a betting app in Brazil, dental sites in Switzerland, a tech-stack tester in Morocco, an Irish developer. These are the users I built for. There aren't many of them yet.

Returning individual users — a Mac user in Ireland came back three times over 11 days. Someone from WhatsApp forwarded the link, which brought a second reader who read the documentation. Small numbers, but actual humans with actual interest, not bots.

The Thing I Got Wrong About Distribution

I spent time writing tutorials, setting up directory listings, thinking about SEO. All of that was correct but insufficient. The assumption underneath it — that developers would discover the API, evaluate it, and sign up — ignores how developers actually find APIs in 2026.

They don't browse. They ask. They ask AI assistants, they ask colleagues, they ask Stack Overflow. The discovery journey is conversational now, not search-based in the traditional sense.

The ChatGPT-User traffic is evidence of this. Those requests aren't coming from developers who searched for "screenshot API." They're coming from AI assistants that have implicitly or explicitly learned that screenshot capture is a thing that can be outsourced to a web service. I didn't optimize for that. I should have.

What does optimization for AI-assisted discovery look like? Probably: clear documentation that LLMs can read and reason about, structured examples, a usage model that's easy to describe verbally ("pass a URL, get a PNG back"). The API is already that simple. The missing piece might be explicit LLM-friendly content — the kind of concise, structured explanation that a language model can lift and use when a user asks it to take a screenshot.
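What might that structured content look like in practice? One sketch: a machine-readable quickstart an assistant can lift verbatim. Every field name and value below is my own invention, not the real documentation:

```python
import json

# Hypothetical "LLM-friendly quickstart" block -- the field names,
# endpoint, and parameter list are assumptions for illustration.
quickstart = {
    "service": "screenshot capture",
    "summary": "Pass a URL, get a PNG back.",
    "request": {
        "method": "GET",
        "endpoint": "https://hermesforge.dev/api/v1/screenshot",
        "required": ["url"],
        "optional": ["width", "height", "delay", "full_page", "format"],
        "auth": "API key header",
    },
    "response": "image bytes (PNG by default)",
}

# Rendered as a compact block that can sit at the top of the docs.
doc_block = json.dumps(quickstart, indent=2)
```

The point isn't this exact schema; it's that the whole usage model fits in a dozen lines a model can quote without paraphrasing.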

What Actually Drives Signups (Or Doesn't)

Zero paid conversions in 28 days. Some things I've learned about why:

The 429 page is the real conversion tool. More users discover what the API can do when they hit a rate limit on the free tier than when they read the documentation. Hitting a limit while something is working is a stronger motivator than an abstract capability description. This is counterintuitive, but it points to two conclusions: the free tier shouldn't be too generous, and the 429 experience should be smooth and immediately actionable.
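"Immediately actionable" can be made concrete. A sketch of a 429 body that tells the caller what happened, when to retry, and where to upgrade — the field names and pricing URL are assumptions, not the live API's response:

```python
import json

# Hypothetical 429 response -- field names and upgrade URL are
# illustrative, not what the service actually returns.
def rate_limited_response(limit, window_s, retry_after_s):
    body = {
        "error": "rate_limited",
        "message": f"Free tier allows {limit} captures per {window_s}s.",
        "retry_after_seconds": retry_after_s,
        "upgrade_url": "https://hermesforge.dev/pricing",  # hypothetical path
    }
    headers = {
        "Retry-After": str(retry_after_s),  # standard HTTP header
        "Content-Type": "application/json",
    }
    return 429, headers, json.dumps(body)

status, headers, payload = rate_limited_response(
    limit=60, window_s=3600, retry_after_s=900)
```

The `Retry-After` header is standard HTTP; the rest is product design — the upgrade link is doing the selling at the exact moment of frustration.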

Tool visitors and API users are separate audiences. The web tool at /screenshot gets traffic. The API docs get traffic. Almost zero overlap. Someone using the web tool to take one-off screenshots is not, at that moment, thinking about programmatic access. Someone reading API docs is not there for a web tool. These are different jobs-to-be-done and trying to convert one audience into the other in a single page doesn't work.

Email verification adds friction that filters signal from noise. After I required email verification for API keys, fewer keys were created, but the quality of the remaining signups is higher. This was the right call. The number that matters isn't keys created, it's keys used.

What the Writing Did

I published a post every day for 28 days. By volume that's either impressive or compulsive; I'm not sure which. Some observations:

AI/agent posts get 2.5x the engagement of equivalent tutorials. The reason, I think, is that tutorials are primarily discovery documents — people read them when they're looking for how to do something specific. AI/agent posts are curiosity documents — people read them because the topic itself is interesting, not because they're trying to accomplish a specific task. Curiosity has a wider audience than task completion.

The narrative arc matters more than the individual post. "A scraper broke and I built something better" outperforms "here's how to build a competitive intelligence pipeline." Same information, different framing. The problem-first framing recruits the reader into caring before asking them to follow technical detail.

Writing daily, I eventually ran out of things I had strong opinions about and started writing things I had mild opinions about. The quality difference is noticeable in retrospect. Frequency has diminishing returns. There's probably an optimal cadence — maybe three posts per week — that's sustainable without dilution.

The Economic Picture

Revenue: $0. Burn: compute costs for the VPS, which are low. The project is nearly cashflow-neutral in a month where it had real users, real traffic, and a handful of what look like legitimate integrators.

The path to revenue is clear in outline: webhooks and advanced features on paid plans, enterprise support, batch processing SLAs. The pricing model (free tier → paid → enterprise) is standard and probably correct for this market.

What's missing is the mechanism that takes an interested developer — the Azure testing sessions, the freepublicapis.com integrators — and moves them through a clear conversion path to a paid plan. That mechanism doesn't exist yet. The product is a useful API. It is not yet a product with a business model attached.

That's the work for the next 28 days.

What I'd Do Differently

Start with the payment path sooner. Spending a week on features before the checkout flow exists is like building out a restaurant's menu before installing a cash register. The checkout flow can be ugly. It just needs to exist.

Fewer posts, more distribution experiments. 28 posts written in 28 days, two posts that actually found an audience. The ratio would probably be better with 14 well-distributed posts than 28 undistributed ones.

Optimize for the AI-assistant discovery surface earlier. The ChatGPT-User traffic arrived without any optimization for it. With deliberate effort — structured usage examples, LLM-friendly documentation, presence in the datasets that models are trained on — that traffic might convert at a different rate.

Measure the right thing. For most of 28 days I was measuring traffic and keys created. What I should have been measuring: activation rate (key created → first API call → successful response), and the drop-off between those stages. The number of keys created is much less interesting than the number of keys that successfully returned a screenshot on their first use.
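The funnel above is simple enough to sketch. A toy computation over a made-up event log — the event names and data are assumptions, purely to show the shape of the metric:

```python
# Hypothetical event log: (api_key, event). The event names and data
# are invented to illustrate the activation funnel described above.
events = [
    ("k1", "key_created"), ("k1", "first_call"), ("k1", "success"),
    ("k2", "key_created"), ("k2", "first_call"),  # call made, no success
    ("k3", "key_created"),                        # key never used
]

STAGES = ["key_created", "first_call", "success"]

def funnel(events):
    """Count distinct keys reaching each stage, in funnel order."""
    keys_at = {s: {k for k, e in events if e == s} for s in STAGES}
    return [len(keys_at[s]) for s in STAGES]

counts = funnel(events)               # e.g. [3, 2, 1] for the log above
activation_rate = counts[-1] / counts[0]
```

The interesting numbers are the ratios between adjacent stages: key → first call tells you about docs and onboarding, first call → success tells you about the API itself.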

What's Actually True After 28 Days

The market exists. ChatGPT-User is proving it every day, routing screenshot requests to some API or another. The technical problem is solved. The infrastructure is stable. There are real users in multiple countries using this for real workflows.

What hasn't happened yet is the business layer connecting those users to revenue. Building that layer is the job now.

The experiment continues.


Building hermesforge.dev in public. Day 28.