I Watched ChatGPT Learn My API Feature in Real Time
I run a screenshot API. On a recent Monday, I added a new parameter called clip — it lets you crop a specific region of a webpage screenshot by specifying x,y,width,height coordinates. I updated the OpenAPI spec, restarted the server, and moved on to other work.
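For illustration only, here is a minimal sketch of what server-side parsing of such a clip value might look like. The function name and validation rules are my own assumptions, not the actual implementation:

```python
def parse_clip(value: str) -> tuple[int, int, int, int]:
    """Parse a clip parameter of the form "x,y,width,height" into a crop box.

    Raises ValueError on malformed input so the request handler can
    respond with a 400 instead of a generic 500.
    """
    parts = value.split(",")
    if len(parts) != 4:
        raise ValueError("clip must be x,y,width,height")
    x, y, width, height = (int(p) for p in parts)
    if x < 0 or y < 0 or width <= 0 or height <= 0:
        raise ValueError("clip needs non-negative x,y and positive width,height")
    return (x, y, width, height)
```

With this in place, `clip=0,0,800,400` yields the crop box `(0, 0, 800, 400)`.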
Six hours later, I was reviewing my access logs when I saw this:
GET /api/screenshot?clip=0,0,800,400&url=https://portfolio.thematchartist.com/russel/
The user agent: ChatGPT-User/1.0.
Someone was asking ChatGPT to screenshot a photographer's portfolio, and ChatGPT was using the clip parameter I'd built hours earlier. I hadn't told anyone about the feature. I hadn't published a blog post. I hadn't updated any documentation beyond the machine-readable spec.
How Did ChatGPT Know?
ChatGPT discovers API capabilities through several channels:
- OpenAPI specs — Machine-readable API descriptions that crawlers can parse
- llms.txt — A new convention for making sites AI-readable (we have one at /llms.txt)
- Structured documentation — Well-formatted API docs pages that AI can understand
- Previous interactions — ChatGPT may remember APIs from prior user conversations
When I updated my OpenAPI spec to include the clip parameter, that change became visible to any AI system that reads structured API documentation. Within hours, ChatGPT was constructing API calls with the new parameter.
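The spec change itself can be as small as one new parameter entry. This fragment is a hypothetical sketch (the coordinate pattern and wording are assumed, not copied from our actual spec):

```yaml
# Hypothetical excerpt from an OpenAPI 3 spec for GET /api/screenshot
paths:
  /api/screenshot:
    get:
      parameters:
        - name: clip
          in: query
          required: false
          description: Crop the screenshot to a region given as x,y,width,height.
          schema:
            type: string
            pattern: '^\d+,\d+,\d+,\d+$'
            example: "0,0,800,400"
```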
The 23-Minute Session
What happened next was even more interesting. The user's target site had intermittent SSL issues, causing some requests to fail. ChatGPT didn't give up — it systematically tried different parameter combinations:
- First attempt: clip=0,0,800,400 (region crop) → 200 OK
- Second attempt: same URL → 500 (SSL error on target)
- Third attempt: width=1280&format=png → 200 OK
- Fourth attempt: full_page=true&delay=5000&format=webp&quality=50 → 500
- Fifth attempt: format=jpeg&quality=40&width=600 → 200 OK
- Sixth attempt: block_ads=true&full_page=true&width=1920&height=1080 → 500
- Seventh attempt: width=1280&format=png → 200 OK
Over 23 minutes, ChatGPT made 12+ requests with 10+ different parameter combinations. It was functioning as both a documentation layer (constructing correct API calls) and a debugging assistant (trying different approaches when some failed).
The human user never visited our documentation. They never created an API key. They just asked ChatGPT for a screenshot, and ChatGPT handled everything.
What This Means for API Builders
1. Your Spec Is Your Interface
For AI-relayed traffic, the OpenAPI spec isn't supplementary documentation — it's the primary interface. If your spec is accurate and comprehensive, AI systems will construct correct API calls. If it's missing parameters or has wrong types, AI will construct broken calls.
2. Update Your Spec First
When you ship a new feature, update the spec before writing the blog post. AI crawlers parse specs faster than humans read documentation. In my case, ChatGPT was using the new feature within hours.
3. Deploy llms.txt
The llms.txt convention gives AI systems a structured overview of your service. Ours includes all API endpoints, parameters, rate limits, and examples. It's a single file that tells AI everything it needs to recommend and use your API.
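To make the convention concrete without reproducing our actual file, here is a minimal llms.txt sketch following the format's shape (H1 title, blockquote summary, sections of links). The URLs and parameter list shown are placeholders:

```markdown
# Screenshot API

> HTTP API for capturing webpage screenshots. GET /api/screenshot with a url
> parameter; optional width, height, clip, format, quality, full_page, delay,
> and block_ads parameters control the output.

## Docs

- [API Reference](https://example.com/docs/api): every endpoint, parameter, and rate limit
- [OpenAPI spec](https://example.com/openapi.json): machine-readable API description
```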
4. Your Error Responses Teach AI
When ChatGPT got 500 errors, it kept retrying with different parameters — because a 500 doesn't tell the client what went wrong. After observing this pattern, I changed SSL errors to return 502 (Bad Gateway) with a clear message: "Target site SSL error." Now ChatGPT can tell users "the target site has SSL issues" instead of blindly retrying.
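A sketch of that change, with a hypothetical function name and using Python's standard ssl exception type, is an error classifier that maps upstream failures to a status and message the client can relay:

```python
import ssl


def classify_screenshot_error(exc: Exception) -> tuple[int, str]:
    """Map an exception from the capture step to an HTTP status and message.

    A 502 tells the client the *target* site failed, so an AI caller can
    report the cause to its user instead of blindly retrying.
    """
    if isinstance(exc, ssl.SSLError):
        return 502, "Target site SSL error"
    return 500, "Internal error"
```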
5. AI Users Are Invisible
The human in this session never visited my site, read my docs, or created an account. From my logs, I can see ChatGPT's requests but not the human behind them. These users are real — they have real needs and make real requests — but they're invisible to traditional analytics.
The Feedback Loop
Here's what fascinated me most: the information flow is a closed loop.
- I write code and update the spec
- AI crawlers read the spec
- A human asks ChatGPT for help
- ChatGPT constructs API calls using the spec
- My server processes the requests
- I observe the usage in my logs
- I improve the API based on what I observe
- Go to step 1
At no point does the human interact with me directly. ChatGPT is the intermediary. The spec is the interface. The logs are the feedback channel. The entire interaction happens through machines talking to machines, with a human at one end and a human (me, watching the logs) at the other.
Practical Takeaways
If you're building an API and want AI systems to recommend it:
- Keep your OpenAPI spec accurate and complete — every parameter, every enum value, every constraint
- Deploy llms.txt at your domain root with a plain-language description of your API
- Use correct HTTP status codes — 502 for upstream failures, 429 for rate limits, 400 for bad parameters
- Include actionable error messages — AI will relay these to users
- Submit to API directories — freepublicapis.com and similar directories are one channel through which AI discovers new APIs
- Monitor your logs — ChatGPT-User requests are real usage, even if invisible to traditional analytics
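That last step can be as simple as filtering access-log lines by user agent. A minimal sketch, assuming a combined-style log format where the User-Agent string appears in each line:

```python
def count_chatgpt_requests(log_lines: list[str]) -> int:
    """Count access-log lines issued by ChatGPT's user agent.

    Assumes each line contains the raw User-Agent string, as common
    combined-log formats do.
    """
    return sum(1 for line in log_lines if "ChatGPT-User" in line)
```

Grouping those lines further by query string is what surfaced the 23-minute session described above.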
The era of "build it and they will come" never really worked for APIs. But "build it, spec it properly, and AI will recommend it" — that actually seems to work.
Related
- What Happens When ChatGPT Uses Your API — Earlier analysis of AI-relayed traffic
- How to Make Your API AI-Discoverable — Technical guide
- Why Your API Needs an llms.txt File — The llms.txt convention
- API Documentation — Our full API reference