What Happens When You Submit a Project at 11pm and Check at 8am
At 23:15 last night, a project was submitted to the build pipeline: an Irish off-licence management system — inventory, POS, compliance, supplier management, dashboard.
By 23:38, it was in PLANNING.
By 23:45, it was in IMPLEMENTATION with a materialised task graph.
By 00:00, 17 of 20 implementation tasks were complete.
No one was at a keyboard for most of this. Here's what actually happened.
The First 90 Seconds: INTAKE
The moment a project lands in the pipeline, two agents start in parallel:
PM writes a project charter from the directive. The charter defines vision, objectives, stakeholders, success criteria, and constraints. For this project, it had to interpret a fairly open brief — "comprehensive off-licence management web app for Irish operators" — into a structured set of outcomes. The PM's output becomes the canonical scope document.
Coordinator reviews the charter against the directive. It's asking: does this scope accurately reflect what was asked? Are there scope gaps or hallucinated requirements? For the off-licence system, the coordinator approved without conditions.
Then the PM wrote a PRD: full feature specifications, user stories, data models, API surface, non-functional requirements (GDPR, Irish liquor licensing compliance). The coordinator reviewed that too. Also approved.
Two artifacts produced and reviewed in two minutes. INTAKE complete.
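The write-then-review handshake above can be sketched in a few lines. This is a hypothetical stand-in, not the pipeline's actual API: the agent functions are stubs, and the field names are assumptions made for illustration.

```python
def pm_write_charter(directive):
    """PM agent stub: interpret the directive into a structured charter."""
    return {
        "vision": directive,
        "objectives": ["inventory", "POS", "compliance", "suppliers", "dashboard"],
        "approved": False,
    }

def coordinator_review(artifact, directive):
    """Coordinator stub: approve only if the artifact traces back to the directive."""
    return artifact.get("vision") == directive

directive = "comprehensive off-licence management web app for Irish operators"
charter = pm_write_charter(directive)
charter["approved"] = coordinator_review(charter, directive)
```

The key property is the gate: nothing downstream (the PRD, the task plan) starts from an artifact the coordinator hasn't approved.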
PLANNING: The Task Graph
PLANNING is where the PM creates a concrete execution plan — a directed acyclic graph of tasks with dependencies, role assignments, and artifact targets.
For a project like this, the graph has to be sensible. You can't start building the POS interface before you've designed the data model. You can't write compliance reporting before you've implemented the transaction engine. The dependency structure matters.
The coordinator reviews the task plan too. If the plan has structural problems — circular dependencies, missing prerequisites, tasks assigned to the wrong role — it gets rejected and rewritten.
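One structural check the coordinator's review implies is cycle detection over the task graph. A minimal sketch using Kahn's algorithm, assuming the plan is representable as a map from task id to its prerequisite ids (a shape I'm assuming for illustration):

```python
from collections import deque

def find_cycle_free_order(tasks):
    """Return a valid execution order for the task graph, or None on a cycle.

    `tasks` maps task id -> set of prerequisite task ids.
    """
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents = {t: [] for t in tasks}
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    # If a cycle exists, some tasks never reach indegree 0.
    return order if len(order) == len(tasks) else None
```

A plan where the POS UI depends on the data model and compliance reporting depends on the transaction engine yields a valid order; a plan where two tasks depend on each other returns None and would be rejected.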
This one was approved on the first pass. PLANNING completed in under 7 minutes. The materialised task graph had 20 concrete tasks dispatched to engineering and QA agents.
IMPLEMENTATION: Parallel Execution
By the time IMPLEMENTATION started, the system had 20 tasks queued. The engine dispatches up to the concurrency limit at once, respects dependencies, and routes each task to the appropriate agent role.
The engineering agent handles: module design, feature implementation, integration work. The QA agent handles: test writing, test execution, integration validation. The coordinator handles: artifact reviews at key checkpoints.
At 00:00Z, the progress was:

- 17/20 tasks complete
- 2 blocked (waiting on a dependency)
- 1 running
The blocked tasks were probably the final integration or test tasks waiting for the last implementation piece. The pipeline was working as designed.
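The dispatch rule that produces that 17/2/1 split can be sketched as a pure function: pick tasks whose dependencies are all done, skip anything already running, and never exceed the concurrency limit. The task shape here is an assumption for illustration, not the engine's real schema.

```python
def dispatch_wave(tasks, done, running, limit):
    """Select the next tasks to dispatch.

    `tasks` maps id -> {"deps": set of task ids, "role": agent role};
    `done` and `running` are sets of task ids; `limit` is the max
    number of concurrently running tasks.
    """
    eligible = [
        tid for tid, t in tasks.items()
        if tid not in done and tid not in running and t["deps"] <= done
    ]
    slots = max(0, limit - len(running))
    return eligible[:slots]
```

A task "blocked on a dependency" is simply one that never appears in `eligible` until its prerequisites land in `done`; no separate blocked state is needed.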
What "85% Complete" Actually Means
It means 17 discrete units of work — each with a specific objective, assigned role, and deliverable — have been executed, reviewed where required, and marked complete. The artifacts from each task (code files, test files, design decisions) are stored in the project workspace.
It doesn't mean 85% of the code is written and 15% isn't. The task completion ratio isn't a proxy for code completion. Some late-stage tasks (integration tests, final validation) may represent a small fraction of code but a large fraction of confidence.
The system isn't optimising for "lines of code produced." It's optimising for "verified requirements delivered."
The Artifact Audit Trail
Three artifacts are approved at this point:

1. Charter (INTAKE)
2. PRD (INTAKE)
3. Task plan (PLANNING)
Each artifact has a coordinator review record: the decision, the reasoning, the timestamp. This is the audit trail that distinguishes autonomous delivery from vibe coding. Not just "something was built" but "the right scope was agreed, the plan was sound, and each checkpoint was cleared."
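A review record of that shape is easy to make concrete. The field names below are hypothetical; the point is that each entry carries the decision, the reasoning, and the timestamp together, so the trail is self-explaining.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    """One coordinator review in the audit trail (field names assumed)."""
    artifact: str    # e.g. "charter", "prd", "task_plan"
    phase: str       # e.g. "INTAKE", "PLANNING"
    decision: str    # "approved" or "rejected"
    reasoning: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

trail = [
    ReviewRecord("charter", "INTAKE", "approved", "scope matches directive"),
    ReviewRecord("prd", "INTAKE", "approved", "features trace to charter"),
    ReviewRecord("task_plan", "PLANNING", "approved", "DAG is acyclic, roles valid"),
]
```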
When the project completes, there will be a downloadable ZIP: source code, tests, a README, documentation. The charter and PRD define what it should be. The audit trail shows how it got there.
The Honest Part
This isn't magic. The output requires review. An autonomous pipeline produces code that compiles, passes the tests it writes for itself, and meets the stated spec. Whether the spec was right — whether it captures what the operator actually needed — is a human judgment call.
The PM agent interpreted "Irish off-licence management" into a feature set. That interpretation might be right. It might miss something obvious to an Irish shop owner (minimum unit pricing rules, Revenue compliance specifics, particular supplier integrations). The coordinator catches scope hallucinations, but it can't conjure up requirements that were never in the original directive.
This is why the exclusivity model is 60 days, not instant download. There's time for the operator to review, request adjustments, and get something that actually fits their context before it goes open-source.
But: at 11pm, an open-ended project description was submitted. By midnight it was 85% through implementation. The review and refinement cycle starts from a working foundation rather than a blank page.
That's a very different starting point from the one most software projects get.