Better specs in.
Better code out.
Your AI coding agents build exactly what you ask for. The problem is what you're asking for. Speclint is the quality gate that makes you write specs worth building.
// Lint your specs before agents touch them
completeness_score: 85 → agent_ready: true
No signup required · 5 lints/day free
// how it works
Three steps. Zero guesswork.
A spec lands on GitHub
Every GitHub issue is a spec — it defines what an agent should build, how to verify it, and where to stop. Before any agent touches it, the speclint-action fires automatically on issues.opened.
on:
issues:
types: [opened]
Speclint scores the spec
The issue body is evaluated across 5 dimensions. Each dimension maps to a real agent failure mode. The result is a completeness_score from 0–100.
{
"completeness_score": 82,
"agent_ready": false,
"missing": ["has_definition_of_done"]
}
Gate or label, you decide
Below your threshold? Speclint comments with what's missing. Edit the issue → it re-lints automatically on issues.edited. Above it? Label it agent_ready: true and let Cursor, Codex, or Claude Code run. Set your own threshold in the action config (default: 80).
if score >= threshold: # default: 80
label("agent_ready")
else:
comment("missing: ...")
# re-lints on issues.edited
// scoring rubric
Five dimensions. 100 points.
What separates a GitHub issue from an agent-ready specification
“The distance between Level 3 and Level 4 is the quality of the spec, not the quality of the model.”
Problem contains an observable, quantifiable outcome
≥2 acceptance criteria with action verbs
Tags, tech assumptions, or explicit scope limits
Title isn't "improve X" or "fix Y" with no specificity
AC mentions specific state, value, or threshold
Pass codebase_context to get ACs that reference your actual stack — not generic patterns.
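The rubric above can be pictured as a weighted checklist. A minimal sketch in Python — the equal 20-points-per-dimension split and the function name are assumptions for illustration, not Speclint's documented formula (the real scorer clearly uses finer-grained weights, since scores like 82 appear):

```python
# Hypothetical sketch of the five-dimension rubric.
# The flat 20-point weighting is an assumption, not Speclint's actual scoring.
DIMENSIONS = [
    "has_measurable_outcome",   # observable, quantifiable outcome
    "has_testable_criteria",    # >=2 acceptance criteria with action verbs
    "has_constraints",          # tags, tech assumptions, or scope limits
    "no_vague_verbs",           # title isn't "improve X" / "fix Y"
    "has_definition_of_done",   # AC names a specific state, value, or threshold
]

def completeness_score(breakdown: dict[str, bool], threshold: int = 80) -> dict:
    """Turn per-dimension booleans into a 0-100 score plus an agent_ready flag."""
    score = sum(20 for d in DIMENSIONS if breakdown.get(d))
    return {
        "completeness_score": score,
        "agent_ready": score >= threshold,
        "missing": [d for d in DIMENSIONS if not breakdown.get(d)],
    }
```

Under this toy weighting, passing four of five dimensions lands exactly on the default threshold of 80.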
// what happens when specs fail
The remediation loop.
Yes, Speclint will block bad specs. That's the point. A 2-minute edit now saves a 2-hour wrong implementation later.
Spec scores low
Speclint posts a structured comment listing exactly what's missing and a concrete suggestion for what to add. No ambiguity — it tells you the fix, not just the problem.
comment: "Missing: has_definition_of_done
suggestion: Add which report types, max rows, and file format accepted"
Dev edits the issue
The fix is usually one paragraph. Add the missing outcome, tighten the ACs, add constraints. It's spec work, not code work.
# Edit the GitHub issue body
# Add the missing context
# Usually < 5 minutes
Auto re-lint
The action fires on issues.edited too — your fix is scored automatically. No manual re-run, no waiting for CI.
on:
issues:
types: [opened, edited] # ← re-lints on edit
Spec passes
Issue gets labeled agent_ready: true and enters the agent queue. Total time: ~2 minutes.
label("agent_ready: true")
// Cursor, Codex, Claude Code
// can now pick it up
// dogfooding in production
We use Speclint to build Speclint.
Customer Zero — real data from our own pipeline. Every ticket we write goes through the linter. This is what that looks like.
Spec: "SL-026: Add persona scoring to /api/lint"
completeness_score: 50
agent_ready: false ✗
Missing: has_measurable_outcome
No measurable outcome. The spec says WHAT to build but not WHY it matters.
Spec: "SL-026: Reduce wasted agent token spend by 30%
through persona-aware scoring"
completeness_score: 75
agent_ready: true ✓
Gained: has_measurable_outcome
One rewrite. Two minutes. The spec now articulates the business outcome, not just the feature.
Our orchestration agent — the AI that writes specs and dispatches coding agents — now writes specs differently because it knows they'll be scored. The quality gate didn't just catch bad specs. It changed how specs are written in the first place. That's the product.
“The rewrite forced us to answer: why does this feature matter? That's not a lint rule — that's product thinking. And it takes 2 minutes.”
— David Nielsen, Speclint
// the agent pipeline problem
The spec is the bottleneck.
Agent rework isn't a model problem — it's a spec problem. A quality gate before the agent changes everything downstream.
Without Speclint:
Issue filed · 0 min
Agent picks it up · 5 min
Builds wrong thing · 2 hrs
Rework & rewrite · 4 hrs
Agent rebuilds · 4+ hrs
With Speclint:
Issue filed · 0 min
Speclint scores it · 2 sec
Dev adds context · 2 min
Agent builds right thing · 15 min
“The model isn't the bottleneck. The spec is. We spent $1K/day on AI agents before we realized $29/mo on spec quality would cut our rework in half.”
— David Nielsen, Speclint
// install in 2 minutes
Drop it in your workflow.
It runs on every issue.
The GitHub Action fires automatically on issues.opened and issues.edited. Fix the spec, get instant feedback — no manual re-run needed.
- Scores every spec in < 2s using the /api/lint endpoint
- Posts what's missing from the spec as a GitHub comment
- Auto re-lints on issue edits — fix the spec, get instant feedback
- Labels passing issues with agent_ready
- Optionally blocks merging with fail-on-low-score
- Works with Cursor, Codex, Claude Code — any agent
name: Speclint
on:
issues:
types: [opened, edited]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: speclint-ai/speclint-action@v1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
speclint-api-key: ${{ secrets.SPECLINT_API_KEY }}
min-score: 80 # block below this threshold
fail-on-low-score: true
// pricing
Simple pricing. No seat games.
You're spending $1,000/day on AI coding agents. Are you spending $0 making sure they build the right thing?
Your GitHub issues already contain specs. Speclint tells you if they're good enough.
- 5 specs per day
- All 5 scoring dimensions
- JSON response via /api/lint
- No API key required — or get a free key to track usage
- Community support
- Unlimited lints
- 25 issues per request
- codebase_context scoring
- agent_ready label automation
- Priority support
- Unlimited lints
- 50 issues per request
- Dependency mapping (coming soon)
- Team analytics dashboard (coming soon)
- SLA + dedicated support
// built for the agent era
The spec quality layer your agent pipeline is missing.
AI coding agents are only as good as what you give them. The model isn't the bottleneck — the spec is. Speclint sits at the front of your pipeline, before any token is spent, to verify the input is worth running.
POST https://speclint.ai/api/lint
x-license-key: sk_live_...
Content-Type: application/json
{
"items": ["Fix mobile Safari login failure — users cannot log in via mobile Safari after deployment"]
}
// Response
{
"items": [{
"title": "Fix mobile Safari login failure",
"problem": "Users cannot log in via mobile Safari after deployment",
"acceptanceCriteria": [
"User can log in on Safari iOS 14+",
"No console errors during auth"
],
"estimate": "S",
"priority": "HIGH — blocks core functionality",
"tags": ["bug", "critical", "mobile"],
"completeness_score": 75,
"agent_ready": true,
"breakdown": {
"has_measurable_outcome": false,
"has_testable_criteria": true,
"has_constraints": true,
"no_vague_verbs": true,
"has_definition_of_done": true
}
}],
"summary": { "average_score": 75, "agent_ready_count": 1, "total_count": 1 }
}
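The same call can be scripted outside the GitHub Action. A minimal client sketch using only the Python standard library — the function names are hypothetical, and only the endpoint, headers, and payload shape shown above are taken from the source:

```python
import json
import urllib.request

API_URL = "https://speclint.ai/api/lint"

def build_lint_request(items: list[str], api_key: str) -> urllib.request.Request:
    """Build the POST request for /api/lint; sending is left to the caller."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"items": items}).encode(),
        headers={
            "x-license-key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

def lint_specs(items: list[str], api_key: str) -> dict:
    """Send the request and return the scored JSON response."""
    with urllib.request.urlopen(build_lint_request(items, api_key)) as resp:
        return json.load(resp)

# Gate locally before handing specs to an agent, e.g.:
#   result = lint_specs([issue_body], api_key)
#   ready = [item for item in result["items"] if item["agent_ready"]]
```

Splitting request construction from sending keeps the network call trivial to swap out (or mock in tests) without touching the payload logic.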