v1.0 — GitHub Action available

Better specs in.
Better code out.

Your AI coding agents build exactly what you ask for. The problem is what you're asking for. Speclint is the quality gate that makes you write specs worth building.

// Lint your specs before agents touch them

completeness_score: 85 → agent_ready: true

No signup required · 5 lints/day free

issue #142
title: "Improve dashboard performance"
body: "The dashboard is slow. Make it faster."
labels: []
acceptance_criteria: null
completeness_score:       0/100
has_measurable_outcome:   0/25
has_testable_criteria:    0/25
has_constraints:          0/20
no_vague_verbs:           0/20
has_definition_of_done:   0/10

// how it works

Three steps. Zero guesswork.

01

A spec lands on GitHub

Every GitHub issue is a spec — it defines what an agent should build, how to verify it, and where to stop. Before any agent touches it, the speclint-action fires automatically on issues.opened.

on:
  issues:
    types: [opened]
02

Speclint scores the spec

The issue body is evaluated across 5 dimensions. Each dimension maps to a real agent failure mode. The result is a completeness_score from 0–100.

{
  "completeness_score": 82,
  "agent_ready": false,
  "missing": ["has_definition_of_done"]
}
03

Gate or label, you decide

Below your threshold? Speclint comments with what's missing. Edit the issue → it re-lints automatically on issues.edited. Above it? Label it agent_ready: true and let Cursor, Codex, or Claude Code run. Set your own threshold in the action config (default: 80).

if score >= threshold:  # default: 80
  label("agent_ready")
else:
  comment("missing: ...")
  # re-lints on issues.edited

// scoring rubric

Five dimensions. 100 points.

What separates a GitHub issue from an agent-ready specification

“The distance between Level 3 and Level 4 is the quality of the spec, not the quality of the model.”

has_measurable_outcome · 25 pts
Measurable Outcome

Problem contains an observable, quantifiable outcome

examples
✗ "The login is slow"
✓ "Login P95 < 200ms under 1k concurrent users"

has_testable_criteria · 25 pts
Testable Criteria

≥2 acceptance criteria with action verbs

examples
✗ "Works correctly on all browsers"
✓ "Loads in Chrome 120+, Firefox 122+, passes axe-core audit"

has_constraints · 20 pts
Constraints Present

Tags, tech assumptions, or explicit scope limits

examples
✗ "Add a filter to the table"
✓ "Filter by status. No backend changes. Uses existing FilterBar component."

no_vague_verbs · 20 pts
No Vague Verbs

Title isn't "improve X" or "fix Y" with no specificity

examples
✗ "Improve user experience"
✓ "Reduce checkout form from 6 fields to 3 fields"

has_definition_of_done · 10 pts
Definition of Done

AC mentions specific state, value, or threshold

examples
✗ "Feature is complete when tests pass"
✓ "PR merged, Lighthouse perf ≥ 95, Sentry error rate 0%"
Codebase-aware scoring
Pro / Team

Pass codebase_context to get ACs that reference your actual stack — not generic patterns.

without context
AC: "Use a caching layer"
Generic. Could mean anything.
with codebase_context
AC: "Use Redis via the existing CacheService class, not a new caching layer"
Specific. Agent can act on this.
completeness_score ≥ 80
Agent-ready threshold. Issues below 80 get a structured comment listing exactly what's missing.
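As a rough illustration, the rubric reads as a weighted checklist. The weights below match the published rubric; the boolean heuristics are simplified stand-ins, not Speclint's actual checks.

```python
# Toy sketch of the five-dimension rubric. Weights are from the
# rubric above; each heuristic is a crude stand-in for the real check.
import re

WEIGHTS = {
    "has_measurable_outcome": 25,
    "has_testable_criteria": 25,
    "has_constraints": 20,
    "no_vague_verbs": 20,
    "has_definition_of_done": 10,
}

VAGUE_VERBS = ("improve", "fix", "enhance", "optimize")

def score_spec(title: str, body: str, criteria: list[str]) -> dict:
    checks = {
        # a number anywhere in the body → crude proxy for "measurable"
        "has_measurable_outcome": bool(re.search(r"\d", body)),
        # the rubric requires at least 2 acceptance criteria
        "has_testable_criteria": len(criteria) >= 2,
        # crude proxy: an explicit scope limit ("no backend changes", "only ...")
        "has_constraints": "no " in body.lower() or "only" in body.lower(),
        # title doesn't open with a vague verb
        "no_vague_verbs": not title.lower().startswith(VAGUE_VERBS),
        # crude proxy: a threshold or end-state in the criteria
        "has_definition_of_done": any(
            re.search(r"\d|merged|pass", c) for c in criteria
        ),
    }
    total = sum(WEIGHTS[k] for k, ok in checks.items() if ok)
    return {
        "completeness_score": total,
        "agent_ready": total >= 80,  # action default threshold
        "breakdown": checks,
    }
```

Even under these stand-in heuristics, the empty "make it faster" issue from the top of the page lands at 0/100.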

// what happens when specs fail

The remediation loop.

Yes, Speclint will block bad specs. That's the point. A 2-minute edit now saves a 2-hour wrong implementation later.

01

Spec scores low

Speclint posts a structured comment listing exactly what's missing and a concrete suggestion for what to add. No ambiguity — it tells you the fix, not just the problem.

comment: "Missing: has_definition_of_done
suggestion: Add which report types,
  max rows, and file format accepted"
02

Dev edits the issue

The fix is usually one paragraph. Add the missing outcome, tighten the ACs, add constraints. It's spec work, not code work.

# Edit the GitHub issue body
# Add the missing context
# Usually < 5 minutes
03

Auto re-lint

The action fires on issues.edited too — your fix is scored automatically. No manual re-run, no waiting for CI.

on:
  issues:
    types: [opened, edited]  # ← re-lints on edit
04

Spec passes

Issue gets labeled agent_ready: true and enters the agent queue. Total time: ~2 minutes.

label("agent_ready: true")
// Cursor, Codex, Claude Code
// can now pick it up
loop until agent_ready: true
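The four steps above amount to a simple loop. A sketch, with `lint()` and `edit()` as stand-ins for the action run and the human edit:

```python
# Sketch of the remediation loop: score, comment, edit, re-score
# until the spec clears the threshold. lint() and edit() are
# placeholders for the action run and the dev's issue edit.
THRESHOLD = 80  # action default

def remediation_loop(spec: str, lint, edit) -> str:
    """Re-lint after each edit until the spec is agent-ready."""
    result = lint(spec)
    while result["completeness_score"] < THRESHOLD:
        # Speclint comments with what's missing; a dev edits the issue
        spec = edit(spec, result["missing"])
        result = lint(spec)  # fires again on issues.edited
    return spec  # now labeled agent_ready
```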
coming soon
AI-assisted rewrite
Speclint will offer to fix the spec for you — not just flag it. One click to a passing spec.

// dogfooding in production

We use Speclint to build Speclint.

Customer Zero — real data from our own pipeline. Every ticket we write goes through the linter. This is what that looks like.

Before
Spec: "SL-026: Add persona scoring to /api/lint"

completeness_score: 50
agent_ready:        false ✗
Missing:            has_measurable_outcome

No measurable outcome. The spec says WHAT to build but not WHY it matters.

After
Spec: "SL-026: Reduce wasted agent token spend by 30%
       through persona-aware scoring"

completeness_score: 75
agent_ready:        true ✓
Gained:             has_measurable_outcome

One rewrite. Two minutes. The spec now articulates the business outcome, not just the feature. (Our own pipeline gates below the action's default threshold of 80.)

Our orchestration agent — the AI that writes specs and dispatches coding agents — now writes specs differently because it knows they'll be scored. The quality gate didn't just catch bad specs. It changed how specs are written in the first place. That's the product.

“The rewrite forced us to answer: why does this feature matter? That's not a lint rule — that's product thinking. And it takes 2 minutes.”

— David Nielsen, Speclint

// the agent pipeline problem

The spec is the bottleneck.

Agent rework isn't a model problem — it's a spec problem. A quality gate before the agent changes everything downstream.

Without Speclint: 4+ hours / feature

  1. Issue filed · 0 min
  2. Agent picks it up · 5 min
  3. Builds wrong thing · 2 hrs
  4. Rework & rewrite · 4 hrs
  5. Agent rebuilds · 4+ hrs

With Speclint: ~15 minutes / feature

  1. Issue filed · 0 min
  2. Speclint scores it · 2 sec
  3. Dev adds context · 2 min
  4. Agent builds right thing · 15 min

“The model isn't the bottleneck. The spec is. We spent $1K/day on AI agents before we realized $29/mo on spec quality would cut our rework in half.”

— David Nielsen, Speclint

// install in 2 minutes

Drop it in your workflow.
It runs on every issue.

The GitHub Action fires automatically on issues.opened and issues.edited. Fix the spec, get instant feedback — no manual re-run needed.

  • Scores every spec in < 2s using the /api/lint endpoint
  • Posts what's missing from the spec as a GitHub comment
  • Auto re-lints on issue edits — fix the spec, get instant feedback
  • Labels passing issues with agent_ready
  • Optionally blocks merging with fail-on-low-score
  • Works with Cursor, Codex, Claude Code — any agent
// or run from terminal
npx speclint lint --issue 142
.github/workflows/speclint.yml
name: Speclint

on:
  issues:
    types: [opened, edited]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: speclint-ai/speclint-action@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          speclint-api-key: ${{ secrets.SPECLINT_API_KEY }}
          min-score: 80          # block below this threshold
          fail-on-low-score: true

// pricing

Simple pricing. No seat games.

You're spending $1,000/day on AI coding agents. Are you spending $0 making sure they build the right thing?

Your GitHub issues already contain specs. Speclint tells you if they're good enough.

$0
Free
Kick the tires. No commitment.
  • 5 specs per day
  • All 5 scoring dimensions
  • JSON response via /api/lint
  • No API key required — or get a free key to track usage
  • Community support
Most Popular
$29/mo
Solo
For devs running agents daily.
  • Unlimited lints
  • 25 issues per request
  • codebase_context scoring
  • agent_ready label automation
  • Priority support
$79/mo
Team
For firms where bad specs cost real money.
  • Unlimited lints
  • 50 issues per request
  • Dependency mapping (coming soon)
  • Team analytics dashboard (coming soon)
  • SLA + dedicated support
$0
to start today
≤ 2s
per lint response
100%
cancel anytime

// built for the agent era

The spec quality layer your agent pipeline is missing.

AI coding agents are only as good as what you give them. The model isn't the bottleneck — the spec is. Speclint sits at the front of your pipeline, before any token is spent, to verify the input is worth running.

llms.txt compatible
Speclint publishes a machine-readable API contract at /llms.txt for agent discovery
OpenAPI schema at /openapi.yaml
Integrate with any orchestration layer in minutes
MCP server available
Mount Speclint as a tool inside Claude Desktop, Cursor, or any MCP host
Cursor
Codex
Claude Code
Devin
Copilot
POST /api/lint — response
POST https://speclint.ai/api/lint
x-license-key: sk_live_...
Content-Type: application/json

{
  "items": ["Fix mobile Safari login failure — users cannot log in via mobile Safari after deployment"]
}

// Response
{
  "items": [{
    "title": "Fix mobile Safari login failure",
    "problem": "Users cannot log in via mobile Safari after deployment",
    "acceptanceCriteria": [
      "User can log in on Safari iOS 14+",
      "No console errors during auth"
    ],
    "estimate": "S",
    "priority": "HIGH — blocks core functionality",
    "tags": ["bug", "critical", "mobile"],
    "completeness_score": 75,
    "agent_ready": true,
    "breakdown": {
      "has_measurable_outcome": false,
      "has_testable_criteria": true,
      "has_constraints": true,
      "no_vague_verbs": true,
      "has_definition_of_done": true
    }
  }],
  "summary": { "average_score": 75, "agent_ready_count": 1, "total_count": 1 }
}
suggestion — Speclint tells you exactly what to add. Not just a score, a fix.
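The request and response above translate into a short client. The endpoint, `x-license-key` header, and response fields come from that example; the stdlib-only implementation and the `gate()` helper are illustrative, not an official SDK.

```python
# Minimal /api/lint client based on the request/response shown above.
# The license key is a placeholder; gate() is an illustrative helper.
import json
import urllib.request

API_URL = "https://speclint.ai/api/lint"

def lint_items(items: list[str], license_key: str) -> dict:
    """POST raw spec strings to /api/lint and return the parsed response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"items": items}).encode(),
        headers={
            "x-license-key": license_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def gate(result: dict, threshold: int = 80) -> list[str]:
    """Return titles of specs below the threshold (they need a rewrite)."""
    return [
        item["title"]
        for item in result["items"]
        if item["completeness_score"] < threshold
    ]
```

Feeding the documented response into `gate()` with the default threshold of 80 flags the 75-point spec; lowering the threshold to 70 passes it.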