axle · reference

Frequently asked questions

Detailed answers to what teams usually ask before adopting axle. For regulation-specific detail, the guides hub goes deeper per jurisdiction and stack. For a billing question not answered here, email asaf@amoss.co.il.

About axle

What is axle?

axle is a continuous accessibility compliance pipeline. It scans every pull request against WCAG 2.1 / 2.2 AA using axe-core 4.11, comments on the PR with the failing rules, proposes source-code fix diffs via Claude Sonnet, and publishes a tamper-evident accessibility statement URL when you're ready to disclose compliance to regulators or customers.

It ships as a GitHub Action, an npm CLI (axle-cli), plugins for Netlify / Cloudflare Pages / Vercel, a WordPress plugin, a Raycast extension, and a web scanner at the homepage. Same engine, different delivery surfaces.

How is axle different from an accessibility overlay widget (accessiBe, UserWay, AudioEye)?

The opposite approach. Overlays inject JavaScript into the served page at runtime that attempts to patch broken HTML with ARIA. Regulators under EAA 2025 / EN 301 549 / ADA evaluate the served HTML, not what a runtime script layers on top — which is why the FTC fined accessiBe $1M in January 2025 for deceptive compliance claims. Overlays also break for users who disable JavaScript or use custom assistive tech stacks.

axle never ships JavaScript to your users. It scans in CI, proposes source-code fixes that a human reviews and merges, and the deployed HTML is genuinely accessible. Full background: Why accessibility overlays don't work.

Who built axle and why?

axle was built by an independent developer (asaf@amoss.co.il) as a practical alternative to the overlay ecosystem and to the manual-only audit firms. EAA 2025 enforcement made continuous compliance a real engineering problem — not something you can solve once a year with an audit PDF. axle is the tool the developer wanted to exist.

Compliance and regulations

Does axle make my site EAA 2025 compliant?

axle is a tool for teams seeking compliance. It produces the artefacts regulators look for — automated scan reports as an audit trail, a published accessibility statement with named contact and escalation channel, and evidence of per-PR diligence.

That said: automated checks catch roughly 57% of WCAG issues (per Deque's published research on axe-core coverage). For the remaining 43% — semantic judgements, alt-text quality, heading hierarchy, cognitive-load issues — a human audit is recommended before the first regulator touchpoint. axle is not a certification.

Does it help with US ADA Title III lawsuits?

Yes, in the sense that axle gives you a defensible diligence record. The dominant plaintiff-firm model scans landing pages with automated tools (often axe-core itself) and threatens suit based on the output. A clean axle CI history shows the violations they'd cite never landed on main in the first place. That reframes a suit from “we have no process” to “here's our continuous process and the week this was introduced and caught”. Still not a substitute for consulting an ADA-admitted attorney once you receive a letter.

Does axle support Israeli תקנה 35?

Yes. The Hebrew statement generator at axle-iota.vercel.app/statement produces a הצהרת נגישות aligned with regulation 35(ד) — accessibility coordinator contact, escalation to נציב שוויון, methodology, and date, in proper Hebrew RTL layout. The form runs entirely in your browser; nothing is uploaded.

For compliance officers: the Team plan adds a published verified URL (/s/<id>) that is tamper-evident and timestamped, so the statement can be referenced in disclosure documents and regulator filings without worrying about post-hoc edits.

Does axle replace a human accessibility audit?

No. It replaces the need to pay for a full audit every quarter. axle catches the majority of what costs the most to fix — the machine-detectable regressions that accumulate between audits. You still want a qualified human (IAAP CPACC, DHS Section 508, AnySurfer, or similar) to validate the semantic and experiential aspects once a year or after a major redesign.

Pricing and plans

How much does axle cost?

  • Open — free forever: unlimited scans on one repo, PR comments, public badge, Hebrew statement generator, bring-your-own Anthropic API key for AI-generated fixes.
  • Team — $49/month: hosted AI fixes (no BYO key needed), up to 10 repos, multi-language statement pack, published verified statement URL, trend history across scans.
  • Business — $299/month: unlimited repos, full EU-language statement pack (DE/FR/IT/ES/NL/PT/DA/SV/FI/PL/CS/HU + EN/HE), SLA support, private Slack channel for escalations.

No seat counts on any plan. Annual billing available (approximately 2 months free). Cancel anytime — billing handled through Polar.sh.

Is the free tier really free forever, or a trial?

Really free forever, for one repo. The reasoning: the marginal cost of running axe-core in your own GitHub runner is zero to me. Paid tiers cover hosted AI fixes (Claude API calls add up), multiple repos (support load), and the verified-URL feature (which runs on my infrastructure). If those don't apply, the free tier is the right fit indefinitely.

Do you offer an enterprise / self-hosted tier?

The Business plan covers most enterprise needs at $299/mo. For true self-hosted deployments (air-gapped, VPC-only, no outbound to Anthropic), email asaf@amoss.co.il with the requirement. The GitHub Action itself is already self-hosted-runner-compatible; the piece that needs negotiation is the AI-fix backend.

Deployment and integration

Where does axle run? Is it a SaaS I need to sign up for?

The free tier and CI pipelines run on your own infrastructure — the GitHub Action runs on your GitHub runner, the Netlify plugin runs during your Netlify build, the CLI runs anywhere Node.js runs. No signup required; the axe-core engine is open source.

The hosted service at axle-iota.vercel.app is optional and only involved if you (a) use the web-scan form on the homepage, (b) use the paid hosted AI-fix feature, or (c) publish a verified statement URL.

What's the difference between the GitHub Action and the npm CLI?

Same axe-core 4.11 engine, different delivery surface. The Action plugs into GitHub PR workflows, leaves a sticky comment, and fails the check if violations cross your configured threshold. The CLI (axle-cli) runs anywhere Node.js runs — local dev, GitLab / Jenkins / CircleCI / Bitbucket pipelines, cron jobs, or manual scans during a redesign. Same JSON + markdown output format between them so existing tooling works across both.
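As a sketch, a minimal PR workflow using the Action could look like the fragment below. This is illustrative only — the action reference (`axle/action@v1`) and the input names (`url`, `fail-on`) are placeholder assumptions, not axle's documented interface:

```yaml
# Hypothetical workflow sketch — action name and inputs are placeholders.
name: accessibility
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: axle/action@v1              # placeholder action reference
        with:
          url: https://preview.example.com  # deploy-preview URL to scan
          fail-on: critical,serious         # default severity threshold
```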

What CI systems does axle support?

First-class support for GitHub Actions via the GitHub Action. For everything else (GitLab, Jenkins, CircleCI, Bitbucket Pipelines, Buildkite, Azure Pipelines, TeamCity), use the npm CLI — it runs on any Node 18+ runner and returns the same output format. PR-comment integration is GitHub-only today; other platforms get JSON + markdown reports you can wire to their equivalent comment APIs.
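On a non-GitHub platform, the CLI slots into an ordinary job. The `.gitlab-ci.yml` fragment below is a hedged sketch — the `axle-cli` subcommand and flags shown are assumptions for illustration, not documented options:

```yaml
# Hypothetical .gitlab-ci.yml fragment — CLI flags are placeholders.
a11y-scan:
  image: node:18
  script:
    - npx axle-cli scan https://staging.example.com --format json > axle-report.json
  artifacts:
    paths:
      - axle-report.json   # JSON report, wired to downstream tooling as needed
```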

What hosting platforms have dedicated integrations?

Netlify (@axle/netlify-plugin), Cloudflare Pages (@axle/cloudflare-plugin), and Vercel (@axle/vercel-plugin) are published on npm with build-step hooks. WordPress has a plugin on WordPress.org that runs client-side scans inside the admin. Raycast has an extension for ad-hoc scans from the command bar. A Chrome extension for manual page auditing is in submission review.

Does axle work with React / Next.js / Vue / Svelte / my framework?

Yes — axle scans the rendered HTML, not framework source. It works with any stack that serves HTML: React, Vue, Svelte, Solid, Angular, Next.js, Remix, Nuxt, Astro, SvelteKit, Rails, Django, Laravel, Phoenix, Go templates, static HTML. Framework-specific guides: React, Next.js, Shopify, WordPress.

Technical details

What scanning engine does axle use?

axe-core 4.11 (open source, Deque Systems). It's the same engine used by plaintiff-firm scanners, Google Lighthouse accessibility audits, Microsoft Accessibility Insights, and Deque's own commercial offering. Using the same engine as the scanners that detect violations is a deliberate choice — your CI sees what they see.

What percentage of WCAG violations do automated scans catch?

Roughly 57% of WCAG 2.1 AA issues are machine-detectable, per Deque's published methodology. The remaining ~43% require human judgement — is this alt text meaningful? does this heading structure make sense semantically? is this error message understandable to a screen reader user? axle catches the 57%, and the CI loop prevents regression while you fix the human-judgement piece at audit time.

How noisy are the scan results? Will my team drown in false positives?

axe-core is specifically designed to minimise false positives — its ethos is “zero false positives” even at the cost of some coverage. In practice, violations at critical and serious severity are almost always real and actionable. Moderate and minor sometimes highlight edge cases; axle's default threshold fails PRs only on critical/serious and reports moderate/minor as warnings.
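The gating policy is simple enough to sketch. axe-core really does tag each violation with an `impact` of `"minor"`, `"moderate"`, `"serious"`, or `"critical"`; the function below is an illustration of the threshold logic described above, not axle's actual source:

```javascript
// Sketch of a severity-threshold gate over axe-core violations.
// `failOn` mirrors axle's described default of critical/serious;
// everything else is downgraded to a warning.
function gate(violations, failOn = ["critical", "serious"]) {
  const failures = violations.filter((v) => failOn.includes(v.impact));
  const warnings = violations.filter((v) => !failOn.includes(v.impact));
  return { pass: failures.length === 0, failures, warnings };
}

const result = gate([
  { id: "image-alt", impact: "critical" },
  { id: "region", impact: "moderate" },
]);
console.log(result.pass); // false: one critical violation fails the PR
```

Tightening the threshold is then a one-line config change rather than a workflow rewrite.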

Does axle scan dynamic / single-page app content?

Yes — scans run in a headless browser (Playwright) that fully renders client-side content before evaluation. For SPAs with multiple routes, the Action config accepts a list of URLs and scans each. For authenticated routes, a pre-scan auth step can be configured. Progressive enhancement and lazy-loaded content are handled via explicit wait-for conditions.
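A "wait-for condition" in this context is just a polling loop: keep checking a predicate until it holds or a timeout elapses, so lazy-loaded content is present before evaluation. A minimal sketch, with `waitFor` as a hypothetical helper name rather than axle's actual API:

```javascript
// Hypothetical wait-for helper: poll a predicate until it returns
// true or the timeout elapses, then let the scan proceed.
async function waitFor(predicate, { timeoutMs = 5000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await predicate()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`waitFor: condition not met within ${timeoutMs}ms`);
}

// Example: wait until a (simulated) lazy-loaded widget has rendered.
let rendered = false;
setTimeout(() => { rendered = true; }, 50);
waitFor(() => rendered).then((ok) => console.log(ok)); // true
```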

How do the Claude-generated fixes work, and are they safe to merge blind?

When a violation is detected, axle feeds the offending HTML and the axe-core rule metadata to Claude Sonnet, which returns a unified diff against the source file. The diff appears as a suggestion in the PR comment; a human reviews and merges (or edits) it. axle never commits autonomously.

Quality is high for mechanical fixes (missing alt attributes, missing labels, contrast adjustments, ARIA role corrections). For semantic issues (“is this heading structure appropriate?”) Claude sometimes over-suggests; treat those as proposals, not mandates.
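For flavour, a mechanical fix of this kind might arrive as a suggestion shaped like the unified diff below — the file path and alt text are invented for illustration:

```diff
--- a/src/components/Hero.jsx
+++ b/src/components/Hero.jsx
@@
-      <img src="/hero.png" />
+      <img src="/hero.png" alt="Team reviewing accessibility scan results" />
```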

Statements and disclosure

What's in the accessibility statement generator?

All elements regulators look for: conformance level declaration, list of non-accessible content with justification, named accessibility contact, escalation procedure per jurisdiction (נציב שוויון / ARCOM / NDA / ACM / SPF-FOD / etc.), assessment methodology, preparation date. The generator runs locally in your browser — no form data is uploaded. Output is HTML you can paste directly into your CMS / Shopify Page / WordPress page / Next.js route.

What's a “verified statement URL”?

On paid plans, the statement can be published at axle-iota.vercel.app/s/<id> with a cryptographic hash of the content and a timestamp. When regulators ask for a statement URL in disclosure documents, that verified URL is tamper-evident — if the statement is modified later, the hash stops matching. This is meaningful because regulators increasingly treat the statement itself as a legal document with a specific version-in-force at a specific date.

What languages does the statement generator support?

English and Hebrew on the free tier. Paid tiers add German, French, Italian, Spanish, Dutch, Portuguese, Danish, Swedish, Finnish, Polish, Czech, and Hungarian — covering the 12 largest EU-language surfaces plus English and Hebrew. Each language uses native regulator references (e.g. ARCOM in French, AgID in Italian, OAW in Spanish).

Privacy and security

What data does axle collect?

On the free tier running in your CI: none. Scans execute in your own runner and never phone home. The hosted web scanner records the target URL and the axe-core result for the displayed report (stored in Upstash Redis for the session); that data is deleted after 30 days unless you save a permalink. The statement generator runs entirely client-side — form content never leaves your browser. When you sign up for a paid plan, Polar.sh collects billing info; axle stores only your email, plan, and the repos you've configured.

Does axle see my source code?

Only on paid plans that use hosted AI fixes: when you opt in, the offending HTML snippet and the source file section containing the offending markup are sent to Anthropic's API (Claude Sonnet) to generate the diff. Anthropic's API does not train on this data per their enterprise API terms. On the free tier with BYO-key, the same flow runs but through your own Anthropic account. Zero-retention mode (Business plan) adds an explicit pass-through flag to ensure prompts aren't logged.

Is axle GDPR compliant?

Yes. Data processing happens in the EU (Upstash Frankfurt, Vercel Frankfurt / Dublin). A DPA (Data Processing Agreement) is available on request for paid plans. Personal data is limited to the email address on the account and any emails captured through the lead form on scan results (which users opt into explicitly).

Still have a question?

Email asaf@amoss.co.il. I read every message and usually reply within a day. For compliance or legal questions specific to your jurisdiction, I'll still recommend consulting a qualified attorney — I write engineering tooling, not defence briefs.