Accessibility Testing Automation: How to Build It Into Your Dev Process
Created by Agency Pizza Team


Automated a11y testing catches issues before users do — but only about 30% of them. Here's what tools to use, what they miss, and how to set up a pipeline that actually works.

#Web development #Design #Technology


Accessibility testing that happens once before launch isn't accessibility testing — it's a one-time audit that's outdated the moment you ship new code.

If your team is deploying weekly, you need checks that run automatically with every build. Here's how to set that up without it consuming a disproportionate share of your engineering time.

What automation catches — and what it misses

This distinction matters because teams often check off "accessibility" after running an automated scan and miss the failures that actually hurt real users.

The WebAIM Million annual report — which audits the top one million websites — found that 95.9% of home pages had detectable WCAG failures. More telling: automated tools identified only about 30–40% of those failures. The rest require human judgment.

Automated tools reliably catch:

  • Missing or empty alt attributes on images
  • Color contrast ratios below WCAG thresholds
  • Missing form labels
  • Heading hierarchy violations (e.g., jumping from an H1 to an H4)
  • ARIA attribute misuse
  • Keyboard traps in simple single-page flows
  • Missing language attributes on <html> elements
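These checks are automatable because they reduce to deterministic rules. The contrast check, for instance, is pure arithmetic over the two colors' relative luminance. A minimal sketch of the WCAG 2.x formula (function names here are illustrative, not any library's API):

```javascript
// Linearize an 8-bit sRGB channel per the WCAG relative-luminance formula.
function linearize(channel) {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color, 0-255 per channel.
function luminance([r, g, b]) {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors, ranging from 1:1 to 21:1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible contrast.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // prints 21.0
// WCAG AA requires at least 4.5:1 for normal-size text.
```

A tool only needs the computed foreground and background colors to run this check, which is why it's reliable. Judging whether the *content* at that contrast is readable in context still isn't automatable.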

Automated tools reliably miss:

  • Whether alt text is meaningful — they check existence, not quality. "image123.jpg" passes the check.
  • Logical reading order for screen reader users
  • Dynamic content updates that assistive technology doesn't announce
  • Custom components that technically pass attribute rules but are confusing in practice
  • Whether the overall page flow makes sense when navigation landmarks are ignored

The WCAG 2.1 guidelines define three conformance levels: A, AA, and AAA. Most legal standards (ADA, EN 301 549 in Europe) require AA compliance. Automated tools can verify roughly half of AA criteria reliably. The other half need human testing.

The tooling stack worth knowing

axe-core — The de facto standard for automated accessibility testing. Open source, maintained by Deque, integrates with Cypress, Playwright, Selenium, and Jest. The axe DevTools browser extension is the right starting point for developers doing spot checks during development. Zero false positives is axe's explicit design goal — every violation it flags is a real issue.

Lighthouse — Built into Chrome DevTools and available as a CLI. Provides an accessibility score alongside performance and SEO. Good for quick audits, not comprehensive enough as a sole testing tool. The score is useful as a directional signal, not as a pass/fail gate.
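For scripted audits, the Lighthouse CLI can restrict a run to the accessibility category. A sketch, with a placeholder URL to replace with one of your own pages:

```shell
# Run only the accessibility audit and write a JSON report to disk.
npx lighthouse https://example.com \
  --only-categories=accessibility \
  --output=json --output-path=./lighthouse-a11y.json
```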

Pa11y — Command-line tool designed for CI integration. Runs against a URL, returns a list of failures, exits with an error code if violations are found. Low setup cost, integrates with most CI systems.

WAVE — Browser extension from WebAIM. Overlays visual indicators directly on the page — useful for designers and developers who want to see issues in spatial context rather than in a list.

Playwright with axe-core — For teams already using Playwright for end-to-end testing, adding axe checks to existing test flows is a low-overhead way to get accessibility testing into the critical user paths.
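A sketch of that integration, assuming the @axe-core/playwright package alongside the Playwright test runner (the URL and test name are placeholders):

```javascript
// Playwright test that runs axe-core against a page and fails on violations.
// Assumes: npm install -D @playwright/test @axe-core/playwright
const { test, expect } = require('@playwright/test');
const AxeBuilder = require('@axe-core/playwright').default;

test('checkout page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG A and AA rules
    .analyze();
  expect(results.violations).toEqual([]); // any violation fails the test
});
```

Because the check runs inside an existing end-to-end flow, it scans the page in the state a real user reaches it, including content rendered after interaction.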

Setting up the pipeline

Local development: Install the axe browser extension. Run it on every component or page you touch before opening a PR. Takes under a minute, catches the common issues before they get reviewed into the codebase.

Pull request checks: Add Pa11y or axe to your CI configuration. Set the job to fail if new violations are introduced. The important framing here: you're not trying to fix all existing accessibility debt at once. You're preventing the debt from growing.

A basic GitHub Actions step using pa11y-ci:

```yaml
- name: Accessibility check
  run: npx pa11y-ci --config .pa11yci.json
```
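A matching `.pa11yci.json` might look like the following sketch; the URLs are placeholders for your own critical pages:

```json
{
  "defaults": {
    "standard": "WCAG2AA",
    "timeout": 30000
  },
  "urls": [
    "https://example.com/",
    "https://example.com/signup",
    "https://example.com/checkout"
  ]
}
```

Listing only a handful of high-traffic pages keeps the CI job fast while still catching most regressions, since shared components appear on every page.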

Pre-release audit: Before significant releases, run a Lighthouse pass across your five most important pages. Review the accessibility report alongside the performance report.

Quarterly manual review: Schedule a session using a real screen reader — NVDA on Windows (free), VoiceOver on Mac (built in) — on your critical user flows. Signup, checkout, core product interaction. These sessions surface the issues automation can't find, and they're worth the time.

The practical order of operations

  1. Start with the browser extension during active development — immediate feedback, zero infrastructure cost
  2. Add Pa11y to CI to prevent regression on new code
  3. Write accessibility requirements into your definition of done for new features ("keyboard navigable, screen reader announces state changes")
  4. Run manual screen reader testing before major releases
  5. Track and backlog issues automation can't catch, prioritized by user impact

Accessibility problems found after launch cost significantly more to fix than problems caught during development.
If you're building a product that needs to meet AA compliance — or just wants to be usable by more people — it's worth getting the pipeline right from the start.