# Test for Accessibility
Testing for accessibility means evaluating digital products and experiences to ensure that people with disabilities can perceive, operate, understand, and interact with them. This includes both manual and automated evaluations, ideally conducted by or with disabled users.
## Role in the ENABLE Model
This is the fifth step in the ENABLE model of builder-side care. It ensures that accessibility isn't just intended in design or assumed in development -- it is verified. Without testing, undetected barriers remain, undermining equity.
## Why Testing Matters
Even with good intentions and inclusive designs, accessibility bugs frequently slip through. Testing catches these issues early -- before they reach users and force burdensome workarounds. Skipping testing often results in preventable harm, loss of trust, and legal risk.
## Examples
### Blind College Students Win $240,000 Jury Verdict Against LA Community College District (May 26, 2023)
-- Law Office of Lainey Feingold
- Two blind community college students and the National Federation of the Blind won over $240,000 after a jury found 14 ADA violations. The District's math courses, textbooks, websites, and software (including MyMathLab) were inaccessible to screen readers. Students had "tried in vain to convince the LACCD to give them the accessible tools and content they needed" before litigation -- a failure of both testing and response to feedback.
### DOJ Settlement: Service Oklahoma Mobile App Accessibility (January 22, 2024)
-- U.S. Department of Justice
- After the DOJ found Service Oklahoma violated Title II of the ADA with an inaccessible mobile app, the state agreed to ensure all mobile apps conform to WCAG 2.1, Level AA. This settlement illustrates why testing for accessibility before launch is essential -- when builder-side QA fails, legal protections become the enforcement mechanism.
### From User Perceptions to Technical Improvement: Enabling People Who Stutter to Better Use Speech Recognition (CHI 2023)
-- ACM Conference on Human Factors in Computing Systems
- Researchers found that consumer speech recognition systems do not work well for people who stutter -- users are "frequently cut off, misunderstood, or speech predictions do not represent intent." The study quantified how dysfluencies impede performance, demonstrating why accessibility testing must include diverse users with disabilities, not just automated checks.
## How to Test
- Use screen readers (e.g., NVDA, VoiceOver) to navigate your site
- Conduct keyboard-only testing to ensure navigation without a mouse
- Run automated accessibility audits (e.g., axe, Lighthouse)
- Perform user testing with people with disabilities
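Automated audits like axe and Lighthouse work by running many small programmatic checks against the rendered page. As a minimal sketch of one such check -- flagging `<img>` elements that lack `alt` text, a WCAG 1.1.1 requirement -- the following uses only Python's standard-library HTML parser (the function and class names here are illustrative, not from any real audit tool):

```python
# Minimal sketch of one rule an automated accessibility audit applies:
# flag <img> elements with no alt attribute (WCAG 1.1.1, "Non-text
# Content"). Real tools such as axe or Lighthouse run many rules like
# this; automated checks still cannot replace testing with real users.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""

    def __init__(self) -> None:
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag: str, attrs: list) -> None:
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.violations.append(attr_map.get("src", "<unknown>"))


def find_missing_alt(html: str) -> list[str]:
    """Return the src values of images missing alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations


page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(find_missing_alt(page))  # → ['chart.png']
```

Note that `alt=""` passes this check: an empty alt attribute is valid markup for decorative images, which is exactly the kind of nuance that makes manual review and testing with disabled users necessary alongside automation.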
## Care Sounds Like
- "We must perform user studies to test how well our speech recognition AI works for people who stutter."
- "We tested the signup form with a screen reader before shipping it."
- "We included blind and motor-impaired testers in our usability test."
## Neglect Sounds Like
- "It looks fine, so we assumed it works."
- "We didn't have time for a full accessibility audit."
- "We'll fix any issues after launch if someone complains."
## Real-world Scenario
Larry was locked out of his bank account during a fraud prevention call because the automated phone system couldn't understand his stutter. It's likely that no builder-side accessibility testing had been conducted with people who stutter. A simple real-world test with such a user would have revealed this failure -- and potentially prevented the barrier entirely.