📚 Cite this page

AMA
Weru Lawrence. Untitled. The ENABLE Model website. Published 2025. Accessed 2026-04-01. https://enablemodel.com/docs/manifestations/deque

APA
Weru, L. (2025). Untitled. The ENABLE Model. https://enablemodel.com/docs/manifestations/deque

MLA
Weru, Lawrence. "Untitled." The ENABLE Model, 2025, https://enablemodel.com/docs/manifestations/deque.

Chicago
Weru, Lawrence. "Untitled." The ENABLE Model. 2025. https://enablemodel.com/docs/manifestations/deque.

BibTeX

@misc{enable2025deque,
  author = {Weru, Lawrence},
  title = {Untitled},
  year = {2025},
  url = {https://enablemodel.com/docs/manifestations/deque},
  note = {The ENABLE Model}
}

Deque / axe-core

Developers run axe-core in CI pipelines to catch and block detectable WCAG violations before code ships, using the free open-source engine that Deque Systems controls and that Google chose to power Chrome DevTools accessibility audits.

What it is

Deque Systems was founded in 1999 by Preety Kumar, who has described her goal as "unifying Web access, both from the user and the technology perspective."1 The company is headquartered in Herndon, Virginia, with offices in Hyderabad and Utrecht.2

In June 2015, Deque open-sourced axe-core, an accessibility rules engine that scans web and mobile applications for detectable WCAG violations -- missing alt text, color contrast failures, missing form labels, ARIA misuse -- and reports results without false positives.3 When axe-core is uncertain, those cases are "flagged for manual review" rather than reported as violations.3 By 2025, axe-core had been downloaded more than 3 billion times and was used in over 13 million GitHub projects.4 In 2017, Google selected axe-core as the engine powering Chrome DevTools' Lighthouse accessibility audits.5 Google and Microsoft identify it as their standard for automated accessibility results.3 The U.S. Department of Justice uses it as a reference engine.3
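The split between certain violations and uncertain cases is visible in axe-core's documented result object, which separates `violations` (definite failures) from `incomplete` (checks that need a human). The sketch below shows how a team might route the two buckets; the `triageAxeResults` helper and its trimmed-down types are illustrative, not part of axe-core's API.

```typescript
// Minimal sketch, assuming axe-core's documented result shape from axe.run():
// certain failures land in `violations`, uncertain checks land in `incomplete`.
// The types and the triage helper here are hypothetical simplifications.
interface AxeRuleResult {
  id: string;       // rule id, e.g. "image-alt"
  help: string;     // human-readable description
}

interface AxeResults {
  violations: AxeRuleResult[];  // zero-false-positive bucket: these are real
  incomplete: AxeRuleResult[];  // uncertain bucket: flagged for manual review
}

function triageAxeResults(results: AxeResults): { block: string[]; manualReview: string[] } {
  return {
    // Anything here is a detectable violation; a CI gate can fail on it.
    block: results.violations.map((r) => r.id),
    // Anything here goes to the human review queue instead of failing the build.
    manualReview: results.incomplete.map((r) => r.id),
  };
}

// Example: one certain violation, one contrast check axe could not compute.
const triaged = triageAxeResults({
  violations: [{ id: "image-alt", help: "Images must have alternate text" }],
  incomplete: [{ id: "color-contrast", help: "Contrast could not be computed" }],
});
// triaged.block → ["image-alt"]; triaged.manualReview → ["color-contrast"]
```

Routing `incomplete` separately is what keeps the zero-false-positive promise honest: uncertain cases are surfaced without being counted as failures.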

The free axe-core engine is the base of a commercial product stack. Deque sells axe DevTools Pro -- a paid browser extension and IDE linter that adds Intelligent Guided Tests and CI/CD blocking -- axe Monitor for continuous site scanning, and axe Auditor for managing manual audit workflows.6 Enterprise contracts run $1,200–$2,500 per developer seat annually.7 Deque's estimated annual revenue is $50M–$100M; no external funding has been publicly disclosed.8 In Q4 2025, Forrester Research named Deque a Leader in its Digital Accessibility Platforms Wave report, awarding the company the highest possible scores across 21 criteria.9

Deque runs axe-con, a free annual digital accessibility conference; the 2024 edition was its fourth year.10

The ENABLE Model observes Deque at builder-side/qa-testing: the point in development where code is checked against accessibility standards before it reaches production. axe-core is the most widely integrated tool at this stage. When qa-testing is absent or inadequate, violations reach users, and the labor of discovering and documenting them falls to disabled people as navigator-side/give-feedback or, when feedback produces no response, assert-rights.

Why it matters

The history of automated web accessibility testing runs from academic nonprofits to a private company's engine at the center of the global standard.

Bobby launched in the late 1990s from the Center for Applied Special Technology (CAST), an independent nonprofit research organization in Massachusetts.11 It was the first automated accessibility checker, offering a "Bobby Approved" badge to websites that passed. CAST closed the tool in 2005 after IBM acquired the company that had bought it. WAVE followed in 2000 as a research project at Temple University, later taken over by WebAIM -- a nonprofit at Utah State University.12 Both tools came from the academic and nonprofit sector. When Google selected axe-core for Chrome DevTools in 2017, the de facto standard for automated accessibility testing shifted to a private company's rules engine. Developers who open Chrome DevTools today and check accessibility are using Deque's ruleset. The shift happened without policy deliberation or public procurement. Google chose axe-core and the market consolidated around the choice.

The consolidation gave Deque structural influence over what "automated accessibility testing" means, including influence over how the field measures itself. In March 2021, Deque released a Business Wire press release: "Deque Study Shows Its Automated Testing Identifies 57 Percent of Digital Accessibility Issues, Surpassing Accepted Industry Benchmarks."13 The study used anonymized data from over 2,000 audits and 13,000+ pages drawn from Deque's own first-time audit customers.13 The prior consensus held that automated tools catch 20–30% of WCAG success criteria, a figure based on counting which criteria can be tested automatically. Deque's study measured instead the share of total detected issues that automated tools find -- a different question. High-frequency issues like missing alt text and color contrast dominate audit counts and are exactly what automated tools catch, so the volume-based metric produces a higher number. The two figures measure different things. Independent researcher Adrian Roselli's 2025 manual review found almost seven-and-a-half times more issues than the automated tool with the next highest detection count, across three times as many WCAG success criteria.14 W3C's Accessibility Conformance Testing Task Force data shows automated testing can fully or partially cover only 17 of 55 WCAG 2.2 Level A/AA success criteria -- 31%.14 The 57% and 31% figures both describe automated accessibility testing; they answer different questions and should be read together. The field does not yet have an independent body setting this benchmark, so the standard-setter and the tool-vendor are the same organization.

axe-core is genuinely free: MIT licensed, embedded in every major browser's developer tools, integrated into test frameworks like Playwright, Cypress, and Jest, and usable by any developer without cost or account. Before axe-core, there was no shared open-source engine that development teams could integrate directly into CI pipelines. It moved builder-side/qa-testing to the earliest possible moment in development: violations caught at the pull request stage never reach production, and never become the discovery labor that disabled users would otherwise carry as navigator-side/give-feedback. When a developer's pull request fails an axe-core rule, that specific violation is stopped before it can shift downstream. The commercial model that makes this sustainable is standard freemium: axe-core is free, the paid tools add coverage and workflow features, and Deque captures the enterprise market on top of the open standard it maintains. That structure is also what funds axe-core's development and the W3C standardization work Deque leads. The structural question it raises -- a private company holds the free standard that defines accessibility compliance for the global web -- is a feature of how open-source infrastructure often develops.
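The pull-request gate described above can be sketched as a small function. In a real Playwright setup the violations would come from something like `new AxeBuilder({ page }).analyze()` via `@axe-core/playwright`; the gate logic is shown here as a standalone function, with hypothetical trimmed-down types, so the sketch is self-contained.

```typescript
// Illustrative CI gate: fail the build when axe-core reports any violations,
// so the merge is blocked before the violation can ship. The Violation type
// is a simplified stand-in for axe-core's result entries.
interface Violation {
  id: string;        // rule id, e.g. "label"
  nodes: unknown[];  // the DOM nodes that failed the rule
}

function gateOnViolations(violations: Violation[]): void {
  if (violations.length > 0) {
    const summary = violations
      .map((v) => `${v.id} (${v.nodes.length} nodes)`)
      .join(", ");
    // Throwing makes the test runner exit non-zero, which blocks the merge.
    throw new Error(`axe-core violations found: ${summary}`);
  }
}

// A clean scan passes silently; any violation throws and fails the CI job.
gateOnViolations([]);
```

Because axe-core only places certain failures in `violations`, a gate like this never blocks a merge on a guess, which is what makes blocking-by-default tolerable to development teams.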

The WebAIM Million report -- an annual scan of the top 1 million websites' home pages -- uses WAVE, not axe-core, as its engine. In 2025, it found that 94.8% of home pages had detected WCAG failures, an average of 51 errors per page.15 Low contrast text appeared on 79.1% of pages. Missing alt text on 18.5% of all home page images. The figure reflects only automatically detectable errors; the actual failure rate including issues no automated tool can catch is higher. This is not a verdict on axe-core. Web inaccessibility has structural causes that predate and exceed any testing tool: builders who do not run the tests, organizations that run them but do not act on results -- a form of abandonment where accessibility is deprioritized after the scan passes -- and the absence of mandatory enforcement in most jurisdictions. What the WebAIM Million shows is that automated testing infrastructure, however well-distributed, does not by itself change the structural conditions that produce inaccessible output.

The zero false positives policy produces a specific kind of institutional risk. Deque designed axe-core to never report something as a violation unless it is certain.3 This means developers can trust what axe-core flags. It also means developers who treat "no axe-core violations" as "our site is accessible" are reaching a false conclusion. axe-core cannot determine whether screen reader announcements make sense to a user, whether keyboard navigation is logical, whether focus order is intuitive, or whether alternative text conveys the right information.16 Slack, which integrated axe-core into its CI pipeline, wrote explicitly that "automated tools can overlook nuanced accessibility issues that require human judgment, such as screen reader usability" -- and initially made axe-core checks optional rather than blocking, because the team understood the gap between a clean scan and actual usability.16 The zero false positive design is the right tradeoff for developer tool credibility. The problem is organizational: when axe-core passes, the organization might treat the accessibility question as answered. When the disabled user finds the barrier, they must give feedback to an organization that has documentation showing no violations. The burden of proof shifts to the navigator. If feedback is ignored, the next step is asserting rights -- filing a complaint or pursuing legal action against a builder whose testing logs show a clean result.

The bodily stakes of the gap between a passing scan and a usable site are real. When a disabled person cannot complete a government benefits application, schedule a medical appointment, or access their insurance explanation of benefits because the site passes automated testing but is not navigable with a screen reader, the harm lands in a body. That harm is built into the structure of how inaccessibility is tolerated -- not produced by axe-core's existence, and not something any testing tool alone could eliminate. axe-core catches violations at the frontier of what automated detection can currently do, and makes that detection available for free. The structural work of ensuring those detections translate into accessible experiences belongs to the organizations that deploy the tool: requirement-setting that mandates accessibility before development starts, design that builds in inclusion before code is written, and iteration that acts on what the tests surface. axe-core automates qa-testing, which is incomplete without manual testing. Whether the remaining stages happen is a choice each organization makes independent of the tool.

Real-world examples

In the news

Deque Study Shows Its Automated Testing Identifies 57 Percent of Digital Accessibility Issues, Surpassing Accepted Industry Benchmarks (March 2021)
-- Business Wire

  • Deque issued this press release reframing the standard benchmark for automated accessibility coverage. The prior consensus held that automated tools catch 20–30% of WCAG success criteria. Deque's study, based on its own first-time audit customer data, argued the figure was 57% when measured by volume of detected issues rather than coverage of WCAG success criteria. The figure circulates in the industry as the standard reference. Deque, which profits from automated accessibility tools, conducted and published the research behind the benchmark.
In the news

The WebAIM Million: 2025 Report (2025)
-- WebAIM

  • The 2025 annual scan of the top 1 million home pages found 94.8% had detected WCAG 2 failures -- an average of 51 errors per page. Low contrast text appeared on 79.1% of pages. axe-core has been embedded in Chrome DevTools, Lighthouse, and CI pipelines worldwide since 2017. The report measures only what automated scanning can detect; the actual failure rate including screen reader usability, logical navigation order, and other issues no automated tool can flag is higher. The report documents that automated testing infrastructure at scale has not eliminated the baseline of web inaccessibility that disabled navigators encounter daily.
In the news

Automated Accessibility Testing at Slack (January 2025)
-- Natalie Stormann, Engineering at Slack

  • Slack's engineering team described integrating axe-core via Playwright with an opt-in flag for developers. They wrote: "we set it to run as non-blocking. This meant developers would see the test results, but a failure or violation would not prevent them from merging their code to production." The team acknowledged: "Automated tools can overlook nuanced accessibility issues that require human judgment, such as screen reader usability." Slack's approach -- treat automated testing as one layer of a multi-method system, not a gate -- represents builder-side qa-testing practice that acknowledges the tool's limits rather than outsourcing the accessibility question to a passing score.
  • Google selected axe-core as the engine for Chrome DevTools Lighthouse accessibility audits in 2017.5 Every developer who opens Chrome DevTools and runs an accessibility audit is using Deque's rules. This is market position that no accessibility company has previously held -- embedded in the default tooling of the world's most widely used browser.
  • Bobby, the first automated accessibility checker (late 1990s, CAST), was a nonprofit tool. WAVE (2000, WebAIM/Utah State University) is still actively maintained as a free nonprofit tool. The transition from nonprofit academic tools to a private company's engine at the center of the standard has not been publicly deliberated.11 12
  • axe DevTools Pro adds "Intelligent Guided Tests" -- semi-automated flows that direct human testers through scenarios axe-core's automated engine cannot handle, like keyboard navigation flows and screen reader announcements.6 The existence of this paid tier acknowledges that the free automated tool is incomplete. The incompleteness is part of the product architecture.
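Slack's phased approach above -- report everything, block on nothing at first, then tighten -- can be expressed as a small policy function. axe-core's documented impact values are "minor", "moderate", "serious", and "critical"; the policy helper itself is a hypothetical sketch, not Slack's actual implementation.

```typescript
// Illustrative rollout policy: findings are always reported, but only a
// configurable subset of impact levels blocks the merge. Passing an empty
// set reproduces Slack's initial non-blocking mode.
type Impact = "minor" | "moderate" | "serious" | "critical"; // axe-core's impact scale

interface Finding {
  ruleId: string;
  impact: Impact;
}

function shouldBlockMerge(findings: Finding[], blockingImpacts: Set<Impact>): boolean {
  // Advisory-only when blockingImpacts is empty; otherwise block on any match.
  return findings.some((f) => blockingImpacts.has(f.impact));
}

const findings: Finding[] = [
  { ruleId: "color-contrast", impact: "serious" },
  { ruleId: "region", impact: "moderate" },
];

// Advisory phase: developers see results, nothing blocks.
shouldBlockMerge(findings, new Set<Impact>()); // → false
// Enforcement phase: serious and critical findings block the merge.
shouldBlockMerge(findings, new Set<Impact>(["serious", "critical"])); // → true
```

The design choice the sketch encodes is the one Slack describes: visibility first, enforcement later, so the gate earns trust before it starts blocking code.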

What care sounds like (builder-side interventions)

Care at the qa-testing stage involves integrating detection into development workflows while understanding what automated tests can and cannot tell you:

  • "We block merges on axe-core violations, and we schedule weekly screen reader testing sessions with blind testers on top of that."
  • "A passing axe-core scan is the floor, not the ceiling. We still need someone using a screen reader to confirm the experience makes sense."
  • "We run axe-core in our CI pipeline so every build is checked, and we flag things axe marks as 'incomplete' for the human review queue."
  • "We chose axe-core because it has zero false positives -- so when it flags something, it's real, and we fix it before merge."

What neglect sounds like (builder-side interventions)

Neglect involves using automated testing as a checkbox that substitutes for comprehensive accessibility rather than as one detection layer in a multi-method system:

  • "Axe-core passes, so we're accessible." *1
  • "We'll add accessibility testing after launch." *2
  • "Our CI pipeline handles accessibility." *3
  • "We don't have the budget for manual testing with disabled users -- that's what automated tools are for." *4

*1: A passing axe-core scan means no automatically detectable violations. It says nothing about screen reader usability, logical focus order, meaningful alternative text, or keyboard navigation logic.16
*2: Post-launch is when disabled users discover and report barriers. Automated testing integrated into CI catches detectable violations before they reach users -- but only if it runs at development time, not retroactively.
*3: CI automation catches the structural violations axe-core can identify. It cannot check whether a complex interactive component makes sense when navigated by keyboard alone, or whether a screen reader's announcement of a live region is comprehensible.
*4: Manual testing with disabled users is where the majority of accessibility barriers are found. axe-core catching 57% (by Deque's measure) of issues from first-time audit customers means a substantial portion of barriers reaches users undetected even when automated testing is in place.13

What compensation sounds like (navigator-side compensations)

Compensation describes the labor disabled people carry when builders treat a passing axe-core scan as an accessibility guarantee. The pattern pushes toward give-feedback -- reporting a barrier to an organization whose test results show none -- and when feedback fails, toward asserting rights or switching to an alternative:

  • "The form passed their automated test. The labels are there. But the error messages fire before the validation runs, so my screen reader announces them before I've finished entering anything -- and I can never figure out what I did wrong."
  • "I reported the navigation issue three times. They keep sending me their axe-core scan results showing no violations. That's not what I'm describing."
  • "I filed a complaint. Their response was that all pages meet WCAG standards per their accessibility testing platform. The pages are technically labeled. I still can't use them."
  • "I've learned to bring documentation when I report accessibility barriers -- video of my screen reader, exact steps to reproduce -- because the first response is always 'our testing shows no issues.'"
  • "I switched to a different service. The one I was using kept saying they were compliant. Compliant with what they can automatically check, maybe. Compliant with what I need to get through their checkout, no."

All observations occur within the context of web software development in the United States and globally, where automated accessibility testing consolidated around a single private company's open-source engine -- which runs inside the world's most-used browser's developer tools, is used by government agencies as a compliance reference, and remains the free base layer of that company's commercial product stack -- while the proportion of the world's most-visited websites that fail even automated accessibility checks remained above 94% through 2025.

Footnotes

  1. Preety Kumar | Deque

  2. Company Overview | Deque Systems

  3. Axe-core by Deque | open source accessibility engine for automated testing

  4. Axe-core at 3 billion: A milestone in the movement for digital accessibility

  5. Google Selects Deque's axe for Chrome DevTools

  6. Axe DevTools | Automate accessibility testing

  7. Deque Software Pricing & Plans 2026

  8. Financial Report: Revenue for Digital Accessibility Companies

  9. Deque named a leader in The Forrester Wave™: Digital Accessibility Platforms

  10. Deque and IAAP announce the "Fund the Future" Certification Scholarship Fund

  11. Bobby – the First Automated Accessibility Testing Tool

  12. An Early History of Web Accessibility

  13. Deque Study Shows Its Automated Testing Identifies 57 Percent of Digital Accessibility Issues

  14. Automated WCAG Testing Is Grrreat! — Adrian Roselli

  15. The WebAIM Million: 2025 Report

  16. Automated Accessibility Testing at Slack


Edited by Lawrence Weru S.M. (Harvard)

📝 Disclaimer

The ENABLE Model draws on the principles of anthropology and the practice of journalism to create a public ethnography of accessibility, documenting how people intervene or compensate for accessibility breakdowns in the real world. Inclusion here does not imply endorsement. It chronicles observed use -- how a tool, organization, or strategy is actually used -- rather than how it is marketed. References, when provided, are for verification and transparency.

