Manual ADA Compliance Audit (WCAG 2.1 AA)
A manual ADA compliance audit is slow on purpose. Automation catches volume. Manual work catches failure patterns. Lawsuits are built on patterns.
This article explains what a real manual audit covers, how each item is checked, and where teams usually fool themselves. It follows WCAG 2.1 Level AA because that’s what U.S. courts keep using as a measuring stick, even though it isn’t law.
No tools will save you here. Tools assist. Humans decide.
A manual audit is a person using assistive technology, keyboard-only input, and structured inspection to test real user flows.
Not pages.
Flows.
Logging in. Paying a bill. Submitting a form. Downloading a document. Finding a phone number.
Automated scans don’t simulate frustration. Manual audits do.
scope definition comes first or the audit collapses
Before testing starts, scope gets locked.
Number of templates.
Number of unique page types.
Critical workflows.
Third-party integrations.
If scope floats, results mean nothing.
A city site with 40,000 URLs does not get 40,000 manual checks. It gets representative templates plus high-risk flows. That decision must be written down.
Courts look for that note.
environment setup matters more than most admit
Manual audits require controlled environments.
Screen readers:
NVDA 2024.1 on Windows.
JAWS 2023 on Windows.
VoiceOver on macOS and iOS.
Browsers:
Chrome.
Firefox.
Safari.
Keyboard-only navigation.
No mouse.
No trackpad.
Zoom at 200%.
Text-only zoom and full-page zoom.
If these conditions aren’t fixed, findings get inconsistent.
structure is checked before visuals
document outline
Start with headings.
Open the page.
Turn off CSS if needed.
Inspect heading order.
H1 appears once or not at all.
H2s follow.
No levels skipped to get a visual size.
Screen readers rely on this structure. Sighted users don’t notice when it’s wrong. Plaintiffs’ testers do.
How to check:
Use browser accessibility tree.
Use screen reader heading navigation.
Confirm visual sections match structural ones.
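The outline pass can be pre-screened with a short script before the screen reader pass. A sketch using Python's stdlib parser (function names are mine; it catches only structural jumps and duplicate H1s, not visual mismatches):

```python
# Sketch: flag heading-level problems in static HTML before manual review.
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.levels = []  # heading levels in document order

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def heading_issues(html):
    parser = HeadingOutline()
    parser.feed(html)
    issues = []
    if parser.levels.count(1) > 1:
        issues.append("multiple H1s")
    prev = 0
    for level in parser.levels:
        if prev and level > prev + 1:
            issues.append(f"jump from h{prev} to h{level}")
        prev = level
    return issues
```

A clean page returns an empty list; `<h1>` followed by `<h3>` returns `["jump from h1 to h3"]`. The manual check still decides whether structure matches meaning.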
landmarks and regions
Landmarks reduce noise.
Header.
Navigation.
Main.
Footer.
Complementary regions.
Legacy sites often fake these with divs.
How to check:
Screen reader landmark list.
Tab into page.
Confirm skip links land inside main, not near it.
If landmarks exist but repeat or nest incorrectly, that’s logged.
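Duplicate or nested `main` landmarks can be caught in static HTML too. A minimal sketch (only checks native landmark elements, not `role` attributes, and assumes well-formed markup):

```python
# Sketch: flag duplicate or nested <main> landmarks in static HTML.
from html.parser import HTMLParser

LANDMARKS = {"main", "nav", "header", "footer", "aside"}

class LandmarkScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.issues, self.mains = [], [], 0

    def handle_starttag(self, tag, attrs):
        if tag in LANDMARKS:
            if tag == "main":
                self.mains += 1
                if self.stack:  # main inside another landmark
                    self.issues.append("main nested inside " + self.stack[-1])
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def landmark_issues(html):
    scan = LandmarkScan()
    scan.feed(html)
    if scan.mains != 1:
        scan.issues.append(f"expected one main, found {scan.mains}")
    return scan.issues
```

Whether the skip link actually lands inside `main` still takes a keyboard and a screen reader.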
keyboard navigation is audited brutally
Keyboard testing exposes lies fast.
focus order
Tab through the page.
Watch focus.
Order must match reading order.
No jumps.
No hidden focus traps.
How to check:
Tab forward.
Shift+Tab backward.
Log every mismatch.
One broken modal fails the whole flow.
visible focus indicators
Focus must be visible at all times.
No outline removal.
No color-only indicators that disappear at zoom.
How to check:
Tab through interactive elements.
Zoom to 200%.
Check contrast of focus ring.
Designers remove outlines. Auditors put them back.
keyboard traps
Modals are repeat offenders.
User tabs in.
Can’t tab out.
Escape key fails.
How to check:
Open modal.
Tab through all elements.
Attempt escape.
Attempt shift+tab.
If escape doesn’t work, that’s a fail unless an alternative exists and is announced.
forms get the most attention for a reason
Most complaints start here.
labels and inputs
Every input needs a programmatic label.
Placeholder text doesn’t count.
Visual proximity doesn’t count.
How to check:
Inspect accessibility tree.
Use screen reader to read form fields.
Confirm label is announced before input type.
If “edit blank” is read, it fails.
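The programmatic-label rule can be approximated statically. A sketch (ignores `<label>` wrapping and `title`, so it over-reports; the screen reader pass is the real test):

```python
# Sketch: flag inputs with no programmatic label in static HTML.
from html.parser import HTMLParser

class LabelScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.label_for = set()  # ids referenced by <label for=...>
        self.inputs = []        # (id, has aria name) per visible input

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label" and a.get("for"):
            self.label_for.add(a["for"])
        elif tag == "input" and a.get("type") not in ("hidden", "submit", "button"):
            named = bool(a.get("aria-label") or a.get("aria-labelledby"))
            self.inputs.append((a.get("id"), named))

def unlabeled_inputs(html):
    scan = LabelScan()
    scan.feed(html)
    return [i for i, (id_, named) in enumerate(scan.inputs)
            if not named and (id_ is None or id_ not in scan.label_for)]
```

Note that `placeholder` is deliberately not counted as a label, matching the rule above.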
required fields and instructions
Required status must be announced before submission.
How to check:
Screen reader reads “required” or equivalent.
aria-required or native required used correctly.
Visual asterisks alone fail.
error handling
Errors must be:
Identified.
Described.
Linked to the field.
How to check:
Submit empty form.
Listen for error announcement.
Move focus to first error.
Confirm error text references field label.
Red borders don’t help screen readers.
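One common pattern for linking errors to fields is `aria-invalid` plus `aria-describedby`. A sketch that checks the reference actually resolves (it assumes this pattern; sites using live regions or other valid techniques need manual judgment):

```python
# Sketch: verify invalid fields reference an existing error element.
from html.parser import HTMLParser

class ErrorLinkScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.ids = set()
        self.described_by = []  # describedby refs on invalid inputs

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if a.get("id"):
            self.ids.add(a["id"])
        if tag == "input" and a.get("aria-invalid") == "true":
            self.described_by.append(a.get("aria-describedby"))

def unlinked_errors(html):
    scan = ErrorLinkScan()
    scan.feed(html)
    return [ref for ref in scan.described_by
            if ref is None or ref not in scan.ids]
```

An empty list means every invalid field points at real error text; `None` in the list means a field flagged invalid with no description at all.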
timing and session limits
Some forms time out silently.
How to check:
Leave form idle.
Return after timeout.
See if warning was announced.
Confirm extension option exists if timing is essential.
Few sites pass this.
non-text content is checked manually
images
Alt text must exist and match function.
Decorative images must be ignored.
Functional images must describe action.
How to check:
Disable images.
Use screen reader.
Confirm alt text isn’t file names or filler.
“Image” alone fails.
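Filler and file-name alt text can be screened with a heuristic before the manual pass. A sketch (the filler word list is mine and intentionally short; empty alt is treated as decorative, which is valid):

```python
# Sketch: flag missing, filler, or filename-like alt text.
import re

FILLER = {"image", "photo", "picture", "graphic", "img"}

def suspect_alt(alt):
    """Return a reason string if alt text looks wrong, else None."""
    if alt is None:
        return "missing alt attribute"
    text = alt.strip().lower()
    if text == "":
        return None  # alt="" is how decorative images are ignored
    if text in FILLER:
        return "filler alt text"
    if re.search(r"\.(png|jpe?g|gif|svg|webp)$", text):
        return "alt text is a file name"
    return None
```

Whether a plausible-looking alt actually matches the image's function still requires a human.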
icons and SVGs
Inline SVGs often lack accessible names.
How to check:
Tab to icon buttons.
Listen to screen reader.
Confirm role and label announced.
Hamburger menus are repeat offenders.
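An inline SVG gets an accessible name from `aria-label`, `aria-labelledby`, or a non-empty `<title>` child. A sketch that inspects only the SVG itself (a name on a wrapping `<button>` also passes, which this does not check):

```python
# Sketch: does an inline SVG expose an accessible name?
from html.parser import HTMLParser

class SvgNameScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_svg = self.named = self.has_title = self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "svg":
            self.in_svg = True
            if a.get("aria-label") or a.get("aria-labelledby"):
                self.named = True
        elif tag == "title" and self.in_svg:
            self._in_title = True

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.has_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        if tag == "svg":
            self.in_svg = False

def svg_has_name(html):
    scan = SvgNameScan()
    scan.feed(html)
    return scan.named or scan.has_title
```

The screen reader pass confirms the name is actually announced with a role.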
charts and graphs
Data must be available in text.
How to check:
Screen reader reads chart.
If not, locate text alternative.
Confirm values match.
“See chart above” fails.
color and contrast are verified manually
Automated contrast tools help but don’t replace judgment.
text contrast
Normal text: 4.5:1.
Large text: 3:1.
How to check:
Sample text at actual size.
Check contrast in hover, focus, disabled states.
Design systems forget states.
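The contrast math itself is mechanical, per the WCAG relative-luminance formula. A sketch that takes sRGB hex colors (semi-transparent overlays and gradients still need sampled screenshots):

```python
# WCAG 2.1 contrast ratio from opaque sRGB hex colors.
def _linear(channel):
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _luminance(hex_color):
    r, g, b = (int(hex_color.lstrip("#")[i:i+2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg, bg):
    l1, l2 = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg, large_text=False):
    # AA thresholds: 4.5:1 normal text, 3:1 large text
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Black on white is 21:1; `#777777` on white lands just under 4.5:1 and fails AA for normal text, which is exactly the kind of near-miss hover states hide.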
color reliance
Information can’t rely on color alone.
How to check:
View the page in grayscale, with an OS filter or mentally.
Look for icons, text, or patterns that convey the same meaning.
Error messages in red alone fail.
resizing and reflow testing
WCAG 2.1 added reflow requirements.
zoom and reflow
Resize text (1.4.4): readable and functional at 200% zoom.
Reflow (1.4.10): at 320 CSS pixels of width, roughly 400% zoom on a desktop viewport, no horizontal scrolling except for two-dimensional content like tables or maps.
How to check:
Zoom browser to 200%, then 400%.
Resize window to mobile width.
Scroll horizontally.
If content disappears or overlaps, log it.
text spacing overrides
Users may override spacing.
How to check:
Apply the 1.4.12 override values: line-height 1.5, letter-spacing 0.12em, word-spacing 0.16em, paragraph spacing 2× font size.
Confirm content remains usable.
Few legacy sites survive this.
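The override values can be injected as a test stylesheet. A minimal sketch holding the 1.4.12 minimums (the universal selector is a blunt instrument; real test bookmarklets scope more carefully):

```python
# WCAG 1.4.12 text-spacing override, applied as a test stylesheet.
TEXT_SPACING_CSS = """
* {
  line-height: 1.5 !important;        /* 1.5 x font size */
  letter-spacing: 0.12em !important;  /* 0.12 x font size */
  word-spacing: 0.16em !important;    /* 0.16 x font size */
}
p {
  margin-bottom: 2em !important;      /* paragraph spacing 2 x font size */
}
"""

def bookmarklet():
    """Build a javascript: URL that injects the override stylesheet."""
    body = TEXT_SPACING_CSS.replace("\n", " ")
    return ("javascript:(function(){var s=document.createElement('style');"
            f"s.textContent='{body}';document.head.appendChild(s);}})()")
```

Content that clips, overlaps, or disappears under this stylesheet is a 1.4.12 failure.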
multimedia requires more than captions
video captions
Captions must be accurate and synchronized.
Auto-captions often fail accuracy thresholds.
How to check:
Play video.
Read captions.
Compare spoken words.
Errors matter.
audio descriptions
If visual content conveys information not in audio, description is required.
How to check:
Watch video without looking.
Confirm audio explains visual-only actions.
Many sites skip this entirely.
navigation consistency is audited across pages
Menus must behave the same.
Order.
Labels.
Keyboard behavior.
How to check:
Navigate multiple templates.
Compare menus.
Check screen reader output.
Inconsistency confuses users and auditors.
dynamic content and ARIA use
ARIA can fix things or break them.
live regions
Used for updates like cart totals or alerts.
How to check:
Trigger update.
Listen for announcement.
Confirm it’s not repeated endlessly.
ARIA spam is real.
roles and states
Buttons must be buttons.
Links must be links.
How to check:
Inspect role announced by screen reader.
Confirm states like expanded, checked, disabled are read.
Div buttons fail often.
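The div-button pattern can be flagged statically. A sketch that looks for clickable generic elements without button semantics (inline `onclick` only; handlers attached in script need a browser-based check):

```python
# Sketch: flag clickable <div>/<span> elements missing button semantics.
from html.parser import HTMLParser

class FakeButtonScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("div", "span") and "onclick" in a:
            if a.get("role") != "button":
                self.issues.append(f"{tag} with onclick but no role=button")
            elif "tabindex" not in a:
                self.issues.append(f"{tag} role=button but not focusable")

def fake_buttons(html):
    scan = FakeButtonScan()
    scan.feed(html)
    return scan.issues
```

A real `<button>` passes untouched, which is the point: native elements carry role, focus, and keyboard activation for free.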
tables are tested structurally
data tables
Headers must be associated.
How to check:
Screen reader reads cell.
Confirm header context announced.
Visual alignment doesn’t count.
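A first-row scan catches the most common failure: header cells marked up as `<td>`. A sketch (simple tables only; multi-level headers with `scope` or `headers` attributes need the screen reader pass):

```python
# Sketch: flag tables whose first row uses <td> instead of <th>.
from html.parser import HTMLParser

class TableScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows_seen = 0
        self.in_first_row = False
        self.first_row_cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows_seen += 1
            self.in_first_row = self.rows_seen == 1
        elif tag in ("td", "th") and self.in_first_row:
            self.first_row_cells.append(tag)

def missing_headers(html):
    scan = TableScan()
    scan.feed(html)
    return "td" in scan.first_row_cells
```

If the first row is all `<td>`, a screen reader announces data cells with no header context at all.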
layout tables
Should be avoided or marked appropriately.
How to check:
Screen reader reads row/column info.
If meaningless, table is misused.
Legacy layouts still use tables.
documents linked from pages are part of the audit
PDFs count.
pdf accessibility
Tagged structure.
Readable text.
Logical order.
How to check:
Open PDF in Acrobat.
Check tags panel.
Use screen reader.
Scanned PDFs without OCR fail immediately.
third-party content is logged, not ignored
Payment processors.
Maps.
Chat widgets.
If embedded, they’re part of the experience.
How to check:
Test embedded flow.
Document vendor limitations.
Record remediation attempts.
Blame doesn’t remove responsibility.
mobile and touch testing is manual too
Touch targets must be large enough to hit reliably; 44×44 CSS pixels (WCAG 2.5.5, Level AAA) is the benchmark most audits use.
How to check:
Use real device.
Attempt to activate controls.
Confirm no hover-only content blocks access.
Responsive failures show up here.
documentation is part of the audit output
Findings without logs are weak.
Each issue should include:
Page or template.
WCAG reference.
User impact.
Reproduction steps.
Evidence notes.
Screenshots help. Screen reader transcripts help more.
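The fields above map directly to a record structure. A sketch, with illustrative values (the page path and transcript name are hypothetical):

```python
# Sketch: one structured finding, matching the fields listed above.
from dataclasses import dataclass, field

@dataclass
class Finding:
    page: str               # page or template
    wcag_ref: str           # success criterion, e.g. "2.1.2 No Keyboard Trap"
    user_impact: str
    repro_steps: list       # ordered reproduction steps
    evidence: list = field(default_factory=list)  # screenshots, SR transcripts

finding = Finding(
    page="/payments/checkout",                       # hypothetical path
    wcag_ref="2.1.2 No Keyboard Trap",
    user_impact="Keyboard users cannot leave the payment iframe.",
    repro_steps=["Tab into iframe", "Press Escape", "Press Shift+Tab"],
    evidence=["NVDA transcript, session 1"],         # hypothetical evidence
)
```

One record per issue, exported to whatever the remediation team tracks; the structure is what matters, not the format.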
one real audit example
In 2023, a county tax portal passed automated scans with under 10 errors.
Manual audit found:
Keyboard trap in payment iframe.
Unlabeled checkboxes in exemptions form.
Error messages announced after focus moved away.
Fixing these took two weeks. Automated tools never flagged them.
That gap is why manual audits exist.
limitations of manual audits
Manual audits don’t scale forever.
They miss rare edge cases.
They depend on tester skill.
They cost more.
They also reflect how lawsuits are tested. That’s the trade.
what courts actually look for
Courts don’t expect zero issues.
They look for:
Documented audits.
Real testing.
Ongoing fixes.
Patterns of effort.
A manual audit shows intent backed by work.
final reality
A manual ADA compliance audit is uncomfortable because it removes excuses.
It shows where structure failed.
Where shortcuts were taken.
Where automation lied.
It doesn’t fix anything by itself. It tells the truth.