Weekly Scans – $49/mo
Weekly scans at $49 a month sit in a strange place. They’re cheap enough to be dismissed. They’re automated enough to be misunderstood. And they’re common enough that plaintiffs’ firms recognize the pattern immediately when they see one referenced in a response letter.
This isn’t an endorsement or a teardown. It’s a description of what weekly automated accessibility scans actually do, what they miss, and why the price point matters more than people admit.
what a weekly scan is
A weekly scan is an automated crawl of a website using a rules engine mapped to WCAG success criteria. The scan runs on a schedule. It checks HTML, CSS, and some JavaScript-rendered output. It produces a report.
That’s it.
No screen reader.
No keyboard-only navigation.
No judgment calls.
The scanner doesn’t know intent. It checks patterns.
Most $49 plans cap pages. Common limits sit between 500 and 5,000 URLs. Anything beyond that is ignored or sampled.
That limit should be written down before anyone claims coverage.
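To see what a page cap means in practice, here is a minimal sketch of a capped breadth-first crawl. The queue logic, the cap value, and the link-fetching stub are illustrative assumptions, not any vendor's actual crawler:

```python
from collections import deque

def crawl(start_url, get_links, max_pages=500):
    """Breadth-first crawl that silently stops at max_pages.

    Anything past the cap is never scanned at all. A site with
    5,001 URLs and a 5,000-page cap has one page with zero coverage,
    and the report will not say which one.
    get_links stands in for "fetch the page, extract its links".
    """
    seen, queue, scanned = {start_url}, deque([start_url]), []
    while queue and len(scanned) < max_pages:
        url = queue.popleft()
        scanned.append(url)  # this page gets checked
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return scanned  # pages beyond the cap are simply absent

# A toy site: every page links to ten unique child pages.
links = lambda u: [f"{u}/{i}" for i in range(10)]
covered = crawl("https://example.com", links, max_pages=50)
print(len(covered))  # 50 -- everything else is invisible to the report
```

The cap is enforced quietly, which is exactly why it should be written down.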
what the scan actually checks
Automated scanners are good at surface-level failures.
Missing alt attributes.
Empty links.
Duplicate IDs.
Form inputs without programmatic labels.
ARIA roles that don’t exist.
Contrast math, sometimes.
These are binary checks. Either the markup exists or it doesn’t.
If a page has <img src="photo.jpg"> with no alt, the scan flags it. If the alt exists but says “image123,” the scan passes it. That distinction matters later.
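A minimal sketch of that binary check, using Python's standard-library HTML parser. The rule logic is illustrative, not a real scanner's, but it shows both halves of the point: the missing attribute is flagged, the useless one passes.

```python
from html.parser import HTMLParser

class AltCheck(HTMLParser):
    """Flags <img> tags with no alt attribute at all.

    Note what it passes: alt="image123" satisfies the check
    even though it helps no one."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "?"))

def scan(html):
    checker = AltCheck()
    checker.feed(html)
    return checker.violations

print(scan('<img src="photo.jpg">'))                 # flagged: ['photo.jpg']
print(scan('<img src="photo.jpg" alt="image123">'))  # passes: []
```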
how WCAG mapping works in scanners
Most scanners map findings to WCAG 2.1 AA by reference number.
1.1.1 for non-text content.
1.3.1 for info and relationships.
2.4.4 for link purpose.
4.1.2 for name, role, value.
The mapping is mechanical. It doesn’t mean compliance. It means detection.
A scanner saying “WCAG 2.1 AA aligned” usually means “mapped,” not “tested.”
That language difference shows up in court filings.
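The mapping itself is little more than a lookup table. A sketch follows; the rule IDs are made up, while the WCAG numbers are the real success criteria listed above:

```python
# Hypothetical rule IDs mapped to real WCAG 2.1 success criteria.
# "Mapped" means a finding gets tagged with a number -- nothing more.
RULE_TO_WCAG = {
    "img-missing-alt":    "1.1.1 Non-text Content",
    "list-not-marked-up": "1.3.1 Info and Relationships",
    "link-empty-text":    "2.4.4 Link Purpose (In Context)",
    "aria-missing-name":  "4.1.2 Name, Role, Value",
}

def tag_finding(rule_id):
    """Attach a WCAG reference to a raw finding. Detection, not compliance."""
    return {"rule": rule_id, "wcag": RULE_TO_WCAG.get(rule_id, "unmapped")}

print(tag_finding("img-missing-alt")["wcag"])  # 1.1.1 Non-text Content
```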
what weekly cadence changes
Running scans weekly does one thing well.
It catches regressions.
A developer pushes a new header.
Navigation breaks.
Labels disappear.
Contrast drops.
Weekly scans catch that faster than quarterly scans.
They do not catch the original design flaw. They catch the repeat.
That’s useful. It’s not comprehensive.
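The regression value comes from diffing this week's findings against last week's. A sketch of that comparison, assuming each finding is identified by a (page, rule) pair; the finding format is an illustration, not a standard:

```python
def regressions(last_week, this_week):
    """Return findings present now but absent last week.

    Long-standing issues drop out of the diff entirely -- which is
    exactly why weekly scans catch the repeat, never the original
    design flaw.
    """
    return sorted(set(this_week) - set(last_week))

last = {("/", "img-missing-alt"), ("/cart", "link-empty-text")}
now  = {("/", "img-missing-alt"),
        ("/cart", "link-empty-text"),
        ("/", "form-missing-label")}  # new header shipped, labels gone

print(regressions(last, now))  # [('/', 'form-missing-label')]
```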
the $49 price point matters
At $49 per month, the business model assumes volume. No human review. No remediation. No legal analysis.
The scan runs.
The report emails.
That’s the product.
Anyone expecting more at that price misunderstands how labor costs work.
That’s not cynicism. It’s math.
reporting format and its limits
Weekly scan reports tend to look similar across vendors.
Issue count.
Severity buckets.
Page URLs.
WCAG references.
What they rarely include:
User impact description.
Reproduction steps using assistive tech.
Legal relevance.
Courts don’t accept raw scanner output as proof of accessibility. They accept it as evidence of monitoring.
That distinction matters.
a real example from a small retailer
In 2022, a regional retailer with about 1,200 product pages ran weekly scans at $49 per month. The scanner reported under 30 issues consistently.
A blind user filed a complaint after being unable to complete checkout using NVDA.
Manual testing found:
Focus lost inside the cart modal.
Error messages announced after focus moved away.
Required fields not announced as required.
The scanner flagged none of these.
The retailer didn’t lie. They relied on the wrong layer.
what weekly scans miss by design
Automated scanners cannot test:
Keyboard traps.
Focus order logic.
Screen reader announcements.
Meaningful link text.
Error recovery.
Timed interactions.
CAPTCHAs.
PDF reading order.
These failures make up the majority of real complaints.
The scanner isn’t broken. It’s blind to behavior.
false positives are real too
Scanners over-report.
ARIA attributes flagged as invalid but harmless.
Contrast flagged on decorative text.
Hidden elements flagged that users never reach.
Teams spend time chasing noise. At $49 a month, no one filters it for you.
That labor cost shows up elsewhere.
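Filtering that noise is work teams end up scripting themselves. A sketch of a suppression list, with rule IDs and the finding shape as illustrative assumptions:

```python
# Rule IDs a team has manually reviewed and decided to suppress.
# Maintaining this list is the unpaid labor the $49 price excludes.
SUPPRESSED = {"aria-harmless-attr", "contrast-decorative-text"}

def triage(findings):
    """Split raw scanner output into actionable issues and known noise."""
    actionable = [f for f in findings if f["rule"] not in SUPPRESSED]
    noise      = [f for f in findings if f["rule"] in SUPPRESSED]
    return actionable, noise

report = [
    {"rule": "img-missing-alt", "page": "/"},
    {"rule": "contrast-decorative-text", "page": "/"},
]
actionable, noise = triage(report)
print(len(actionable), len(noise))  # 1 1
```

Every entry in the suppression list is a judgment call, and judgment is the thing the price point excludes.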
overlays and widgets don’t fix scan findings
Some weekly scan services bundle overlays. The scan shows fewer errors after installation.
That doesn’t mean the site improved. It means the overlay injected markup the scanner likes.
Courts have rejected that logic repeatedly.
Scan reduction is not user access.
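The effect is easy to demonstrate: same page, same check, lower count after an overlay injects the attribute the rule tests for. The overlay behavior modeled here is a deliberate simplification, not any specific product:

```python
import re

def count_missing_alt(html):
    """The scanner's rule: count <img> tags with no alt attribute."""
    return sum(1 for tag in re.findall(r"<img\b[^>]*>", html)
               if "alt=" not in tag)

def overlay(html):
    """Inject empty alt attributes the way an overlay script might.

    The image still conveys nothing to a screen reader user --
    only the scanner's count changes."""
    return re.sub(r"<img\b(?![^>]*alt=)", '<img alt=""', html)

page = '<img src="hero.jpg"><img src="sale.jpg">'
print(count_missing_alt(page))           # 2
print(count_missing_alt(overlay(page)))  # 0 -- the report improves, the user's experience doesn't
```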
how plaintiffs’ testers use scanners
Plaintiffs’ firms run scanners too. They don’t rely on them.
They use scans to identify likely problem sites. Then they test manually.
If a defendant cites weekly scans, testers know where to look next. They test beyond the scanner’s reach.
This pattern shows up in demand letters.
what weekly scans are actually good for
Used correctly, weekly scans serve three purposes.
Regression detection.
Baseline documentation.
Internal accountability.
They answer one narrow question: did the code change introduce new detectable failures?
They don’t answer: can a disabled user complete tasks?
documentation value in disputes
Scan logs help when paired with real work.
Timestamped reports.
Consistent monitoring.
Issue history.
Courts like seeing effort. They don’t accept it as proof of access.
A weekly scan log without remediation notes is thin.
trade-offs nobody mentions
Weekly scans create a false sense of closure.
Teams see low numbers.
They assume risk dropped.
They stop manual testing.
That trade-off shows up months later in legal spend.
Another trade-off is complacency. Developers learn how to “appease the scanner” instead of fixing underlying structure.
scanning frequency versus depth
Weekly frequency doesn’t compensate for shallow checks.
A bad test, run often, is still a bad test.
Depth costs more than $49 a month. That’s not an insult. It’s reality.
the SEO angle people misunderstand
Automated accessibility scans don’t improve rankings by themselves.
Fixing real issues sometimes helps crawlability. That’s indirect.
Google doesn’t read your scan reports. It reads your DOM.
If the scan leads to better HTML, SEO benefits follow. If not, nothing changes.
where weekly scans fit in a real program
Weekly scans belong at the bottom of the stack.
Manual audit first.
Remediation next.
Training ongoing.
Scans last.
Flipping that order fails quietly.
why cheap scans stay popular
They’re easy to buy.
They’re easy to explain.
They’re easy to budget.
They also look good in dashboards.
That’s not the same as working for users.
a note on government and education sites
Public sector sites often rely on weekly scans because procurement allows it.
Budgets are tight.
Staff rotates.
Automation fills gaps.
But Title II complaints focus on usability, not scan metrics. Several DOJ settlements reference manual testing explicitly.
The scan alone doesn’t satisfy obligations.
what “$49/mo” signals to lawyers
It signals minimal effort, not bad faith.
Lawyers know the price.
They know the limitations.
They adjust expectations accordingly.
Citing a $49 scan as sole compliance evidence weakens a defense. Citing it as part of a stack is neutral.
final reality
Weekly scans at $49 a month are tools, not shields.
They catch patterns.
They miss behavior.
They document activity.
They don’t make a site accessible. They don’t stop lawsuits. They don’t understand users.
Used honestly, they reduce regressions. Used alone, they create blind spots.
That’s the whole story.