The Monthly Compliance Log (Court‑Admissible)
Monthly compliance logs exist because memory fails under scrutiny. Screenshots lie. People leave jobs. Websites change weekly. Courts want records that survive all of that.
A court-admissible monthly compliance log is not a dashboard export. It’s not a PDF scorecard. It’s a written, time-bound record that shows what was checked, what failed, what was fixed, and what stayed broken. With dates. With owners. With context.
This article explains what those logs look like when they actually hold up in ADA investigations and litigation. Not the sales version. The version defense counsel asks for after a demand letter lands.
why “we’re compliant” never works
ADA web cases don’t turn on intent. They turn on access at a specific moment.
When a blind user files a complaint, investigators ask one question: what did the site do then, and what have you done since?
Verbal answers don’t count. Neither do undated audits. Neither do accessibility widgets.
Logs exist to answer that question without improvising.
Monthly compliance logs show up in three places.
First, during investigations by the U.S. Department of Justice or the Office for Civil Rights. Investigators request documentation. Logs shorten that exchange.
Second, during settlement negotiations. Defense counsel uses logs to show maintenance, not perfection.
Third, in discovery. Plaintiffs’ experts review logs to see whether accessibility failures repeat or get ignored.
Logs don’t end cases. They shape remedies.
monthly cadence is deliberate
Weekly logs are noisy. Quarterly logs are too thin.
Monthly logs match how courts think. They show ongoing effort without drowning reviewers in data.
Most DOJ settlement agreements that require monitoring specify monthly or quarterly reporting. Monthly logs fit cleanly into that expectation.
Frequency is part of credibility.
what makes a log “court-admissible”
Courts don’t certify logs. Judges assess reliability.
Reliable logs share traits:
They are dated and immutable.
They identify the scope tested.
They name tools and methods used.
They list failures in plain language.
They track remediation status.
They persist over time.
Anything that looks like it was assembled after the fact gets questioned.
screenshots are not logs
Screenshots show what someone wanted to capture. Logs show process.
A screenshot of a green score means nothing without history. A log that shows recurring failures does.
Courts prefer boring consistency over pretty visuals.
what goes into a proper monthly compliance log
A usable log has sections. Not templates. Sections.
It starts with scope.
Which domains. Which subdomains. Which apps. Which document repositories. Which third-party tools are included. Which are excluded, and why.
Vagueness here weakens everything downstream.
testing methods need to be explicit
Logs must state how testing was done.
Automated scanning tools, with names and versions.
Manual testing methods.
Assistive technologies used.
Browsers and devices.
A line that says “WCAG tested” means nothing.
Naming tools matters. Courts understand tools even if they don’t understand code.
wcag versions must be stated
Logs need to specify which WCAG version was used as reference.
Most logs cite WCAG 2.0 or 2.1 Level AA. Increasingly, WCAG 2.2 appears.
WCAG is published by the World Wide Web Consortium. It’s not law. It’s still the benchmark.
Failing to name a version makes scope arguments harder later.
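What does that header information look like in practice? Here is a minimal sketch in Python, assuming the team keeps its log as structured records. The field names and sample values are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class MonthlyLogHeader:
    """Header for one month's compliance log entry. Field names are illustrative."""
    period: str                      # e.g. "2024-03"
    author: str                      # who compiled the entry
    wcag_version: str                # reference standard, stated explicitly
    scope_included: list[str]        # domains, subdomains, apps, repositories
    scope_excluded: dict[str, str]   # excluded item -> reason for exclusion
    automated_tools: list[str]       # tool names with versions
    manual_methods: list[str]        # what was actually tested by hand
    assistive_tech: list[str]        # screen readers and versions used
    browsers_devices: list[str]      # what the testing ran on

header = MonthlyLogHeader(
    period="2024-03",
    author="Jane Doe, Web Services",
    wcag_version="WCAG 2.1 Level AA",
    scope_included=["www.example.gov", "pay.example.gov"],
    scope_excluded={"legacy.example.gov": "scheduled for decommission, noted since 2023-11"},
    automated_tools=["axe-core 4.8.2"],
    manual_methods=["keyboard-only walkthrough of payment flow"],
    assistive_tech=["NVDA 2023.3 with Firefox"],
    browsers_devices=["Firefox 122 / Windows 11", "Safari / iOS 17"],
)
```

A spreadsheet with the same columns works just as well. The structure is the point, not the tooling.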
issues must be described functionally
Logs that list “SC 1.3.1 failure” are weak.
Strong logs say what broke.
“Screen reader does not announce required fields on payment form.”
“Keyboard focus disappears after opening modal.”
“PDF agenda lacks tagged headings.”
Judges don’t read WCAG. They read effects.
severity matters, but ranking is risky
Some teams rank issues by severity.
That helps internally. It can hurt externally.
A log that labels an issue “low severity” and leaves it unfixed for months invites questions if a user complains.
Safer logs describe impact and remediation plans without dismissive labels.
remediation tracking is the heart of the log
Logs that list failures without follow-up are admissions.
Each issue needs:
Date identified.
Responsible party.
Planned fix.
Actual fix date or explanation for delay.
Unfixed issues aren’t fatal. Unexplained issues are.
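Here is one way to hold those fields together, sketched as a record type. The names and sample values are illustrative. Note the functional description and the absence of a severity label.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class IssueRecord:
    """One tracked accessibility issue. Field names are illustrative."""
    date_identified: date
    description: str          # functional, plain language: what broke, for whom
    location: str             # page, component, or document affected
    owner: str                # responsible party, named
    planned_fix: str
    fix_date: Optional[date] = None     # set when actually fixed
    delay_reason: Optional[str] = None  # required explanation if still open

issue = IssueRecord(
    date_identified=date(2024, 3, 4),
    description="Screen reader does not announce required fields on payment form.",
    location="pay.example.gov/checkout",
    owner="Payments team, vendor ticket open",
    planned_fix="Add programmatic labels and required-field announcements.",
    delay_reason="Awaiting vendor release; follow-up sent 2024-03-18.",
)
```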
an example from a county portal
In 2021, a county government faced a DOJ inquiry over its online tax payment portal.
The county produced monthly logs covering 14 months. The logs showed repeated keyboard failures in a third-party payment iframe.
Each month noted vendor contact attempts and lack of response. Eventually, the county replaced the vendor.
DOJ required remediation but reduced reporting duration. The logs mattered.
The iframe was still broken for months. The paper trail saved time.
third-party tools belong in the log
Many logs quietly omit third-party platforms.
That omission shows up fast. Investigators test them anyway.
Logs should note third-party issues even when fixes aren’t under direct control. That shows awareness.
Blaming vendors without documentation looks evasive.
overlays complicate logs
Accessibility overlays generate reports. Those reports rarely align with WCAG.
Logs that rely on overlay dashboards look shallow. They don’t show structural testing. They don’t show keyboard or screen reader behavior.
Using overlays doesn’t invalidate a log. Relying on them exclusively weakens it.
manual testing must be honest
Some teams overstate manual testing.
Saying “manual testing performed” without details invites skepticism.
Logs should state what was tested manually and what wasn’t. Honesty builds credibility.
Courts don’t punish gaps. They punish misrepresentation.
accessibility statements should match logs
Public accessibility statements often promise monitoring.
Logs are how that promise is proven.
If the statement says “regular audits” and no logs exist, plaintiffs notice.
Consistency between public language and internal records matters.
retention periods are often overlooked
Logs need to be retained.
Many ADA cases reference issues that are years old. Logs that disappear after six months don’t help.
Most government record schedules already require multi-year retention. Accessibility logs should follow that.
Deleting logs looks worse than having bad ones.
who owns the log matters
Logs owned by vendors create risk.
If a vendor disappears, the record disappears.
Best practice is internal ownership, even if vendors contribute data.
Courts prefer records controlled by the defendant.
format matters less than integrity
Logs can live in spreadsheets, ticketing systems, or documents.
Format doesn’t matter. Integrity does.
Time stamps.
No retroactive edits.
Clear authorship.
Anything that looks editable without trace invites doubt.
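One way to make untraceable edits hard is hash chaining: each entry stores a hash of the previous one, so a retroactive change breaks the chain. A minimal sketch, assuming a JSON-lines file; the filename and fields are illustrative, and this is one integrity technique, not a legal requirement.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("compliance_log.jsonl")  # illustrative filename

def append_entry(entry: dict) -> None:
    """Append an entry whose hash covers the previous line, so silent edits are detectable."""
    prev_hash = "0" * 64
    if LOG_PATH.exists():
        lines = LOG_PATH.read_text().splitlines()
        if lines:
            prev_hash = hashlib.sha256(lines[-1].encode()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
        **entry,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")

def verify_chain() -> bool:
    """Recompute the chain; any retroactive edit changes a hash and fails verification."""
    if not LOG_PATH.exists():
        return True
    prev_hash = "0" * 64
    for line in LOG_PATH.read_text().splitlines():
        record = json.loads(line)
        if record["prev_hash"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(line.encode()).hexdigest()
    return True
```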
automation feeds logs, it doesn’t replace them
Automated scans generate data. Logs interpret it.
Dumping scan output into a folder is not logging.
Logs summarize what matters and connect it to action.
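Here is a sketch of that interpretation step, assuming a hypothetical scan export: a JSON list of findings with rule, url, and description fields. Real tool output differs. The grouping is the point, not the schema.

```python
import json
from collections import Counter

def summarize_scan(path: str) -> list[str]:
    """Collapse raw scan output into summary lines a monthly log can act on, one per failing rule."""
    with open(path) as fh:
        findings = json.load(fh)  # assumed schema: list of {"rule", "url", "description"}
    counts = Counter(item["rule"] for item in findings)
    summary = []
    for rule, count in counts.most_common():
        example = next(item for item in findings if item["rule"] == rule)
        summary.append(
            f"{rule}: {count} occurrences, e.g. {example['url']} ({example['description']})"
        )
    return summary
```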
common mistakes that sink logs
Backdating entries.
Changing scope without noting it.
Removing resolved issues entirely.
Only logging successes.
Using generic language month after month.
These patterns appear in litigation. They hurt.
logs during active litigation
Once litigation starts, logs become discoverable.
Teams often freeze logging out of fear. That’s a mistake.
Stopping logs looks like avoidance. Continuing logs shows maintenance.
Defense counsel usually advises teams to keep logging, with legal review.
cost and labor trade-offs
Maintaining logs takes time.
Someone has to review results, write summaries, track fixes. That’s labor.
Small organizations struggle here. That’s the usual criticism of logging requirements.
Courts don’t waive documentation because it’s inconvenient.
Some teams reduce scope to manage cost. That’s acceptable if disclosed.
monthly logs versus audits
Audits are snapshots. Logs are timelines.
Audits catch issues. Logs show what happened after.
Courts value timelines more than snapshots.
logs don’t prove compliance
This needs clarity.
Logs do not prove a site is accessible. They prove effort and maintenance.
Plaintiffs can still win cases against teams with perfect logs.
Logs affect remedies, not liability.
why logs change settlement outcomes
When logs show:
Issues identified quickly.
Fixes attempted.
Patterns improving.
Settlements often focus on future monitoring rather than penalties.
When logs show silence or repetition, settlements grow teeth.
accessibility debt shows up in logs
Logs reveal debt.
Repeated failures in the same components.
Long delays for the same issue types.
Dependence on outdated platforms.
That can hurt in court. It also guides remediation planning.
Ignoring what logs reveal wastes their value.
mobile apps belong in the same log
Many teams keep separate records for apps.
That fragments defense.
Investigators treat apps and websites together. Logs should too.
If an app isn’t logged, it looks forgotten.
emergency content deserves special logging
Emergency alerts and public notices carry higher scrutiny.
Logs should note testing of these channels specifically.
Failing here escalates enforcement quickly.
logs don’t replace policy
Policies set intent. Logs show execution.
Logs without policy look ad hoc. Policies without logs look hollow.
Courts expect both.
what a judge actually sees
Judges don’t read every entry.
They look at patterns.
Do logs exist before the complaint?
Do issues persist without explanation?
Does effort increase after notice?
Logs tell that story without argument.
the limitation nobody likes
Logs create evidence. Evidence can be used against you.
A log that admits failures exists.
The alternative is no record at all. That’s worse.
Courts prefer honest records over silence.
the real function of the monthly compliance log
It preserves memory.
It documents effort.
It narrows disputes.
It shapes remedies.
It survives staff turnover.
It does not make a site accessible. It does not stop complaints.
It makes your position legible.