Why DSAR Redaction Still Goes Wrong: The 7 Mistakes That Trigger Complaints and ICO Scrutiny
Redaction failures are one of the most common reasons DSAR responses end up in complaints, disputes, and ICO investigations. And contrary to what many organisations assume, the biggest risks don’t come from dramatic leaks or major oversights. They come from small, repeated patterns that, when examined together, look like systemic governance failures rather than one-off human error.
With DUAA reinforcing proportionality and clarity obligations, and the ICO sharpening expectations around accuracy, fairness, and explainability (especially where AI is involved), organisations can no longer ignore these recurring blind spots. Below are the seven redaction mistakes DSAR teams still make—and how to prevent them.
1. Name Fragments Left in Email Chains, Chats, and Case Notes
This is the single most common redaction failure: first names, initials, or nicknames left behind in email threads, collaboration tools, tracked changes, OCR layers, or informal case notes.
Example: a university redacts an academic’s name from formal correspondence but leaves “Thanks, Mike” inside a Teams conversation, re-identifying the staff member.
Why it happens
- Keyword-only redaction that misses variants or nicknames
- Different redaction methods used across different systems
- No centralised list of third-party name variations
How to prevent it
- Build a per-case entity dictionary covering all name variations, initials, and role descriptors
- Apply consistent entity logic across all systems and formats
- Conduct a separate review focused only on informal conversational content
- Document any intentional exceptions with rationale
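A per-case entity dictionary can be as simple as a compiled pattern over every known variant. A minimal sketch in Python, using only the standard library; the names and variants below are hypothetical, and a real workflow would feed the match list into human review rather than act on it automatically:

```python
import re

def build_entity_pattern(variants):
    """Compile one case-insensitive pattern covering every known
    variant of a third party's name: full name, first name,
    initials, nicknames, role descriptors."""
    # Longest variants first, so "Michael Smith" wins over "Mike".
    escaped = sorted((re.escape(v) for v in variants), key=len, reverse=True)
    return re.compile(r"\b(?:" + "|".join(escaped) + r")\b", re.IGNORECASE)

def find_residual_mentions(text, variants):
    """Return any variants still present in supposedly redacted text."""
    pattern = build_entity_pattern(variants)
    return sorted({m.group(0).lower() for m in pattern.finditer(text)})

# Hypothetical per-case dictionary for the "Thanks, Mike" example above.
variants = ["Michael Smith", "Mike", "M. Smith", "Head of Department"]
leftover = find_residual_mentions("Redacted body ... Thanks, Mike", variants)
# leftover == ["mike"], flagging the fragment for the informal-content review
```

Running the same dictionary over every format in the case (emails, chat exports, OCR output) is what catches the signature fragments that keyword-by-keyword redaction misses.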
2. Inconsistent Redaction Across Systems
A council redacts a social worker’s name in the case management export but leaves the same name visible in a scanned note or court bundle. An NHS Trust redacts clinicians in letters but not in MDT minutes.
Regulators view these inconsistencies as indicators of weak governance.
Why it happens
- Different teams working in silos
- System-based redaction processes instead of requester-based workflows
- Lack of unified redaction rules for the case
How to prevent it
- Create one redaction plan per DSAR that applies across all datasets
- Perform cross-system sampling checks on selected third parties
- Narrow scope using DUAA clarification powers when needed
3. Hidden Metadata and Invisible Identifiers
Even when visible text is removed, identifying information often remains in:
- PDF metadata
- OCR layers
- Document properties
- Comments and tracked changes
- Embedded objects and previous versions
Example: a hospital removes a radiologist’s name from a letter but leaves it accessible in the underlying PDF text layer.
Why it happens
- Tools that draw black boxes instead of permanently removing content
- Applying format conversions after redaction
- Not checking metadata, comments, and hidden layers
How to prevent it
- Mandate tools that remove data at source
- Follow a strict sequence: export → OCR → redact → validate → create final locked output
- Never convert formats after the validated stage
- Include metadata checks in the default redaction checklist
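A metadata check at the validate step can start very crudely: scan the raw bytes of the output PDF for document-information keys that often survive black-box redaction. This is a naive sketch with an illustrative key list, not a substitute for a proper PDF library that also inspects the OCR text layer, comments, and embedded objects:

```python
# Metadata keys that commonly carry names after a visual-only redaction.
# The list is illustrative, not exhaustive.
SUSPECT_KEYS = [b"/Author", b"/Title", b"/Subject", b"/Keywords", b"/Producer"]

def flag_pdf_metadata(raw_pdf_bytes):
    """Return the suspect metadata keys found in the raw PDF bytes.
    Any hit means the file needs a deeper inspection before dispatch."""
    return [k.decode() for k in SUSPECT_KEYS if k in raw_pdf_bytes]

# Minimal stand-in for the radiologist example above.
sample = b"%PDF-1.7\n1 0 obj\n<< /Author (Dr A. Radiologist) /Title (MRI report) >>"
print(flag_pdf_metadata(sample))  # ['/Author', '/Title']
```

Because the check runs on the final locked output, it also catches regressions introduced by any format conversion performed after redaction.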
4. Contextual Information That Re-Identifies People
Names may be removed, but identity can still be obvious through:
- Roles
- Dates
- Locations
- Unique characteristics
Example: “Head of Classics who joined in 2019” clearly identifies an individual in a small department.
Why it happens
- Over-focus on fixed identifiers (names, IDs)
- Ignoring contextual identifiers
- Misinterpretation of Article 15 as granting a right to literal wording
How to prevent it
- Include contextual identifiers in the redaction risk assessment
- Replace highly specific descriptions with less identifying alternatives where appropriate
- Record the reasoning to defend decisions if challenged
5. AI Over-Redaction and Fairness Risks
AI-driven redaction tools can introduce errors of their own, including:
- Redacting the requester’s own personal data
- Removing entire paragraphs around misinterpreted identifiers
- Applying redactions inconsistently across similar documents
The ICO links this to fairness, accuracy, and meaningful human oversight.
How to prevent it
- Treat AI as assistive, not authoritative
- Require human confirmation for all AI-proposed redactions
- Conduct sampling to tune models and reduce false positives
- Maintain an AI-use register documenting oversight and performance monitoring
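The sampling step above has a simple quantitative core: track how often human reviewers reject AI-proposed redactions. A minimal sketch, assuming each review is recorded as a `(ai_proposed, human_confirmed)` pair; the data structure is illustrative:

```python
def false_positive_rate(reviewed):
    """reviewed: list of (ai_proposed: bool, human_confirmed: bool) pairs.
    Returns the share of AI-proposed redactions the human reviewer
    rejected -- a simple signal for tuning the model and a figure
    worth recording in the AI-use register."""
    proposed = [pair for pair in reviewed if pair[0]]
    if not proposed:
        return 0.0
    rejected = sum(1 for _, confirmed in proposed if not confirmed)
    return rejected / len(proposed)

# Three AI proposals, two rejected by the reviewer, one human-only redaction.
sample = [(True, True), (True, False), (True, False), (False, True)]
rate = false_positive_rate(sample)  # 2/3: over-redaction worth investigating
```

A rising rate over successive samples is exactly the kind of performance-monitoring evidence the AI-use register should capture.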
6. Undocumented Judgement Calls and No Audit Trail
Most DSAR disputes escalate not because the organisation made the wrong decision, but because it cannot prove why the decision was made.
Common issues include:
- No rationale linked to exemptions
- No escalation notes
- Different thresholds used by different handlers
- No record of how mixed-data content was assessed
How to prevent it
- Record rationale for every sensitive redaction category
- Link redactions to specific exemptions or legal bases
- Require peer review for high-risk cases
- Include decisions in a defensible DSAR file
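In practice, "record the rationale" means every redaction gets a small structured record linking it to an exemption, a handler, and a reviewer. A minimal sketch; the field names and the example values are hypothetical, and a real system would persist these records in the DSAR case file:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RedactionDecision:
    """One auditable judgement call in a DSAR response."""
    document_ref: str   # which document and where
    span: str           # description of what was redacted, not the content itself
    exemption: str      # the legal basis relied on
    rationale: str      # why the threshold was met in this case
    handler: str
    reviewed_by: str = ""  # peer reviewer for high-risk cases
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

decision = RedactionDecision(
    document_ref="CASE-042/email-017",
    span="third-party name in signature block",
    exemption="UK GDPR Art. 15(4) third-party data",
    rationale="Disclosure would identify the complainant; no consent obtained",
    handler="handler-A",
)
```

Because each record names its exemption and handler, different thresholds across handlers become visible in review rather than surfacing first in an ICO complaint.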
7. Last-Mile Packaging and Dispatch Errors
Many breaches happen at the final stage, including:
- Sending unredacted originals instead of redacted versions
- Misaddressing secure files
- Accidentally including a working draft
- Exposing internal folders through incorrect sharing links
Why it happens
- Packaging treated as admin instead of a risk point
- No formal DSAR dispatch checklist
- Case owners hand off too early
How to prevent it
- Implement a DSAR exit gate checklist including:
  - Only validated redacted versions included
  - Encryption and authentication confirmed
  - Delivery method verified
  - Requester identity reconfirmed
- Treat packaging as part of redaction, not separate
- Use authenticated portals or controlled download links
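The exit gate above works as an all-or-nothing check: dispatch is blocked unless every item passes, and the failures are recorded. A minimal sketch, with the gate names taken from the checklist above:

```python
def exit_gate(checks):
    """checks: dict mapping gate name -> bool (passed).
    Returns (dispatch_allowed, list_of_failed_gates). Dispatch is
    allowed only when every gate passes; failures go back to the
    case owner rather than out the door."""
    failures = [name for name, passed in checks.items() if not passed]
    return (len(failures) == 0, failures)

ok, failures = exit_gate({
    "only_validated_redacted_versions": True,
    "encryption_and_authentication_confirmed": True,
    "delivery_method_verified": True,
    "requester_identity_reconfirmed": False,
})
# ok is False; failures == ["requester_identity_reconfirmed"]
```

Logging the gate result alongside the redaction decisions keeps the last mile inside the same audit trail as the redaction work itself.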
Raising the Bar: Auditability, Explainability, Human Oversight
Organisations that avoid ICO scrutiny share three characteristics:
Auditability
Every redaction can be traced to a rule, rationale, or exemption with clear evidence.
Explainability
Teams can explain how decisions were made, including where AI was involved and how human judgement was applied.
Human Oversight
Tools assist, but humans decide. DSAR teams understand the limits of automation and apply proportionality and fairness deliberately.
The DSAR landscape continues to evolve, but the essentials remain unchanged: accuracy, consistency, and transparent decision-making. Organisations that treat these seven failure modes as design inputs, not occasional mistakes, will be significantly better prepared when the next DSAR complaint reaches the ICO.


