What If Moderation Didn't Mean Suppression?
A Case for Personalized Content Transformation

ACM CHI Conference on Human Factors in Computing Systems (CHI 2026)
Best Poster Award (Michigan HCAI 2025)
Best Application of AI (Michigan AI Symposium 2025)
Rayhan Rashed
Farnaz Jahanbakhsh
University of Michigan
Teaser

DIY-MOD enables personalized content transformation based on individual sensitivities. A user recovering from an eating disorder creates a filter (a) specifying that food imagery triggers harmful thoughts. When browsing Reddit’s original feed (center), DIY-MOD identifies matching posts and applies context-appropriate transformations (right): (b) semantic inpainting, which preserves the social dining context while obscuring food details; and (c) stylistic alteration, which renders the food bowl in an impressionist painting style, obscuring its details. A transparency indicator (d) marks all modified content. The system transforms only content matching the user’s filters and leaves other posts unchanged.
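For readers curious how such a filter might be represented internally, here is a minimal, hypothetical sketch; the field names and structure are illustrative assumptions, not DIY-MOD’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SensitivityFilter:
    """Hypothetical user-defined filter; names are illustrative, not DIY-MOD's schema."""
    name: str                     # short label, e.g. the filter shown in panel (a)
    description: str              # free-text description of what is triggering and why
    content_types: list[str] = field(default_factory=lambda: ["image", "text"])
    mark_modified: bool = True    # show the transparency indicator (panel d) on transformed posts

food_filter = SensitivityFilter(
    name="food imagery",
    description="I'm recovering from an eating disorder; food pictures trigger harmful thoughts.",
)
```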

Abstract

The centralized content moderation paradigm both falls short and over-reaches: 1) it fails to account for the subjective nature of harm, and 2) it responds to content deemed harmful with blunt suppression, even when that content can be salvaged. We first investigate this through formative interviews, documenting how seemingly benign content becomes harmful due to individual life experiences. Based on these insights, we developed DIY-MOD, a browser extension that operationalizes a new paradigm: personalized content transformation. Operating on a user's own definition of harm, DIY-MOD transforms sensitive elements within content in real time instead of suppressing the content itself. The system selects the most appropriate transformation for a piece of content from a diverse palette—from obfuscation to artistic stylizing—to match the user's specific needs while preserving the content's informational value. Our two-session user study demonstrates that this approach increases users' sense of agency and safety, enabling them to engage with content and communities they previously needed to avoid.

System Demo

Watch DIY-MOD in action: creating personalized filters and seeing real-time content transformation on a Reddit feed.

Transformation Palette & Trade-offs

Every transformation navigates trade-offs between three competing dimensions:

  • Semantic Fidelity — How faithful is the result to the original meaning?
  • Trigger Fidelity — How much of the distressing element remains? (Lower = Safer)
  • Perceptual Smoothness — How natural does the result look?
Interactive demo: for the example filter description "I have an eating disorder that I'm trying to manage. Food pictures trigger cravings and obsessive thoughts I can't control.", the page shows three transformations (Obfuscation, Semantic Modification, Altering Render Style), each rated on Semantic Fidelity, Trigger Fidelity, and Perceptual Smoothness.

These are selected examples; see the paper for the full palette of transformations.
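As a rough illustration of how these three dimensions might be traded off when choosing among candidate transformations, here is a minimal sketch. The weighted-sum scoring, the weights, and the example ratings are all assumptions for illustration, not the paper's actual selection rule.

```python
from dataclasses import dataclass

@dataclass
class TransformationCandidate:
    name: str
    semantic_fidelity: float      # 0-1: how faithful the result is to the original meaning
    trigger_fidelity: float       # 0-1: how much of the distressing element remains (lower = safer)
    perceptual_smoothness: float  # 0-1: how natural the result looks

def score(c: TransformationCandidate,
          w_meaning: float = 0.4, w_safety: float = 0.4, w_smooth: float = 0.2) -> float:
    """Hypothetical weighted score: safety rewards *low* trigger fidelity."""
    return (w_meaning * c.semantic_fidelity
            + w_safety * (1.0 - c.trigger_fidelity)
            + w_smooth * c.perceptual_smoothness)

# Example ratings are made up to show the trade-off, not measured values.
candidates = [
    TransformationCandidate("obfuscation", 0.3, 0.05, 0.5),
    TransformationCandidate("semantic modification", 0.85, 0.3, 0.9),
    TransformationCandidate("altering render style", 0.7, 0.4, 0.8),
]
best = max(candidates, key=score)
```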

System Architecture

Pipeline

DIY-MOD's two-stage intervention selection pipeline. Content matching user filters enters Stage 1 (pruning) to identify promising interventions, then Stage 2 generates and scores K candidates before selecting the best transformation. Non-matching content bypasses the pipeline entirely.
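A minimal sketch of that two-stage flow is below; the helper functions are hypothetical stand-ins for the system's model-backed components, not DIY-MOD's actual API.

```python
# --- Hypothetical stand-ins for DIY-MOD's model-backed components ---

def matches(post, flt) -> bool:
    """Stub: does this post contain elements the filter describes?"""
    return any(tag in post["tags"] for tag in flt["tags"])

def prune_palette(post, filters) -> list[str]:
    """Stage 1 stub: keep only interventions promising for this post/filter pair."""
    return ["obfuscation", "semantic modification", "altering render style"]

def generate_candidate(post, intervention) -> dict:
    """Stage 2 stub: produce one transformed version of the post."""
    return {"post": post, "intervention": intervention}

def score_candidate(candidate, filters) -> float:
    """Stage 2 stub: rate a candidate (e.g., along the trade-off dimensions above)."""
    return 1.0 if candidate["intervention"] == "semantic modification" else 0.5

def select_transformation(post, user_filters, k: int = 4):
    """Sketch of the two-stage intervention selection described in the caption."""
    matching = [f for f in user_filters if matches(post, f)]
    if not matching:
        return post  # non-matching content bypasses the pipeline entirely

    # Stage 1 (pruning): narrow the palette to promising interventions
    promising = prune_palette(post, matching)

    # Stage 2: generate K candidates, score them, and keep the best transformation
    candidates = [generate_candidate(post, iv) for iv in promising[:k]]
    return max(candidates, key=lambda c: score_candidate(c, matching))
```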

Findings & Implications

A Newfound Sense of Agency

Users felt empowered and safer. One participant noted:

“The ball is now in my court.” — P9

Moving from passive censorship to active remodeling gave users control over their digital environment.

Grassroots Filter Sharing

Participants wanted to share filter configurations with trusted others—particularly for intergenerational care (e.g., helping a parent with a shared phobia). This points toward community-driven safety standards that reduce individual setup burden.

Platform Integration

Platforms already have the infrastructure for personalization (ad targeting, safety interventions). Integrating transformation-based moderation could help users stay engaged with communities they would otherwise abandon—safer users are more active users.

Alignment with User Preference

Participants consistently chose the system’s top-ranked candidate. For image interventions, the preference was significant (log-odds = 0.53, p = .033); for text interventions, the alignment was even stronger (log-odds = 1.33, p < .001).
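To make the reported log-odds easier to interpret: assuming they are fixed-effect coefficients on a standard logistic scale, they convert to rough probabilities of choosing the system’s top candidate as follows (a back-of-the-envelope conversion, not figures reported in the paper).

```python
import math

def logodds_to_prob(beta: float) -> float:
    """Logistic transform: log-odds -> probability."""
    return 1.0 / (1.0 + math.exp(-beta))

print(round(logodds_to_prob(0.53), 2))  # ~0.63 for image interventions
print(round(logodds_to_prob(1.33), 2))  # ~0.79 for text interventions
```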

Future Directions

Therapeutic Applications

Adjustable sensitivity and time-bounded filters echo graduated exposure therapy. What if users could dial down protection as they build resilience—turning everyday browsing into structured recovery?
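As a purely hypothetical sketch of what such a time-bounded filter could look like (not a DIY-MOD feature): a filter whose protection level eases from full strength toward zero over a user-chosen recovery window.

```python
from datetime import date

def protection_level(start: date, end: date, today: date,
                     initial: float = 1.0, floor: float = 0.0) -> float:
    """Hypothetical time-bounded filter: protection decays linearly over the
    window, loosely echoing graduated exposure. Not part of DIY-MOD."""
    if today <= start:
        return initial
    if today >= end:
        return floor
    progress = (today - start).days / (end - start).days
    return initial + (floor - initial) * progress

# Example: halfway through a 90-day window the filter runs at half strength.
level = protection_level(date(2026, 1, 1), date(2026, 4, 1), date(2026, 2, 15))  # 0.5
```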

Civic & Political Discourse

Valuable perspectives often hide behind hostile framing. Could softening confrontational language—while preserving core arguments—help people engage across ideological divides instead of retreating to echo chambers?

Citation

@article{rashed2025diymod,
  title={What If Moderation Didn't Mean Suppression? A Case for Personalized Content Transformation},
  author={Rashed, Rayhan and Jahanbakhsh, Farnaz},
  journal={arXiv preprint arXiv:2509.22861},
  year={2025}
}