[r/LocalLLaMA]
🛡️ Shield 82M: A PII stripping/filtering model 🛡️
April 25, 2026
**Shield 82M is an 82M-parameter NER-based PII redaction model fine-tuned from `distilroberta-base`, designed to detect entities such as names, emails, phone numbers, and addresses and replace them with labeled placeholders across multiple languages.** The model reports ~96% accuracy on PII detection tasks and demonstrates cross-lingual capability (tested on English and French) without language-specific fine-tuning. At 82M parameters, it is lightweight enough for on-device or self-hosted preprocessing pipelines, making it relevant for teams building GDPR-compliant data handling workflows or sanitizing training corpora before feeding them into larger LLMs.
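As a rough illustration of the redaction step, here is a minimal sketch of how NER output gets turned into placeholder-substituted text. The entity-span format (`start`/`end` character offsets plus an `entity_group` label) mirrors what a Hugging Face token-classification pipeline emits with span aggregation enabled; the specific label names and the sample spans below are assumptions for illustration, not Shield 82M's actual label set or API.

```python
def redact(text: str, entities: list[dict]) -> str:
    """Replace detected PII spans with [LABEL] placeholders.

    `entities` is a list of dicts with `start`, `end`, and
    `entity_group` keys, as an aggregated NER pipeline would return.
    (Label names here are hypothetical, not Shield 82M's real schema.)
    """
    # Replace right-to-left so earlier character offsets stay valid
    # as the string length changes.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        placeholder = f'[{ent["entity_group"]}]'
        text = text[:ent["start"]] + placeholder + text[ent["end"]:]
    return text


sample = "Contact Jane Doe at jane@example.com."
spans = [  # hypothetical model output for the sample sentence
    {"start": 8, "end": 16, "entity_group": "NAME"},
    {"start": 20, "end": 36, "entity_group": "EMAIL"},
]
print(redact(sample, spans))  # → Contact [NAME] at [EMAIL].
```

In a real pipeline the `spans` list would come from running the model (e.g. via `transformers.pipeline("token-classification", ..., aggregation_strategy="simple")`), and the redacted text could then be logged, stored, or passed downstream to a larger LLM.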
new model