Document Type
Conference Paper
Publication Date
2026
DOI
10.1145/3772318.3791809
Publication Title
CHI '26: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems
Pages
Article 1638, pp. 1-25
Conference Name
CHI 2026: CHI Conference on Human Factors in Computing Systems, April 13-17, 2026, Barcelona, Spain
Abstract
Large language models (LLMs) are increasingly deployed, yet they introduce significant privacy risks by disclosing personally identifiable information (PII) during interactions. Although prior work has demonstrated the feasibility of extracting PII from LLMs, no comprehensive study has evaluated the actual extent of PII leakage across mainstream LLMs or investigated user perceptions, literacy, and behavioral responses to these risks. To address these gaps, we conduct a large-scale evaluation of PII leakage in popular LLMs, demonstrating that attackers can extract email addresses and phone numbers with high success rates. Through a mixed-methods study involving 20 interviews and 204 survey participants, we identify significant discrepancies between user concerns and behavior: despite strong concerns about PII leakage and limited understanding of training data provenance, users continue to use LLMs due to perceived utility, often exhibiting privacy cynicism. Based on these findings, we propose design implications for enhancing the privacy-utility balance in future LLM deployments.
Rights
© 2026 Copyright held by the owner/author(s).
This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.
Original Publication Citation
Cheng, S., Xu, H., Meng, S., Hao, S., Yue, C., & Zhao, L. (2026). The privacy paradox of LLMs: User perceptions and the reality of PII leakage. In N. Oliver, D. A. Shamma, H. Candello, P. Cesar, P. Lopes, A. Bozzon, T. Kosch, V. Liao, X. Ma, V. Artizzu, F. Draxler, G. López, A. V. Reinschluessel, X. Tong, & P. O. Toups Dugas (Eds.), CHI '26: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (Article 1638). Association for Computing Machinery. https://doi.org/10.1145/3772318.3791809
Repository Citation
Cheng, S., Xu, H., Meng, S., Hao, S., Yue, C., & Zhao, L. (2026). The privacy paradox of LLMs: User perceptions and the reality of PII leakage. In N. Oliver, D. A. Shamma, H. Candello, P. Cesar, P. Lopes, A. Bozzon, T. Kosch, V. Liao, X. Ma, V. Artizzu, F. Draxler, G. López, A. V. Reinschluessel, X. Tong, & P. O. Toups Dugas (Eds.), CHI '26: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (Article 1638). Association for Computing Machinery. https://doi.org/10.1145/3772318.3791809
ORCID
0000-0001-7483-5252 (Hao)