We are using AI more and more, and one of the most common questions I get is how to protect data privacy when interacting with large language models (LLMs). A key concern is whether our data could be used to train these models.
To address this, here’s how you can turn off the training mode in some of the most popular LLMs:
✅ ChatGPT: Disable chat history and training in the settings.
✅ Claude: By default, it will not use your inputs or outputs to train its models, except in a few specific cases.
✅ Gemini: Go to Activity and turn off App Activity, or delete your activity.
✅ Perplexity: Turn off “AI Data Retention” in Settings.
✅ Copilot: Turn off the “Model Training” setting in Privacy.
Another practical step is to anonymise your data—use dummy data, or remove names and other sensitive information, before uploading documents or pasting text into these models. That way your information stays private while you still get full use of the AI tools.
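To make the anonymisation step concrete, here is a minimal Python sketch that scrubs emails and phone numbers with regular expressions before text is sent to a model. The patterns and placeholder labels are illustrative assumptions, not a complete solution—names and other identifiers generally need a dedicated PII/NER tool.

```python
import re

# Illustrative patterns only; real anonymisation needs a proper PII tool.
# Note: personal names are NOT caught by simple regexes like these.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymise(text: str) -> str:
    """Replace each matched pattern with a bracketed placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymise("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
# → Contact Jane at [EMAIL] or [PHONE].
```

Keeping the placeholders labelled (rather than deleting the text) lets you map the model’s answer back to the original values afterwards.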
Lastly, thanks to Gary Marcus for reposting an insightful post by Ahmad Shady today about disabling the “experience content” setting in Microsoft documents. You can check out his post here for more details: https://lnkd.in/g_sRq2s2
