It probably goes without saying at this point, but your interactions with AI chatbots are not private – everything you type or upload to Gemini, ChatGPT, and other models can be read and used in a variety of ways. If you wouldn't send a document or repeat a piece of information to someone you don't know, you shouldn't include it in a chatbot prompt either.
Stanford researchers reviewed the privacy policies of six US companies developing the most popular AI chatbots, including Claude, Gemini, and ChatGPT, and found that all of them use chat data for training purposes by default. Some retain that data indefinitely, and most combine it with other information collected about consumers, such as search queries and purchases. In most cases you can opt out of having your data used to train the LLM, but chats can also be read by human reviewers, and long-term retention policies increase the risk of your stored information being leaked.
If you’re going to use an AI chatbot, these are the things you should avoid sharing:
- Login credentials: Obviously, you should never paste usernames and passwords into prompts, and that includes uploading documents that contain login credentials. AI is also weak at generating secure passwords – use your password manager's generator instead (see the first sketch after this list), or even better, opt for passkeys where available.
- Financial data: AI chatbots are not financial experts, and you shouldn't upload documents or include details about your personal finances in prompts. That covers bank details, credit card numbers, investment information, account numbers and balances, and the like. Sharing financial details anywhere that isn't secure increases the risk of theft, fraud, and targeting by scammers.
- Medical records: AI chatbots are also not medical professionals and should not be trusted for medical advice. You probably wouldn't want your medical records to be used to train an LLM – plus, uploading them risks exposure in a data breach.
- Personally identifiable information (PII): AI prompts should never include information like your name, address, email, phone number, date of birth, Social Security number, passport number, or any other data that could be used to steal your identity. (Financial information and medical records are also considered sensitive PII.)
- General health information: In addition to keeping your sensitive medical records private, you should avoid giving chatbots health information that could be used to profile you. For example, the Stanford report notes that AI chatbots may be able to infer a health condition from something as innocuous as a request for heart-healthy dinner recipes, and that inference could eventually become accessible to insurance companies. The same goes for topics such as sexual health, medication use, and gender-affirming care.
- Mental health concerns: Another thing your chatbot is not is a therapist. When it comes to mental health, AI has been useless at best and harmful at worst. Even with updates made to protect users in crisis, chatbots are not a replacement for real, human assistance.
- Photos: AI image editing is popular, but that doesn't mean it's risk-free. You may not want your personal photos used for training purposes, and image metadata can include details like your GPS location. At a minimum, avoid uploading photos of people (especially minors), and consider stripping EXIF data before sharing (see the second sketch after this list).
- Company documents: AI can be useful for summarizing documents, creating presentations, drafting emails, and completing other work-related tasks more quickly, but you should use caution when uploading files containing sensitive company information to a chatbot. Your employer may also have a policy prohibiting this.
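
If you want a generator outside your password manager, a few lines of standard-library Python will do the job locally, with nothing ever sent to a chatbot. This is a minimal sketch: the 20-character length and full ASCII alphabet are illustrative choices, not a security standard.

```python
# Minimal local password generator – runs entirely on your machine.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from cryptographically secure random choices."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```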
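
Stripping EXIF data is also easy to do locally. Here's a minimal sketch using the Pillow library (installed with `pip install Pillow`); the file names are placeholders. It copies only the pixel data into a fresh image, which drops all embedded metadata, including GPS coordinates.

```python
# Minimal EXIF-stripping sketch using Pillow; file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, dropping EXIF (GPS, camera info)."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copies pixels, not metadata
        clean.save(dst_path)

strip_metadata("vacation.jpg", "vacation_clean.jpg")
```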
The bottom line is that you should be careful about what you share with AI chatbots – assume that everything in your prompts is stored and that someone else can read it. Avoid anything personal or identifiable, and enable all available privacy settings (such as data-sharing and training opt-outs).
