Claude is Anthropic's AI assistant. You can chat with it, ask for coding help, or have it analyze documents and data for you. Until now, Anthropic promised not to use your conversations to train its AI models.
That is changing. Starting October 8, 2025, Anthropic will use your chats with Claude to train its future AI models unless you tell it not to. This applies to anyone with a Free, Pro, or Max account. If you want to help improve Claude, you can opt in, and your chats and coding sessions will be kept for up to five years. If you'd rather keep things private, you can opt out, and your data will be kept for only 30 days, as before. Old conversations won't be used for training unless you reopen them.

If you use Claude through a work, school, or government account, or through the API, none of this applies to you; those accounts have different rules.
So what does this mean for you? If you want to keep using Claude, you have to make a choice by October 8, 2025. If your conversations touch on anything private or sensitive, it's worth checking your settings now. To opt out, go to Privacy Settings and turn off 'Help improve Claude.' If you opt in, your chats will help make Claude smarter and safer, but they'll be kept for up to five years. You can change your mind later, though turning the setting off only affects your future chats, and you can delete any individual conversation you don't want used for training.