Slashdot: Anthropic Will Start Training Its AI Models on Chat Transcripts

Source URL: https://yro.slashdot.org/story/25/08/28/1643241/anthropic-will-start-training-its-ai-models-on-chat-transcripts?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: Anthropic Will Start Training Its AI Models on Chat Transcripts

Feedly Summary:

AI Summary and Description: Yes

Summary: Anthropic has announced a new policy regarding the use of user data for training its AI models, which now includes chat transcripts and coding sessions. Users must decide whether to opt out by September 28th; if they do not, their data will be retained for up to five years, a change with significant privacy implications for AI data usage.

Detailed Description: Anthropic’s decision to train its AI models on user data, including ongoing and new chat transcripts and coding sessions, introduces important considerations for privacy and data protection. Professionals in the fields of AI, cloud security, and data compliance should take note of several significant points:

– **Data Usage**: The new policy allows Anthropic to utilize user-generated data for training purposes unless the user opts out.
– **Data Retention**: Anthropic will retain user data for up to five years, extending the period during which user privacy may be affected.
– **Opt-Out Deadline**: Users must decide whether to opt out by September 28th, leaving a limited window for a decision that may affect data security.
– **Scope of Data**: The policy applies only to new or resumed chat and coding sessions, excluding data from previous interactions unless resumed by the user.
– **User Consent**: By clicking “Accept,” users consent to immediate use of their data, making clear communication about data practices essential.

This development underscores the critical need for individuals and organizations to be aware of the data policies of AI providers, particularly as they pertain to user-generated content. For compliance professionals, this means ensuring that user consent processes are transparent and that users are fully informed of their options regarding data usage in AI systems.