
LinkedIn to use users’ personal data to train AI models

LinkedIn uses account holders' data to train its AI models, according to the company's updated privacy policy, which went into effect on September 18. The policy states that the company also relies on user data to develop, deliver, and personalize its services with AI, and that it uses people's interactions with its generative AI features for training purposes. It adds that whenever it relies on user data for training, it seeks to minimize the personal information contained in that data, using privacy-enhancing technologies to redact or remove personal data. The EU, the UK, the European Economic Area, and Switzerland are the only regions where LinkedIn does not currently use user data for training.

Users can opt out of having their personal data and the content they create used to train generative AI. Until they do, however, their data remains available for training. Opting out means that LinkedIn and its affiliates will not use a user's personal data or content on LinkedIn to train AI models in the future, but it does not affect the models they have already trained on that data. The company retains people's data for training purposes until it deletes it. Users can see what data LinkedIn used for training through its data access tool.

Concerns about the silent opt-in approach

The main problem with this approach is that it does not give users the opportunity to give informed consent: users are not explicitly told that their data will be used for AI training. The advocacy group noyb (None of Your Business) has flagged this exact concern in the context of Meta, which also follows a silent opt-in approach to training its AI models. In early June, noyb filed complaints against Meta in several EU countries for using people's data since 2007 to train AI models. This data includes things like posts or photos and their captions.

Another problem is that by only allowing users to opt out, LinkedIn and Meta shift the burden of action onto their users. They assume consent by default, so a user who isn't actively monitoring for updates to the companies' privacy policies won't know that their data is being used to train AI models. Additionally, while the tech giants tell users what data they're using to train models, they don't say which specific AI models that data contributes to or how those models are applied. This again points to a lack of transparency in the approach to using people's data for training purposes.

Indian data protection laws regarding use of personal data for training AI models

While India's Digital Personal Data Protection Act (DPDPA, 2023) does not specifically mention AI, it allows companies to process publicly available personal data without consent and without adhering to the provisions of the law. This means that companies are not necessarily required to give users an option to opt out before using their data to train AI models. For example, Meta, which also uses data from Indian users to train its models, does not provide an opt-out option to users in the country.
