LinkedIn has halted AI model training in the UK after the data privacy watchdog and experts raised privacy and ethical concerns.
LinkedIn has temporarily stopped using UK-based data to train generative AI models. The decision follows concerns raised by the Information Commissioner’s Office (ICO) about user privacy. Stephen Almond, the ICO’s Executive Director for Regulatory Risk, said the suspension comes as LinkedIn re-evaluates its practices regarding AI training.
“We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO,” Almond said. The move comes after LinkedIn faced scrutiny for quietly using user data in ways that some see as a breach of privacy, particularly in relation to the AI models driving new platform features.
Almond added that the ICO was pleased to see LinkedIn reflect on the concerns it had raised about the company’s approach to training generative AI models on information relating to its UK users. However, British cybersecurity expert Kevin Beaumont said it was his official complaint that might have done the trick. “I complained to the ICO about this, who have forced LinkedIn to discontinue training AI on UK user data,” Beaumont wrote on Mastodon.
The Role of AI in LinkedIn’s Features
Generative AI, which LinkedIn has used for writing suggestions and content creation, relies heavily on data. This data includes personal information like profile details, user interactions and post content. In the process of training these models, LinkedIn aims to improve its features, offering users writing prompts or even post recommendations.
However, the inclusion of personal data in AI training has raised questions about privacy and about how much control users have over their own information.
“We shouldn’t have to take a bunch of steps to undo a choice that a company made for all of us,” said Rachel Tobac, CEO of SocialProof Security and a member of the CISA Technical Advisory Council. “Organizations think they can get away with auto opt-in because ‘everyone does it.’ If we come together and demand that orgs allow us to CHOOSE to opt-in, things will hopefully change one day.”
As LinkedIn evolves its AI capabilities, the debate centers on the balance between enhancing platform features and protecting user privacy.
According to LinkedIn’s FAQs on generative AI, when users engage with AI-powered tools, the platform processes their interactions and feedback, which may include personal data.
For example, the “Profile Writing Suggestions” feature pulls from existing profile information to generate recommended text. Similarly, AI tools that suggest posts use personal data from previous content to craft new recommendations.
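LinkedIn has not published how these features are built, but the data flow its FAQs describe can be sketched in a few lines. In the hypothetical Python example below, the profile fields and prompt wording are assumptions chosen purely to illustrate how personal data ends up as a generative model’s input:

```python
# Hypothetical sketch only: LinkedIn has not disclosed how its
# "Profile Writing Suggestions" feature is implemented. This illustrates
# the general pattern the article describes: existing profile data
# becomes input to a generative model.

profile = {
    "headline": "Data analyst",
    "experience": ["Analyst at Acme Corp, 2019-2024"],
}

# Profile fields are folded into the prompt sent to a generative model.
prompt = (
    "Suggest an improved LinkedIn 'About' section for a member whose "
    f"headline is '{profile['headline']}' and whose experience includes: "
    + "; ".join(profile["experience"])
)

print(prompt)
# Whatever model receives this prompt now processes the member's personal
# data, which is the crux of the privacy concerns discussed above.
```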
UK, EU Users Exempt from AI Model Training
In a notable exception, LinkedIn does not currently use personal data from users located in the UK, European Economic Area (EEA), or Switzerland to train its generative AI models. This means content generated by members in these regions remains untouched by AI training efforts, aligning with stricter privacy regulations like the GDPR.
LinkedIn says it uses privacy-enhancing technologies to redact or remove personal data from training datasets when AI models are trained. This approach demonstrates LinkedIn’s efforts to minimize risk but leaves room for debate about how fully anonymized the data sets really are.
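LinkedIn has not said which privacy-enhancing technologies it uses, so as a rough sketch of what such redaction can involve, the hypothetical Python example below scrubs obvious identifiers with regular expressions. The patterns and placeholder tokens are assumptions for illustration, not LinkedIn’s actual pipeline:

```python
import re

# Illustrative sketch only: LinkedIn has not published its redaction
# pipeline. Pattern-based scrubbing of obvious identifiers is one common
# privacy-enhancing step applied before text enters a training corpus.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace common personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +44 20 7946 0958."))
# Output: "Reach me at [EMAIL] or [PHONE]."
```

Production systems typically go further, using named-entity recognition, allow-lists, or differential privacy, which is exactly why questions remain over how complete such anonymization really is.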
The ICO’s Concerns and Privacy Rights
Almond emphasized the importance of public trust in AI technologies, especially as the field continues to grow. “It is crucial that the public can trust that their privacy rights will be respected from the outset,” he stated. His remarks reflect broader concerns about data collection for AI purposes and the need for strong safeguards to protect personal information.
The ICO plans to continue monitoring companies like Microsoft, LinkedIn’s parent company, to ensure compliance with privacy laws. Companies developing generative AI must be transparent in how they use data, particularly when it involves sensitive user information.
Social media giant Meta faced a similar regulatory hurdle back in June, when it was forced to pause its AI training plans after criticism and concerns from the Information Commissioner’s Office. Meta’s plan involved using public posts, comments, and images from UK users to train its AI. After months of deliberation, Meta last week announced the resumption of AI training in the UK, incorporating changes suggested by the ICO.
LinkedIn Halts AI Model Training – But The Details Bear Watching
LinkedIn users can opt out of having their data used for generative AI training. This option is especially relevant for users in regions where LinkedIn currently trains models with member data: Tobac confirmed that people from the U.S., Canada, India, the UK, Australia, the UAE, and elsewhere have reported being automatically opted in.
Users can adjust the “Data for Generative AI Improvement” setting to prevent their information from being used for AI training going forward. However, opting out does not undo training that has already taken place.
LinkedIn Auto Opt-in Setting (Source: LinkedIn)

For users in the EU, EEA, UK, or Switzerland, LinkedIn will not use their data to train or fine-tune AI models without explicit notice. Those outside these regions must take steps to opt out if they wish to avoid being part of AI model training.
To opt out: open the LinkedIn app or desktop site > click your profile picture in the upper-left corner > Settings > Data Privacy > Data for Generative AI Improvement > toggle off.
LinkedIn also offers tools for users to review and delete their past interactions with AI features. These tools, accessible via the platform’s data access settings, allow users to maintain control over their data. Users can delete generative AI conversations or request LinkedIn to remove personal information from their account history.
Navigating AI Privacy in the Future
The generative AI landscape continues to evolve, with growing attention on how companies handle user data. As AI tools become increasingly integrated into platforms like LinkedIn, the challenge lies in balancing innovation with privacy protection.
While LinkedIn’s suspension of AI model training in the UK signals a positive step toward addressing privacy concerns, it also highlights the ongoing tension between tech innovation and regulatory oversight. As Almond noted, “We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place.”
As companies push the boundaries of AI, privacy watchdogs will remain vigilant. Ensuring transparency and accountability in AI model training will be essential to maintaining public trust. LinkedIn’s latest move suggests that the platform recognizes this responsibility, but the road ahead involves navigating complex regulatory landscapes and evolving user expectations.