LinkedIn Faces Legal Action Over Alleged Data Sharing for AI Training
LinkedIn Premium users have filed a proposed class action lawsuit, claiming the Microsoft-owned platform shared their private InMail messages with third parties without consent.
The complaint, filed in federal court in San Jose, California, alleges the disclosed information was used to train generative artificial intelligence (AI) models.
The legal action targets LinkedIn’s alleged misuse of customer data prior to 18 September 2024, when an updated privacy policy confirmed data could be used for AI training.
Plaintiffs claim LinkedIn knowingly violated user privacy while breaching contractual promises to use personal data solely for platform improvements.
Privacy Settings Criticised as Inadequate
The plaintiffs argue that LinkedIn introduced a privacy setting in August 2024 that allowed users to control data sharing, but failed to make its implications clear.
By September, the company’s updated privacy policy revealed that opting out of data sharing would not prevent data previously collected from being used for AI training.
In its frequently asked questions (FAQ) section, LinkedIn states:
"Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place."
This has fuelled accusations that LinkedIn deliberately obscured its intentions to avoid public backlash.
Breach of Trust and Legal Ramifications
The lawsuit accuses LinkedIn of breaching users’ trust and violating multiple laws, including California’s unfair competition law and the federal Stored Communications Act.
Plaintiffs seek unspecified damages for these violations, as well as US$1,000 per person for the unauthorised use of their private messages.
LinkedIn Denies Allegations
Responding to the lawsuit, a LinkedIn spokesperson told MARKETING-INTERACTIVE,
“These are false claims with no merit.”
The company has stood by its practices, maintaining that its privacy policy updates were communicated transparently to users.
Hong Kong Privacy Watchdog Raised Similar Concerns
The legal challenge comes months after LinkedIn faced scrutiny in Hong Kong.
The Office of the Privacy Commissioner for Personal Data (PCPD) flagged concerns over LinkedIn’s default opt-in setting for AI training, which the regulator said might not accurately reflect users’ intentions.
Between October 2023 and 7 October 2024, the PCPD received seven complaints about LinkedIn’s data practices, including allegations of unauthorised data sharing and fake accounts.
Ada Chung Lai-ling, the privacy commissioner, advised LinkedIn users to carefully review the platform’s privacy policies and make informed decisions regarding their data.
In response, LinkedIn cited a blog post by its senior vice president and general counsel, Blake Lawit, which stated:
“In our Privacy Policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (“generative AI”) and through security and safety measures.”
Data Privacy Controversies Spark Broader Concerns
The lawsuit against LinkedIn echoes wider concerns about data privacy in the tech industry.
A recent case saw Apple agree to a US$95 million settlement over allegations that its voice assistant Siri inadvertently recorded and shared users’ private conversations.
As concerns over data use for AI development rise, surveys show that 93% of consumers worry about the security of their personal information online.
The LinkedIn lawsuit highlights the increasing tension between technological advancement and the need to protect user privacy.