If you’ve logged into LinkedIn recently, chances are your profile and personal information have already been put to work by the platform’s latest AI features. You may have seen the homepage prompt, “Start a post, try writing with AI,” or noticed, while browsing the “Jobs” section, an AI suggestion informing you that it had already analyzed the listings and curated the ones most relevant to your LinkedIn profile. Sounds handy, doesn’t it? But what are the implications?
When LinkedIn launched its AI-driven tools in October 2023, the company said the innovations were meant to help both businesses and users connect and grow on its platform. For example, AI-assisted search in Sales Navigator and AccountIQ were designed to streamline lead prospecting and account research. Other products introduced at the time included Recruiter 2024, LinkedIn Learning’s AI-powered Coaching Experience, and Accelerate for Campaign Manager, all of which promised to improve productivity and task prioritization.
The professional networking giant did not stop at basic AI assistance, however. Just a month later, LinkedIn unveiled an AI-powered tier of LinkedIn Premium for paying members. The new offering was pitched as a kind of copilot for Premium users, helping them stay ahead in their careers with tailored support for career transitions and skill development. In essence, the feature analyzes user input, data, and activity to surface the most suitable opportunities available.
From initial acclaim to distressing reality
The rollout of AI on LinkedIn’s platform initially drew positive feedback from users, especially as CEO Ryan Roslansky’s team introduced additional tools like the AI-powered writing assistant and the platform’s own chatbot. The strategy of leveraging AI to enhance the platform appeared to pay off: Chief Operating Officer Daniel Shapero announced in March that LinkedIn Premium subscriptions had grown 25% year over year, pushing annual subscription revenue to $1.7 billion.
However, beneath LinkedIn’s apparent success with the new initiative, concerns were brewing about how training its AI models could put user data at risk.
Why LinkedIn’s usage of user data for AI training raises significant concerns
AI systems depend heavily on data. They learn from the data they’re fed, which enables them to recognize patterns, make predictions, and generate outputs that appear intelligent. As a rule, the more data an AI system has, the better it performs. LinkedIn is the largest professional network in the world, with over 1 billion users, making it a prime source of AI training data. And given that LinkedIn is a wholly owned subsidiary of Microsoft, OpenAI’s biggest backer, the arrival of AI on its platform was only a matter of time.
Since introducing AI features on its platform last year, LinkedIn had quietly avoided acknowledging that it was gathering data from users’ posts, articles, preferences, and interactions. Recently, however, the company updated its generative AI FAQs to clarify that it collects user data to “enhance or develop” its services. LinkedIn also automatically enrolled all users in the program, igniting a backlash on social media from users who objected to their personal data being harvested without consent.
A breach of user privacy
Among those objecting to LinkedIn’s handling of user privacy was Women in Security and Privacy chair Rachel Tobac, who explained on X why users should opt out of the new AI feature. She noted that because generative AI tools draw on the inputs they’re trained on to produce outputs, users may soon find their original content being “reused,” “rehashed,” or outright plagiarized by AI. “It’s probable that aspects of your writing, images, or videos may be merged with others’ content to create AI outputs,” she remarked.
On the other hand, LinkedIn spokesperson Greg Snapper said that using customer data for AI training could ultimately benefit users, arguing that it could “assist individuals worldwide in creating economic opportunities” and, when properly executed, “help many people on a large scale.” LinkedIn Chief Privacy Officer Kalinda Raina also addressed the topic in a video posted on LinkedIn’s website last week, saying the company only uses customer data to “enhance both security and our products in the generative AI sphere and beyond.” Nevertheless, many users remain uneasy about whether their data will be safeguarded.
Opting out of LinkedIn’s AI training feature
In response to user feedback, LinkedIn has committed to updating its user agreement with changes effective November 20. It has also clarified the practices outlined in its privacy policy and added an opt-out feature for its AI training. To disable LinkedIn’s AI training feature, follow these steps using the desktop version of the platform:
- Log into your LinkedIn account
- Click on the “Me” button/profile avatar
- Navigate to “Settings & Privacy”
- Select “Data privacy”
- Find and click on “Data for Generative AI Improvement”
- Toggle the switch for “Use my data for training content creation AI models” to opt out of this feature
Once the feature is disabled, LinkedIn will stop collecting your data for AI training. According to The Washington Post, however, the opt-out does not apply retroactively: data collected before you disable the feature remains in the system. While that can’t be undone, users can ask LinkedIn to remove specific data or activity through the LinkedIn Data Deletion Form.