We have recently been witnessing the power of AI on social networks to identify and remove foul content, and LinkedIn now follows Facebook, Twitter and Pinterest in adopting it.
How does AI spot and remove inappropriate content from LinkedIn?
A post published today on the Microsoft-owned platform’s blog explains how LinkedIn finds and removes such content using artificial intelligence. Software engineer Daniel Gorham explains that before handing the job to AI, LinkedIn relied on a block list: human-curated words and phrases that defined what counted as inappropriate. But identifying false positives, and deleting only the content or accounts that were genuinely in violation, required tremendous manual effort, since LinkedIn has more than 660 million members and 303 million monthly active users. So the list only worked up to a point.
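To see why a block list generates false positives at this scale, here is a minimal sketch. The terms and matching logic are purely illustrative assumptions, not LinkedIn's actual list: naive substring matching flags innocent words that merely contain a blocked term, while whole-word matching still flags legitimate uses of a blocked phrase.

```python
import re

# Hypothetical block-list terms (illustrative only, not LinkedIn's actual list).
BLOCK_LIST = ["escort", "xxx"]

def naive_match(text: str) -> bool:
    """Substring matching: cheap, but flags words that merely contain a term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCK_LIST)

def word_boundary_match(text: str) -> bool:
    """Whole-word matching reduces, but does not eliminate, false positives."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCK_LIST)

# A harmless company name trips the naive matcher ("MaXXXimum" contains "xxx"):
print(naive_match("MaXXXimum results"))          # flagged
print(word_boundary_match("MaXXXimum results"))  # not flagged
# But even whole-word matching flags a legitimate job description:
print(word_boundary_match("Security escort coordinator"))  # flagged
```

Every such hit still needs a human to decide whether it is a real violation, which is exactly the effort the blog post describes.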
The blog post details how the company now deals with profiles containing inappropriate content, such as profanity or ads for illegal services. LinkedIn now uses a convolutional neural network (a type of algorithm commonly used for analyzing imagery, here applied to text) trained on public member profiles, and profiles previously flagged as false positives were specifically included in the training data.
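The blog post does not disclose the model's architecture or features, so the following is only a toy sketch of the general idea: a 1-D convolution slides small filters over character embeddings of profile text, max-pooling picks up the strongest local pattern each filter detects, and a classifier head turns that into a score. All dimensions and the random (untrained) weights here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy character vocabulary and embedding table; sizes are arbitrary choices.
VOCAB = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
EMBED_DIM, KERNEL, N_FILTERS = 8, 3, 4
embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))
filters = rng.normal(size=(N_FILTERS, KERNEL, EMBED_DIM))  # conv filters
weights = rng.normal(size=N_FILTERS)                       # classifier head
bias = 0.0

def conv_features(text: str) -> np.ndarray:
    """1-D convolution over character embeddings, then max-pool over time."""
    ids = [VOCAB[c] for c in text.lower() if c in VOCAB]
    x = embeddings[ids]  # (seq_len, EMBED_DIM)
    windows = np.stack([x[i:i + KERNEL] for i in range(len(ids) - KERNEL + 1)])
    # (n_windows, N_FILTERS): each filter responds to a local character pattern
    acts = np.einsum("wke,fke->wf", windows, filters)
    return acts.max(axis=0)  # strongest response per filter

def score(text: str) -> float:
    """Sigmoid score in (0, 1); in a trained model, high would mean likely inappropriate."""
    z = conv_features(text) @ weights + bias
    return float(1.0 / (1.0 + np.exp(-z)))
```

In a real deployment the filters and head would be learned from labeled profiles; including previously flagged false positives in that training set pushes the learned score down on exactly the benign patterns the block list used to catch.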
LinkedIn will also use Microsoft translation services to weed out “bad” content in other languages.