Facebook’s blunder and the perils of social media advertising

It’s always going to be tricky for advertisers to navigate the realms of social media. After last week’s news of Facebook’s failure to remove pages glorifying violence against women, we at The Practice were keen to learn more about the social network’s responsibility for policing offensive content on its site.

Creating a safe space for advertisers means social media sites must keep a watchful eye out for inappropriate content and remove it as swiftly as possible. Unfortunately, Facebook was forced to admit that its systems had not worked effectively enough to eradicate the offending pages, though it promised to improve them in future. By this time, feminist activists had already created a digital media campaign exposing advertisers whose ads appeared alongside the offending content, pressuring those in question to pull their ads from the site. Activists sent in excess of 5,000 emails to the relevant advertisers and posted more than 60,000 tweets, prompting Nissan and a number of smaller companies to withdraw their Facebook adverts. The awareness campaign has forced the social platform to address the way it trains its moderators to recognize and remove hate speech.

The Practice team agree that targeting advertisers was probably the most persuasive way of drawing Facebook’s attention to its problem with offensive content; with advertisers pulling their ads, even temporarily, Facebook stood to lose an important chunk of revenue. However, while over a dozen companies did swiftly remove adverts from the site, some, including Dove and American Express, did not, instead merely issuing statements across other social networks. At first glance, removing adverts might seem unnecessary, but because of the way Facebook’s system works, targeted ads follow users across any pages they visit, including those with harmful or offensive content. We’re therefore surprised that larger companies, particularly Dove, which markets to a largely female demographic, chose only half-measures. Fans were quick to pick up on this, with one commenting on its Facebook fan page: “So, Dove, you’re willing to make money off of us, but not willing to lift a finger to let Facebook know violence against women isn’t acceptable?”

So who’s really to blame for allowing matters to escalate? Stacy Janicki, of advertising agency Carmichael Lynch, argues that “advertisers have a responsibility to consumers and media companies have a responsibility to advertisers to make sure they control the content on those sites.” We agree that both can be held accountable for not taking the necessary measures, but the good news is that these failings have inspired change. Beyond the online campaign itself, the women-led coalition, which includes Women, Action and the Media, has grown tenfold to include over 100 other women’s movements and organizations. They have since applauded Facebook’s intent to raise awareness and address any further issues of cyber-bullying and hate speech that may occur in the future.

Have you witnessed any offensive content on Facebook that moderators have failed to deal with? And do you think it’s Facebook’s responsibility to ensure all users, media consumers and advertisers can operate in a safe environment? We’d love to hear your opinions, so please tweet us @PracticeDigital or comment via our Facebook page.