Balancing free expression and brand safety can be difficult

From eMarketer:

A leaked document published by The Guardian outlines the guidelines Facebook is using to monitor big topic issues like violence and racism.

Saying “#stab and become the fear of the Zionist,” for example, would be considered a credible threat—and Facebook moderators would be able to remove that particular content. But saying “kick a person with red hair” or “let’s beat up fat kids” is not considered a realistic threat of violence.

Similarly, videos featuring violent deaths will be marked as disturbing, but will not always be deleted because they might raise awareness about issues such as mental illness.

Clearly, there are gray areas in the way content is handled.

What the leak has done is shed light on one simple truth: publishing mammoths like Facebook and Google (which has also had its share of controversy over content) can't currently guarantee 100% brand safety.

At scale, user-generated content poses too great a challenge. And that doesn't bode well for advertisers.
