Twitter looks to crowdsource content policies

The app launched a user survey to gather feedback on dehumanizing language, but will it help deliver a safer platform for marketers?


  • Twitter has launched a user survey asking for feedback on its policies defining dehumanizing language before it makes the policy part of its Twitter Rules.

  • Survey questions include “How would you rate the clarity of the dehumanization policy?” and “How can the dehumanization policy be improved?” — as well as a request for examples of speech that may violate the policy while still contributing to a healthy conversation.

  • As Twitter continues to improve the health of its platform via such content policies, marketers will ultimately be the arbiters of whether or not these efforts are effective. A healthy platform translates to stronger user engagement and activity — meaning better results for Twitter advertisers.

Twitter is looking to crowdsource rules around what constitutes dehumanizing language on the platform. The company launched a user survey on Tuesday, asking people to rate the clarity of the proposed dehumanization content policy and give feedback on how the policy can be improved.

Twitter is also asking users to submit specific examples of speech that may be labeled as dehumanizing but could contribute to a healthy conversation. The survey also asks for the user’s age, gender and country and gives the option to submit their username.

As the US November midterm elections draw closer, Twitter has been laser-focused on improving the health of its platform. In July, the company announced it was holding off on expanding its verification process to focus on election integrity. Before that, Twitter launched new rules around election campaign policies and the Ads Transparency Center, a searchable archive of ads that have run on Twitter during the last seven days.

All of these moves demonstrate Twitter’s aim to create a less toxic platform where all users feel safe to engage and interact. The safer users feel, the more active they will be on the app — a dynamic that benefits brands and marketers wanting to maximize ad campaigns and investments. Not only does a more engaged audience deliver better results for advertisers, but a safer platform also translates to a brand-safe environment — something that will become increasingly important for Twitter as it builds out its in-stream video ad business.

In addition to the steps Twitter has taken around election integrity and political ad campaign policy, the company also recently purged locked accounts from follower counts and changed how conversations happen on the platform, demoting tweets from offensive users. Now, by trying to define what exactly constitutes dehumanizing language, Twitter appears to be addressing a long-standing problem: the mistreatment of women and minorities on the app.

Twitter’s proposed policy for dehumanizing content is stated as follows: “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.” It then lists the following definitions for dehumanization and identifiable groups:

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).
Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

“There are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation,” writes Twitter VP of Trust and Safety Del Harvey on the Twitter blog announcing the survey. The company says it wants users’ feedback to ensure it considers global perspectives and how the policy may impact different communities and cultures.

The survey will be available for users to take through 6:00 a.m. PT on October 6. The results will feed into Twitter’s regular policy development process, which is managed by a cross-functional group that includes staff from its policy development, user research, engineering and enforcement teams.




About the author

Amy Gesenhues
Contributor
Amy Gesenhues was a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine.
