Lies and bias on the rise in marketing and public relations
Even notable academic institutions are struggling to surface the truth in their own academic research.
According to the Relevance Report 2020 from the USC Annenberg Center for Public Relations, half-truths and lies are more prevalent (and more accepted) than ever.
Seventeen of the report’s 30 essays deal with, or in, deceit, aided and abetted by the advance of technology and social media in the post-truth age.
Erasing the line between paid and earned media
The rate at which influencer marketers are ignoring disclosure guidelines set by the FTC is alarming.
The FTC’s “.com Disclosures” guidance [PDF] is designed to help the public understand whether someone endorsing a product online was compensated.
But ignorance does not appear to be the cause of these violations.
“According to a study conducted by the influencer marketing agency Mediakix, only about 7% of endorsements on social media from the top 50 celebrity influencers comply with FTC’s guidelines of appropriate disclosure verbiage,” writes Cathy Park, a second-year strategic public relations graduate student at USC Center for Public Relations in the Relevance Report.
“Furthermore, Harvard Business Review reported that 28% of influencers were requested by the sponsoring brand to not disclose the partnership. It seems like the ability to deceive has somehow become tied to an influencer’s worth,” Park says.
More than one in four influencers ignores the duty to disclose in a deliberate, profit-motivated act of defiance.
Artificial intelligence and bias
According to Gartner Research, by 2023 one-third of all brand public relations disasters will result from data ethics failures. And the Relevance Report gives a concrete example.
With interest in artificial intelligence peaking, Burghardt Tenderich, Ph.D., associate director of the USC Annenberg Center for Public Relations, explains the problem of bias in developing human-guided, ethical machine learning.
“…AI algorithms can also lead to false conclusions. In the Facebook example, this is due to the common practice by social media companies to deploy technology that is half-baked, at best. At the core of an ethical examination of AI is the desire to understand how decisions are made and what the consequences are for society at large. For that reason, policy makers are calling for AI to be explainable and transparent so that citizens and businesses alike can develop trust in AI.”– Burghardt Tenderich, Ph.D.
Speak no evil
The essay by a corporate spokesperson from Google says, “Each of our products is designed with an emphasis on privacy and security, including easy user interfaces and features like privacy Check-Ups, which allow people to control their data and keep their accounts safe and secure.”
But there is one stark omission: Project Dragonfly.
Project Dragonfly was a search engine prototype Google created to comply with China’s state censorship provisions. It would also have given the government a means of retrieving a user’s search history by searching their phone number, essentially abandoning the company’s “don’t be evil” motto.
What’s perhaps most disheartening is that even the USC Annenberg School for Communication and Journalism — consistently ranked first by the QS World University Rankings — was unable to ferret out this post-truth omission from its own academic research, and that the search giant treated spinning the truth in academic research as fair game.
But then, if Donald Trump can claim there was never a drought in California and still get elected president, why shouldn’t corporations be able to ignore their blemishes, particularly when criticizing politicians and brands on social media carries the risk of public interrogation and verbal abuse?
Opinions expressed in this article are those of the guest author and not necessarily those of MarTech.