We like to think of algorithms as sets of rules without bias or intent. In reality, those rules are built on the suppositions and values of their creators, which is one way bias gets into AI. The other is the data an AI system is trained on.
“If you give (an AI system) access to the internet, it inherently has whatever bias exists,” according to Paul Roetzer, CEO of The Marketing AI Institute. Builders of these systems are aware of that. “In [ChatGPT creator] OpenAI’s disclosures and disclaimers they say negative sentiment is more closely associated with African American female names than any other name set,” says Christopher Penn, chief data scientist at TrustInsights.ai.
OpenAI’s best practices document also says, “From hallucinating inaccurate information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without significant modifications.”
There are tools to help eliminate bias:

– What-If from Google is an open-source tool that helps detect bias in a model by manipulating data points, generating plots and specifying criteria to test whether changes affect the end result.
– AI Fairness 360 from IBM is an open-source toolkit for detecting and mitigating bias in machine learning models.
– Fairlearn from Microsoft is designed to help navigate trade-offs between fairness and model performance.
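For a sense of what these toolkits measure, here is a minimal sketch, in plain Python with made-up data, of demographic parity difference: the gap in positive-prediction rates between demographic groups, a common fairness metric reported by tools like Fairlearn and AI Fairness 360. The data and variable names are illustrative, not from any of these libraries.

```python
# Sketch of a basic fairness metric: demographic parity difference.
# A value of 0 means every group receives positive predictions at the
# same rate; larger values mean a bigger disparity between groups.

def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate across sensitive groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs and a sensitive attribute per record
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(y_pred, groups)
print(f"demographic parity difference: {gap:.2f}")  # prints 0.50
```

Here group “a” is selected 75% of the time and group “b” only 25%, so the metric flags a 0.50 gap. The real toolkits compute this and many related metrics across large datasets, then offer mitigation algorithms to shrink the gap.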
“What marketers and martech companies should be thinking is, ‘How do we apply this on the training data that goes in so that the model has fewer biases to start with that we have to mitigate later?’” says Penn. “Don’t put garbage in, you don’t have to filter garbage out.”
“The tools that exist right now are mainly meant for tabular, rectangular data with clear outcomes that you’re trying to mitigate against,” Penn says. The systems that generate content, like ChatGPT and Bard, are incredibly computing-intensive, and adding safeguards against bias would have a significant impact on their performance. That adds to the already difficult task of building AI systems, so don’t expect a resolution soon.
Because of brand risk, marketers can’t afford to wait for the models to fix themselves. They should be asking, “What could go wrong?” and getting input from diversity, equity and inclusion advocates. How companies define and mitigate bias in these systems will be a significant marker of corporate culture.