Artificial Intelligence: A beginner’s guide
Everybody is talking about AI. If you're part of those conversations but have a sinking feeling you don't really know what AI is, start here.
Everyone is talking about artificial intelligence. That’s understandable — after all, suddenly there are free (or cheap) tools readily available to create a variety of AI-generated content, including text and images, in an unlimited range of styles, and seemingly in seconds.
Of course it’s exciting.
But stop for a moment and ask yourself a few questions:
- Do I really know what AI is?
- Do I know how long it has been around?
- Do I know the difference, if any, between AI and machine learning?
- And do I know what the heck deep learning is?
If you answered all those questions affirmatively, this article may not be for you. If you hesitated over some of them, read on.
The AI revolution starts…now?
Let’s start by filling in some background.
Is AI something new?
No. Conceptually, at least, AI dates as far back as 1950 (more on that later). As a practical pursuit it began to flourish in the 1960s and 1970s as computers became faster, cheaper and more widely available.
Is AI in marketing something new?
No. It’s worth bearing in mind that AI has long had many, many applications in marketing other than creating content. Content recommendations and product recommendations have been powered by AI for years. Predictive analytics — used to predict user behavior based on large datasets of past behavior, as well as to predict the next-best-action (show her a relevant white paper, show him a red baseball cap, send an email) — has been AI-powered for a long time.
Well-known vendors have been baking AI into their solutions for almost a decade. Adobe Sensei and Salesforce Einstein date from 2016. Oracle’s involvement with AI goes back at least as far and likely further; it just never gave it a cute name. Another veteran deployer of AI is Pega, using it first to predict next-best-actions in its business process management offering, and later in its CRM platform.
Well…is generative AI something new?
Generative AI. Conversational AI. AI writing tools. All phrases of the moment, all overlapping in meaning. Generative AI generates texts (or images, or even videos). Conversational AI generates texts in interaction with a human interlocutor (think AI-powered chatbots). AI writing tools aim to create customized texts on demand. All of these solutions use, in one sense or another, “prompts” — that is, they wait to be asked a question or given a task.
Is all this new? No. What’s new is its wide availability. Natural language processing (NLP) and natural language generation (NLG) have been around for years now. The former denotes AI-powered interpretation of texts; the latter, AI-powered creation of texts. As long ago as 2015, based on my own reporting, AI-powered NLG was creating written reports for physicians and for industrial operations — and even generating weather forecasts for the Met Office, the U.K.’s national weather service.
Data in, text out. Just not as widely available as something like ChatGPT.
Video too. At least by 2017, AI was being used to create, not just personalized but individualized video content — generated when the user clicks on play, so fast that it appears to be streaming from an existing video library. Again, not widely available, but rather, a costly enterprise offering.
Dig deeper: ChatGPT: A marketer’s guide
What AI is: the simple version
Let’s explain it from the ground up.
Start with algorithms
An algorithm can be defined as a set of rules to be followed in calculations or other problem-solving or task-completing operations, especially by a computer. Is “algorithm” from the Greek? No, it’s actually from part of the name (al-Khwārizmī) of a 9th-century Persian mathematician. But that doesn’t matter.
What does matter is that using algorithms for a calculation or a task is not — repeat, not — the same as using AI. An algorithm is easily created; let’s take a simple example. Let’s say I run an online bookstore and want to offer product recommendations. I can write a hundred rules (together, a simple algorithm) and program my website to execute them. “If she searches for Jane Austen, also show her Emily Bronte.” “If he searches for WW1 books, also show him WW2 books.” “If he searches for Agatha Christie, show him other detective fiction.”
I’ll need to have my volumes of detective fiction appropriately tagged, of course, but so far so easy. On the one hand, these are good rules. On the other hand, they are not “intelligent” rules. That’s because they’re set in stone unless I come back and change them. If people searching for WW1 books consistently ignore WW2 books, the rules don’t learn and adapt. They carry on dumbly doing what they were told to do.
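This kind of static, rule-based recommender can be sketched in a few lines of Python. The titles and rules below are illustrative, not a real catalog:

```python
# A static, rule-based recommender: each rule maps a search term
# to a fixed list of suggestions. Nothing here learns or adapts.
RULES = {
    "jane austen": ["Emily Bronte", "Charlotte Bronte"],
    "ww1 books": ["WW2 books"],
    "agatha christie": ["Dorothy L. Sayers", "Raymond Chandler"],
}

def recommend(search_term: str) -> list[str]:
    """Return the hand-written recommendations for a search term."""
    return RULES.get(search_term.lower(), [])

print(recommend("Jane Austen"))  # → ['Emily Bronte', 'Charlotte Bronte']
print(recommend("cookbooks"))    # no rule exists, so → []
```

However many rules I add, the behavior never changes unless a human edits the table: that is exactly the “set in stone” quality described above.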
Now, if I had Amazon’s resources, I’d make my rules intelligent — which is to say, able to change and improve in response to user behavior. And if I had Amazon’s market share, I’d have a deluge of user behavior that the rules could learn from.
If algorithms can teach themselves — with or without some human supervision — we have AI.
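To make such rules able to change in response to user behavior, we could let click data adjust them. Here is a toy sketch — the feedback mechanism and all thresholds are invented for illustration, not drawn from any real system:

```python
# A toy "learning" rule: it counts how often each suggestion is shown
# and clicked, and stops recommending anything whose observed click
# rate falls below a threshold. All numbers are illustrative.
class AdaptiveRule:
    def __init__(self, items, min_click_rate=0.1, min_shows=20):
        self.stats = {s: {"shown": 0, "clicked": 0} for s in items}
        self.min_click_rate = min_click_rate
        self.min_shows = min_shows

    def record(self, suggestion, clicked):
        """Log one impression of a suggestion, and whether it was clicked."""
        self.stats[suggestion]["shown"] += 1
        if clicked:
            self.stats[suggestion]["clicked"] += 1

    def active_suggestions(self):
        """Only return suggestions users haven't consistently ignored."""
        out = []
        for s, st in self.stats.items():
            if st["shown"] < self.min_shows:
                out.append(s)  # not enough data yet: keep showing it
            elif st["clicked"] / st["shown"] >= self.min_click_rate:
                out.append(s)
        return out

rule = AdaptiveRule(["WW2 books"])
for _ in range(30):                          # thirty users see it...
    rule.record("WW2 books", clicked=False)  # ...and all ignore it
print(rule.active_suggestions())  # → [] — the rule stops suggesting it
```

Real machine learning systems are vastly more sophisticated than this counter, but the principle is the same: behavior feeds back into the rules instead of the rules staying fixed.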
But wait. Isn’t that just machine learning?
AI versus machine learning
To the purist, AI and machine learning were not originally the same thing. But — and it’s a big but — the terms are used so interchangeably that there’s no going back. Instead, the term “general AI” is now used when people want to talk about pure AI, AI in its original sense.
Let’s go back to 1950 (I warned you we would). Alan Turing was a brilliant computer scientist. He helped the Allies beat the Nazis through his code-cracking intelligence work. His reward was to be abominably treated by British society for his (then illegal) homosexuality, treatment that resulted in an official apology from Prime Minister Gordon Brown, more than 50 years after his death: “On behalf of the British government, and all those who live freely thanks to Alan’s work, I am very proud to say: We’re sorry. You deserved so much better.”
So what about AI? In 1950, Turing published a landmark paper, “Computing machinery and intelligence.” He published it, not in a scientific journal, but in the philosophy journal “Mind.” At the heart of the paper is a kind of thought experiment that he called “the imitation game.” It’s now widely known as “the Turing test.” In the simplest terms, it proposes a criterion for machine (or artificial) intelligence. If a human interlocutor cannot tell the difference between responses to her questions from a machine and responses from another human being, we can ascribe intelligence to the machine.
Of course, there are many, many objections to Turing’s proposal (and the test itself has well-known design weaknesses). But it did launch the quest to replicate — or at least create the equivalent of — human intelligence. You can think of IBM Watson as an ongoing pursuit of that objective (although it has many less ambitious and more profitable use cases).
Nobody really thinks that an Amazon-like product recommendation machine or a ChatGPT-like content creation engine is intelligent in the way humans are. For one thing, they are incapable of knowing or caring if what they are doing is right or wrong — they do what they do based on data and predictive stats.
In fact, all the AI discussed here is really machine learning. But we’re not going to stop anyone calling it AI. As for the pursuit of human-level or “general AI,” there are good reasons to think it’s not just around the corner. See, for example, Erik J. Larson’s “The myth of artificial intelligence: Why computers can’t think the way we do.”
What about ‘deep learning’?
“Deep learning” is another AI-related term you might come across. Is it different from machine learning? Yes, it is. It’s a big step beyond machine learning, and its importance is that it greatly improved AI’s ability to detect patterns, and thus to handle images (and video) as competently as it handles numbers and words. This gets complicated; here’s the short version.
Deep learning is based on a neural network: layers of artificial neurons (bits of math) that are activated by an input, communicate with each other about it, then produce an output. This is called “forward propagation.” As in traditional machine learning, the neurons then find out how accurate the output was and adjust their operations accordingly. This is called “back propagation,” and it is how the neurons are trained.
However, there’s also a multiplication of what are known as the “hidden layers” between the input layer and the output layer. Think of these layers literally being stacked up: That’s simply why this kind of machine learning is called “deep.”
A stack of network layers just turns out to be that much better at recognizing patterns in the input data. Deep learning helps with pattern recognition, because each layer of neurons breaks down complex patterns into ever more simple patterns (and there’s that backpropagating training process going on too).
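The forward-propagation step through stacked layers can be sketched in plain Python. The weights, layer sizes, and activation function below are all arbitrary illustrations; in a real network, backpropagation would adjust the weights during training rather than leaving them hand-picked:

```python
import math

def sigmoid(x):
    """Squash a number into the range (0, 1): a common neuron activation."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of neurons: each neuron takes a weighted sum of all
    the inputs, adds its bias, and applies the activation function."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

def forward(inputs, layers):
    """Forward propagation: feed the input through each layer in turn.
    Stacking more hidden layers is what makes the network 'deep'."""
    activations = inputs
    for weights, biases in layers:
        activations = layer(activations, weights, biases)
    return activations

# Two inputs -> a hidden layer of three neurons -> one output neuron.
# These weights are made up; training (backpropagation) would nudge
# them until the outputs match known-correct answers.
network = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),  # hidden
    ([[0.7, -0.5, 0.2]], [0.05]),                                 # output
]
print(forward([1.0, 0.5], network))  # one output value, between 0 and 1
```

Adding more entries to `network` literally stacks more hidden layers between input and output, which is the “depth” the previous paragraphs describe.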
Are there AI vendors in the martech space?
It depends what you mean.
Vendors using AI
There are an estimated 11,000-plus vendors in the martech space. Many of them, perhaps most of them, use AI (or can make a good argument that that’s what they’re doing). But they’re not using AI for its own sake. They are using it to do something.
- To create commerce recommendations.
- To write email subject lines.
- To recommend next-best-actions to marketers or sales reps.
- To power chatbots.
- To write advertising copy.
- To generate content for large-scale multivariate testing.
The list is endless.
The point I want to make is that AI is a bit like salt. Salt is added to food to make it taste better. Most of us, at least, like the appropriate use of salt in our food. But who ever says, “I’ll have salt for dinner,” or “I feel like a snack; I’ll have some salt”?
We put salt in food. We put AI in marketing technology. Aside, perhaps, from research purposes, salt and AI aren’t much used on their own.
So yes, there are countless martech vendors using AI. But are there martech vendors selling AI as an independent product?
Vendors selling AI
The answer is, in the martech space, very few. AI as a product really means AI software designed by engineers that can then be incorporated and used in the context of some other solution. It’s easy to find engineering vendors that are selling AI software, but for the most part they are selling to IT organizations rather than marketing organizations, and selling it to be used for a very wide range of back-office purposes rather than to enable marketing or sales.
There are one or two exceptions out there, clearly targeting their products at marketers. Not enough, however, to create a populous category in the marketing technology landscape.
We scratched the surface
That’s all this article is intended to do: scratch the surface of an enormously complex topic with a rich history behind it and an unpredictable future ahead. There are ethical questions to address, of course, such as the almost inevitable cases where machine learning models will be trained on biased data sets, as well as the equally inevitable plagiarizing of human content by generative AI.
But hopefully this is enough to chew on for now.