Crafting an Ethical Framework for AI in Marketing
Understand the challenges and opportunities that arise when integrating artificial intelligence with ethical marketing standards and practices.
This is the inaugural article by AI Marketing Ethics Digest's new co-editor, Kristina Martin. Kristina is a seasoned professional marketer and a dedicated educator. As co-editor of AI Marketing Ethics Digest, she merges her passion for communication and technology, advocating for the ethical and responsible use of AI tools by marketers, educators, and other communicators. Connect with her on LinkedIn.
At a Glance (TL;DR)
The rapid growth of generative AI tools has sparked ethical concerns in marketing. The Journal of Public Policy & Marketing recently highlighted the need for a debate on these ethical issues.
This article explores the ethical considerations surrounding the use of AI in marketing, using the American Marketing Association’s (AMA) Statement of Ethics as a foundation.
It delves into principles such as honesty, responsibility, equity, transparency, and citizenship, and how they can apply to AI marketing practices. Furthermore, the article underscores the critical role of empathy in AI applications, emphasizing that technology should complement, not replace, human judgment and creativity.
Key Takeaways:
Honesty in AI:
AI's generative nature can tempt marketers to bypass foundational research.
AI-generated content risks being misleading or inauthentic without proper oversight.
Marketers must ensure AI-generated claims are based on accurate data.
Responsibility & AI:
AI models, trained on human data, can inherit biases.
Marketers must take responsibility for any harm or misinformation caused by AI-driven campaigns.
Equity Concerns:
As seen with Stability AI’s Stable Diffusion, AI can perpetuate societal biases.
Marketers must be vigilant to prevent unfair targeting or exclusion in campaigns.
Transparency Challenges:
Consumers deserve clarity on AI interactions and data usage.
The "black box" nature of AI models makes full transparency challenging.
Citizenship & AI's Broader Vision:
Tech leaders, including Elon Musk and Steve Wozniak, have called for ethical guidelines in AI development.
AI should be used for the greater good, promoting sustainable products and addressing societal challenges.
Embracing Empathy in AI:
AI should complement, not replace, human judgment and creativity.
Combining AI's analytical capabilities with human intuition can lead to authentic marketing messages.
Framing the Conversation on Ethical AI in Marketing
As a seasoned content marketer, educator, and scholar with an eye on the federal policies that may (or may not) soon be developed, I am interested in the responsible and ethical use of AI technology and its impact on the marketing profession.
Recently, the Journal of Public Policy & Marketing posted a call for papers on “Generative AI: Promises and Perils” (the deadline for submission is January 15, 2024, if you’re interested).
In it, they state:
“The rapid diffusion of generative AI tools has attracted attention to and provoked controversy around the ethical issues surrounding their use … Against this background, there is an urgent call for a wide-ranging debate about the ethical issues associated with generative AI.”
Some of the questions they hope to address include:
Noting that generative AI can be used to create deepfakes, which marketing domains will be most impacted, and how should policymakers react?
How should policymakers protect consumers from misinformation and bias associated with generative AI?
What is the potential impact of addiction to or excessive reliance on ChatGPT and other generative AI tools on users’ social well-being? Is there (negative) impact beyond well-being in domains like (1) problem-solving ability, (2) creativity, and (3) grit?
What are the roles of policymakers, businesses, educators, training providers, and technology developers in educating and preventing the abusive use of generative AI?
What are the legal implications of generative AI regarding intellectual property, copyrights, and patents? These points are valid across business and creative domains such as art, music, etc.
These questions are helpful when thinking about marketing AI ethics, and they touch on some of the topics I hope to explore in future columns.
Using the American Marketing Association’s Statement of Ethics to Shape the Future
For this article, I want to take a broader look at the subject of ethics by using the American Marketing Association’s (AMA) Statement of Ethics as a springboard for developing a Marketing AI Code of Ethics.
Let’s begin by looking at the AMA’s Statement of Ethics as it currently stands:
Summary of the AMA Statement of Ethics
The AMA has a clear stance on ethics in marketing, emphasizing the following values: honesty, responsibility, equity, transparency, and citizenship.
Before we dig in, here's what each of these values entails:
Honesty: Offering valuable solutions, upholding promises, and being truthful in all professional communications and interactions.
Responsibility: Acknowledging social obligations to stakeholders and recognizing and accepting the consequences of our marketing decisions and strategies.
Equity: Supporting inclusive marketing practices by valuing and embracing stakeholder differences, avoiding stereotypes, and attending to the needs of vulnerable market segments.
Transparency: Creating a spirit of openness in all aspects of the marketing profession, avoiding participation in conflicts of interest, and communicating clearly with all constituencies.
Citizenship: Fulfilling the economic, legal, philanthropic, and societal responsibilities that serve stakeholders and contributing to the overall betterment of marketing and its reputation.
Let’s examine each of these areas and how they could apply to the use of AI in marketing.
Honesty
As marketers, we make claims all the time. Developing a marketing claim comprises four steps: (1) identifying key insights, (2) determining differentiators, (3) crafting possible claim language (i.e., copy), and (4) testing those claims with consumers. Traditionally, these four steps have been grounded in primary research and data gathered from secondary sources.
But one of these things is not like the others. While the first two steps and the last step are rooted in genuine research and understanding, the third step, crafting the claim language, is inherently generative. The introduction of AI has made it tempting for many to leapfrog directly to Step 3 and bypass the foundational research.
Many novice AI users rely on simple prompts to produce the end goal (copy that converts) without considering the process. By simple prompts, I mean a single prompt with zero context, like “Write me a blog post on [insert topic here].”
When we use generative AI ONLY to write copy, it CAN and WILL produce a result. However, the danger is that it may make claims or promises based solely on the patterns and formulas it learned by analyzing websites, ads, and other marketing materials. Without the research context, the AI-generated copy might be generic or untrue, leading to misleading or inauthentic messaging. It might generate exaggerated claims without a clear understanding of a product’s features, benefits, or differentiators.
Marketers have an ethical responsibility to ensure that their claims are accurate, honest, and not misleading, and that any claims AI generates are grounded in accurate, reliable data.
However, marketers can use AI to:
Sift through vast amounts of data, including customer reviews, social media mentions, and other feedback mechanisms, to identify patterns, sentiments, and emerging trends (see the sketch after this list).
Predict future consumer behavior based on historical data.
Scan and analyze competitor products, services, and marketing claims to identify gaps and opportunities.
Analyze product specifications, reviews, and other related data to identify unique features of a product or service.
Then and only then should it be used to suggest possible claim language, optimize language for clarity, persuasiveness, and relevance, and craft multiple versions of copy to cater to different audience segments.
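To make the research-first workflow concrete, here is a minimal sketch of the pattern-spotting step in Python. The reviews, stopword list, and function name are hypothetical illustrations; a production pipeline would lean on a proper NLP toolkit rather than raw word counts.

```python
# A minimal sketch of mining customer reviews for recurring themes.
# The reviews and stopword list below are hypothetical illustrations.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "is", "it", "to", "of", "but"}

def top_themes(reviews: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Return the n most frequent non-stopword terms across all reviews."""
    words = []
    for review in reviews:
        words += [w for w in re.findall(r"[a-z']+", review.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

# Hypothetical reviews; a real pipeline would pull these from a review API.
reviews = [
    "Battery life is amazing, easily two days.",
    "Amazing battery, but the charger feels flimsy.",
    "Screen is sharp; battery could be better.",
]
print(top_themes(reviews))  # 'battery' surfaces as the dominant theme
```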
Finally, marketers can use AI to automate A/B testing, analyze real-time consumer feedback, and predict how well a claim will perform in the broader market. This can save marketers time and help them understand how well the claim resonates with the target audience.
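And to ground the testing step, here is a minimal sketch of the significance check that sits beneath most automated A/B tests, assuming two claim variants and hypothetical conversion counts. Real platforms layer corrections for repeated peeking on top of this basic two-proportion z-test.

```python
# A minimal sketch of the two-proportion z-test behind a basic A/B test.
# Variant labels and conversion counts are hypothetical illustrations.
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of claim variants A and B.

    Returns the z statistic and the two-sided p-value under the
    normal approximation to the binomial.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                        # two-sided tail
    return z, p_value

# Hypothetical example: variant B converts 65/1000 vs. variant A's 41/1000.
z, p = two_proportion_z_test(conv_a=41, n_a=1000, conv_b=65, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 is the usual bar
```

With these made-up numbers the test returns p ≈ 0.02, which most practitioners would read as a meaningful lift for variant B.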
Of course, you should never upload proprietary data or personally identifiable information to an AI model, but that is just good old common sense.
Responsibility
Everyone has conscious and unconscious biases that show up in how we communicate, think, and react to other human beings. Recognizing these biases is the first step in overcoming them, which requires self-awareness. While we strive to be unbiased, we don't always hit the mark, especially in the digital age, when everything moves so quickly.
Large Language Models (LLMs) such as ChatGPT have been trained on massive data sets of information humans create. Recognizing the potential biases in AI algorithms and working to mitigate them is a crucial step in the ethical and responsible usage of these tools.
If an AI-driven campaign inadvertently causes harm or misrepresents information, the marketer who used AI to create the campaign should take responsibility by addressing the issue and ensuring that it doesn’t happen again.
Responsibility also means recognizing that your AI model may not have the most up-to-date information. Therefore, nothing AI produces should ever be shared in the marketplace without review and fact-checking.
Equity
AI models have been known to perpetuate societal biases inherent in the data they are trained on. Auditing and adjusting these models is essential to prevent unfair targeting or exclusion in marketing campaigns.
Case in point: Stability AI's Stable Diffusion, a text-to-image model, was recently found “guilty” of amplifying stereotypes about race and gender. An analysis of more than 5,000 images created with the model found that it takes racial and gender disparities to extremes: when asked to produce pictures of criminals and fast-food workers, it generated mostly images of men with darker skin.
Likewise, men were overrepresented in most occupations in the dataset, while lower-paying jobs such as “housekeeper” and “cashier” were dominated by women. There was also a disproportionate number of men with lighter skin tones across higher-paying jobs such as politician, lawyer, judge, and CEO. (Bloomberg.com)
As AI-generated images become increasingly difficult to distinguish from actual photos, we could see amplified stereotypes of race and gender find their way into marketing campaigns, which would have potentially damaging and severe implications for society.
According to Heather Hiles, chair of Black Girls Code, “People learn from seeing or not seeing themselves that maybe they don't belong.” This is not a message that marketers want to perpetuate, nor does it uphold “inclusive marketing practices,” “avoiding stereotypes,” or “attending to the needs of vulnerable market segments” (e.g., children), all values that the AMA's Statement of Ethics emphasizes.
Transparency
Transparency is vital when brands and businesses use any AI tool. If consumers are interacting with a chatbot, for instance, they should be aware of it. Additionally, if AI is used to personalize ads or content, consumers should clearly understand why they are seeing specific content and how their data is being used.
This is difficult when we don’t have a strong understanding of how AI works. Most of us only know the basics: When we input data into ChatGPT, we get a prediction or result, but the exact reasons the model makes a specific decision or prediction can be hard to determine.
University of Michigan-Dearborn Associate Professor Samir Rawashdeh says, “...just like our human intelligence, we have no idea how a deep learning system comes to its conclusions. It ‘lost track’ of the inputs that informed its decision-making long ago. Or, more accurately, it was never keeping track.” (UM-Dearborn)
This is why transparency is such a tricky principle to tackle. When we cannot see how an AI model arrives at a decision, we have what is known as a “black box” problem.
Rawashdeh notes that these black box problems also have an ethical dimension related to bias. Deep learning models can be used to make all kinds of decisions, ranging from who should get approved for a loan to which applicants should be granted an interview for a job. Bias, when present, can skew those decisions to favor certain segments of the population over others.
Being transparent with our clients, consumers, and each other about using AI isn't necessarily the problem. Many professionals are publicly discussing how they use AI. LinkedIn is filled with marketing professionals talking about AI, and organizations like the Marketing AI Institute are taking great strides in educating marketers about responsible usage.
The problem lies in the transparency of the AI models themselves. For example, a tool like Writesonic allows me to input a URL so it can analyze the content and extract the brand voice. I don't see any harm in this. If I were working with a new client with an established brand voice, this tool would only help me produce better content.
Most clients probably wouldn't have a problem with this either. However, a tool such as Delve.AI, which leverages first-party data from sources like Google Analytics to create a brand persona, might pose more of an issue for clients who may not want AI to access their analytics account (even though GA4 already has built-in AI features).
With ChatGPT Plus, users can toggle on Advanced Data Analysis (formerly called Code Interpreter) to upload all kinds of documents to ChatGPT. Obviously, this raises many ethical considerations, and many professionals are concerned about users uploading proprietary or personally identifiable information (PII). It may seem like common sense to most, but if you upload a 100-page document for analysis without reading it through first, there is always a chance you could leak sensitive data.
Again, you should NEVER upload proprietary or private information to any AI model. Even if the information in the document is not sensitive, clients should always be asked for their consent before sharing any information with ChatGPT or any other AI model.
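On that note, a simple pre-upload scan can serve as a guardrail. Here is a minimal sketch in Python, assuming U.S.-style formats; the filename and regex patterns are hypothetical, and a pass like this only catches obvious identifiers. It is a safety net, not a substitute for reading the document first.

```python
# A minimal sketch of a pre-upload check for obvious PII.
# Patterns and filename are hypothetical; regexes catch only low-hanging fruit.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text: str) -> dict[str, int]:
    """Return a count of suspected PII matches per category."""
    return {name: len(p.findall(text)) for name, p in PII_PATTERNS.items()}

# Hypothetical usage: scan a document before pasting it into an AI tool.
with open("client_brief.txt", encoding="utf-8") as f:
    hits = scan_for_pii(f.read())

if any(hits.values()):
    print(f"Possible PII found: {hits}. Review before uploading.")
```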
In the end, regardless of which AI tools we use, we as marketers have a responsibility to our clients and consumers to be transparent about how we use their data. Explaining exactly how AI uses that data is the more difficult task.
Citizenship
The AMA defines citizenship as fulfilling the economic, legal, philanthropic, and societal responsibilities that serve stakeholders. The key word here, in my opinion, is “serve.” As marketers, we have a duty to serve our clients and their customers and look out for their best interests.
In a perfect world, every company would take a client-centered, customer-centric approach. Unfortunately, while many organizations claim to hold this value, their actions speak differently. When being client-centered is just lip service, it erodes trust, undermines credibility, and ultimately jeopardizes long-term relationships and growth.
In March, industry titans such as Elon Musk and Apple co-founder Steve Wozniak, along with thousands of other tech leaders and researchers, signed a petition to place a moratorium of at least six months on training AI models more potent than GPT-4. The petition called for “clear ethical guidelines for AI development that promote respect for human rights, citizen privacy, and social justice.”
Historically, the mantra of many big tech companies has been "move fast and break things." While this approach may have catalyzed rapid innovation in some domains, it is not one that can be applied recklessly to the evolution of AI.
This collective stance underscores a broader vision: To use AI for the greater good.
Promoting sustainable products, supporting charitable causes, and addressing societal challenges are all examples of using AI for good. I'm unsure what happened to this petition; it seems to have fallen by the wayside.
It will take more than a few thousand signatures to slow the development of new AI technologies. Even OpenAI's CEO, Sam Altman, has spoken out about the implications of AI for humanity, calling for government regulation and oversight to ensure that the "ghost" in the AI "machine" benefits society rather than inadvertently harming it.
If it sounds like something out of a science fiction novel, that’s because it is:
In the film adaptation of Isaac Asimov's I, Robot, Dr. Alfred Lanning says: “Ever since the first computers, there have always been ghosts in the machine. Random segments of code that have grouped to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul.”
AI's application in marketing must not only adhere to legal benchmarks but also reflect a genuine commitment to enriching the wider community, with transparency at its core.
The Road Ahead: Embracing Empathy in the AI Era
If I were rewriting the AMA's Statement of Ethics to account for AI, I would add one more tenet, one that I believe is a cornerstone of ethical behavior: EMPATHY.
One of the critical aspects of marketing is understanding human emotions, needs, and desires. When we do not consider human emotions, we miss out on the nuances and emotional connections that human insights bring to the table. AI-generated copy without human oversight often lacks empathy or the emotional resonance that helps a brand connect with consumers. Ultimately, AI should complement human judgment and creativity, not replace it.
That said, combining AI’s analytical capabilities with human intuition and experience can produce effective and authentic marketing messages that resonate with a target audience.
As we navigate the evolving landscape of AI in marketing, we must remember that technology is a tool, not a replacement. By grounding our strategies in empathy and human understanding, we ensure that our messages remain genuine, impactful, and truly connect with those we aim to reach.
Author’s Note: I realize that other marketing organizations have their own Codes of Ethics, such as the International Association of Business Communicators (IABC) and the Public Relations Society of America (PRSA).
In the same vein, many organizations have their own recommendations for marketers, such as The Code of Ethics for Data-Driven Marketing and Advertising by the Data & Marketing Association (DMA) and the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems, which has developed ethical guidelines that include recommendations for marketing and advertising.
I did not look at these before writing this article, but upon further examination, it's clear that the principles they base their recommendations on overlap with the AMA's. We are far from creating any kind of policy or overarching code of ethics, but strides are being made daily.
Addressing the ethical implications of AI and formulating fair and transparent policies will require a collaborative effort spanning the entire spectrum of marketing professionals: from brand managers, content creators, digital marketers, and market researchers to social media managers, CX/UX designers, data analysts, and product managers, to name a few. Still, seeing a diverse community of AI users working together to reach a consensus is heartening.
Join the Conversation
How do you envision the future of AI and ethics intertwining in marketing? Share your insights, experiences, and suggestions as we collectively shape a responsible and empathetic approach to AI-driven marketing. Comment below or reach out to us directly. Let’s co-create a future where AI technology and humanity harmoniously coexist.
More about Kristina…
With a B.A. in Communication Studies and an M.Ed in Curriculum and Instruction, Kristina’s academic pursuits reflect her commitment to informed communication in education and business. A Lancaster County, Pennsylvania native, she now calls Frederick, Maryland, home, finding solace in its hiking trails and supportive business community.
A proud mother of three, she has two sons — the eldest currently serving in the U.S. Air Force, the youngest, a sophomore at PSU — and a daughter in 9th grade who is also a talented artist.
As the founder of No Fear Marketing + Media, Kristina leads a small but vibrant community of freelance writers and marketers who provide branding and content creation services to small businesses.