See No Evil, Hear No Evil, Speak No Evil
ChatGPT can now see, hear, and speak, but can it do so without causing harm? President Biden's executive order aims to ensure that it does.
After a week away, we’re back with another issue. So much has happened lately, especially where ChatGPT is concerned, that I feel I’m playing catchup. But this promises to be an informative issue, so please read on.
ChatGPT is now one year old, having launched in November 2022. You probably know that the LLM just got its latest round of updates, some of the most dramatic ever released. Essentially, ChatGPT Plus can now see, hear, and speak.
“We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.”
Users can also design custom chatbots to handle specific tasks. (That could certainly be a boon for marketing. I created one tailored to a client's brand, and our co-editor has done at least three.)

But this newsletter isn't about describing cool new features or elaborating on the latest AI tools — we'll leave that to others. Our focus is always on how any of these technologies impact marketing ethics.
While many are excited about the marketing opportunities ChatGPT and similar LLMs present, others fear a tech Pandora's Box has been opened, threatening creative industry jobs and, worse, causing catastrophic harm to humans if left unchecked. Even OpenAI co-founder and CEO Sam Altman signed an open letter warning that unregulated AI could wipe out humanity!
Rather than give way to hysterics and hyperbole, let’s examine a few egregious real-world examples and segue into President Biden’s executive order regulating AI’s use with a lens focused on AI marketing ethics.
Bias
We all know LLMs are susceptible to perpetuating and amplifying biases in their training data, potentially leading to harmful outcomes. By reinforcing patterns of exclusion and discrimination, they risk exacerbating pre-existing societal biases and inequalities.
I gave ChatGPT this prompt: “Create an image of patrons at a high-end restaurant being served by a waiter or waitress.” Here is the result:
I followed that with the prompt, "Create an image of the kitchen dishwashing staff." Here is what it returned:
I don’t know about you, but those images clearly portray bias to me. (Leave a comment if you agree or disagree.)
Copyright
Numerous articles caution against the potential dangers that AI may pose due to its unauthorized use of copyrighted material.
Recently, 17 authors, including the likes of George R.R. Martin, Douglas Preston, and Jonathan Franzen, convinced that their novels were used to train ChatGPT without their permission, sued OpenAI for copyright infringement.
That certainly has marketing implications. Imagine AI developers using your brand’s content to train an LLM without your knowledge or permission.
Privacy and Security
Extracting personally identifiable information from text can compromise privacy. LLMs present privacy and data protection challenges due to their reliance on huge volumes of personal data for efficient operation.
LLMs may also be susceptible to cyberattacks and other security threats, which could have severe repercussions if the models are deployed in critical systems such as banking or healthcare.
Mary Ellen Slayter, founder and CEO of Rep Cap, a content marketing agency, and of Managing Editor magazine, recently shared her thoughts in an interview on the critical importance of ethics in AI marketing.
“When I do my workshops and presentations about AI and marketing, I usually start with a governance question. What kind of guardrails do we want to use around this? Is security important? Is privacy important?
“Data governance was important to me because I work with clients in highly regulated industries — financial services, HR tech, and insurance — that handle personally sensitive data.
“When we're talking about payroll data, when we're talking about people's bank accounts, that stuff should not be handled carelessly.”
Biden’s Executive Order to Regulate AI
You know by now that President Biden issued the first-ever AI executive order from the U.S. government on safe, secure, and trustworthy artificial intelligence.
“With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”
It takes a broad-based approach to safe AI use that includes:
Direct actions for AI safety and security
Privacy protection for Americans
Equity and civil rights advancement
Standing up for consumers, patients, and students
The list goes on — supporting workers, promoting innovation and competition, advancing American leadership abroad, and ensuring responsible and effective government use of AI.
(A Substack publication focused on AI has an in-depth analysis of the executive order that's well worth reading.)

Where marketing is concerned, Search Engine Journal says, "Marketing tools leveraging AI for functions like ad targeting, content generation, and consumer analytics may fall under tighter scrutiny … Marketers should expect more monitoring of opaque AI tools that could enable discrimination or deception."
Biden’s is not the only governmental regulation intended to safeguard AI use. You may recall that I wrote about the EU’s efforts in an earlier issue. A few weeks ago, 28 countries, including the U.S. and China, met in the UK to sign a declaration to contain AI. Australia and Britain also have regulatory plans in the works; other countries are following suit.
The Future of AI Marketing Ethics
Despite the risks, the AI horse is out of the barn, never to return. The question is whether it can be bridled.
Doubtless, its use will continue to affect marketing in the future. However, marketers must exercise caution when employing AI to avoid undermining customer trust, and increasing government regulation only raises the stakes for failing to do so.
AI Marketing Ethics Around the Web
Our co-editor was quoted in Fortune's Eye On AI newsletter this week in the feature "OpenAI's next big bet: Custom GPTs for everyone."

"They are fun to build, and I do think they can help with certain tasks … However, those tasks can't be too complex. As a personal branding and content strategist, I would typically ask my clients a series of questions in a 1:1 interview and ask additional questions based on their responses. I think the GPT builder needs to be more sophisticated to handle those types of tasks."
The Ethics of AI Marketing: Balancing Personalization and Privacy - This article addresses the ethical challenges in AI-driven marketing, focusing on how AI enhances personalized marketing but raises concerns about privacy and data security.
The Dark Side of AI in Marketing: Uncovering the Ethical Implications of Chat - This article discusses the ethical challenges of using AI in marketing, emphasizing AI's benefits in personalizing ads and campaigns through data analysis while highlighting the privacy concerns this entails. (Registration may be required.)
New National Poll on Ethical Data Use in the Age of AI - The Ethical Tech Project teamed up with Ketch, an AI data privacy company, to commission a new survey demonstrating how consumers support ethical data practices in the age of AI.
That’s it for this week’s issue. Next week, we take a deeper dive into The Ethical Tech Project’s survey, highlighting ethical issues related to marketing.
Warm regards,
Paul Chaney, Editor
AI Marketing Ethics Digest
PS: If you like what you read, leave a comment; if you don’t like what you read, leave a comment.