The Future of Marketing Ethics in an AI-driven World: Interview with Geoff Livingston
Exploring AI marketing expert Geoff Livingston's vision of how artificial intelligence is reshaping marketing strategies, ethics, and the 'sloppy' road ahead.
Geoff Livingston is the principal analyst and co-founder at CognitivePath, a marketing AI research and advisory firm. He helps marketing organizations and associations adapt AI initiatives through research reports, workshops, and advisory services.
Geoff has over two decades of experience in digital marketing and has advised organizations on developing innovative strategies and deploying technologies that deliver meaningful impact for brands, individuals, and the global community.
Here is a summary of the main points Geoff discussed in the interview, beginning with the impact of AI on marketing and advertising agencies.
The Challenge for Agencies with AI
Geoff articulated a vision of the future where traditional agency models face significant upheaval due to AI. His perspective that "one of the first areas it's going to get hit really hard...is agencies" speaks to a broader trend of automation and optimization in industries reliant on rote tasks.
He suggests that the value agencies provide is being reevaluated in light of AI's capabilities, forecasting a shift towards optimization that may render traditional models obsolete. His prediction invites reflection on the nature of value in marketing services and how it must adapt to survive technological shifts.
Geoff’s ethical concerns about agencies in the context of AI extend beyond the practical into the foundational. He sees AI as a catalyst that will "hold them to the fire," forcing a reckoning with longstanding issues of accountability and transparency.
It’s a viewpoint that sheds light on the complex relationship between technological innovation and ethical business practices, suggesting that AI could serve as a mirror reflecting the industry's moral dilemmas back at itself.
His prediction of a "major collapse of some agencies" within five years is a stark forecast that signals a profound marketing shift. He argues that AI's rapid content generation and optimization capacity will expose "black holes of agency hours without any results."
It’s not just about agencies' efficiency but their fundamental value proposition in an age where AI can replicate many of their services at a fraction of the cost and time.
Accountability and Ethics in AI Marketing
Geoff’s emphasis on accountability reveals a deep concern for the ethical deployment of AI in marketing. By highlighting the perennial ethical issue of "What are you doing? Where's my money going?" he points to a broader issue of trust in the digital age. The advent of AI, with its potential for opacity, exacerbates these concerns, demanding new standards of transparency and ethical conduct.
Geoff’s apprehension about the potential for AI to "annoy their customer base and perhaps tarnish their brand" underscores the delicate balance businesses must strike in leveraging AI.
The conversation about permission and transparency as central ethical concerns illuminates the foundational importance of consent in the digital economy. His perspective that "it's permission" serves as a clarion call for respecting user agency in an era of ubiquitous data collection and processing.
Future Intersection of Ethics and AI in Marketing
Geoff views the road ahead as "sloppy," suggesting a period of uncertainty and adjustment as industries grapple with integrating AI ethically. He is cautiously optimistic, noting that marketing executives' slower adoption of AI reflects a broader tension between innovation and responsibility.
Disclosing AI's Role in Content Creation
Geoff’s differentiation between automated content and human oversight introduces an important ethical distinction in AI-generated content. He insists on the importance of human checks as a marker of credibility and accountability, highlighting a critical ethical practice for maintaining trust in an increasingly automated media landscape.
Self-regulation and Ethics in the AI and Marketing Industry
Geoff’s proposal for an industry standard or association to oversee ethical AI use in marketing suggests a path forward through collective action and self-regulation. His vision of a "Hippocratic Oath" for marketers is both a call to ethical responsibility and a recognition of the need for industry-wide standards in the face of rapid technological change.
Geoff’s insights throughout the interview offer a nuanced view of the challenges and opportunities at the intersection of AI, marketing, and ethics. His reflections reveal an industry in flux, where traditional models are being reevaluated in light of new technologies, and ethical practices are being foregrounded as essential to sustainable innovation.
The conversation highlights the need for a proactive approach to integrating AI into marketing practices, one that prioritizes ethical considerations and seeks to maintain consumer trust and respect.
Geoff Livingston Interview
Q: What impact will AI have on marketing agencies and why?
GL: AI is going to hit agencies very hard once it really starts to move because their tasks are very rote. Nobody wants to do them — that's why you hire an agency.
Agencies tend to work off of large retainers and project fees, which are often unaccountable in terms of actual results. With AI, people will want to know where the money is going and what data establishes a baseline level of performance.
AI can already write reasonably well, so it can provide multiple iterations of headlines or content very quickly in a way no agency can manually. Either agency performance and pricing models will tighten significantly, or there will be a major collapse of agencies in the next five years as AI optimization exposes weak or repetitive work.
Q: What are the key ethical issues that need to be considered when using AI for marketing?
GL: It gets back to permission and having a value exchange with the customer. I'm okay with a company using my data if I expressly give them permission and if I feel the deal is worth it.
But if there is no value exchange and a company is just extrapolating insights from customer data to target ads or content, that feels ethically problematic. Most companies tend to do this by default with marketing data.
“It gets back to permission and having a value exchange with the customer. I'm okay with a company using my data if I expressly give them permission and if I feel the deal is worth it.”
Q: How can AI help address some of the existing issues around accountability and results that have plagued marketing agencies?
GL: AI will help CMOs hold their agency contractors more accountable. There will be more visibility into what is being created and how fast, along with some level of measured performance.
Agencies can no longer spend eight hours on something but not deliver results or reuse the same ideas over and over. The black holes of agency hours without clear outcomes will go away as AI optimization and measurement take hold.
Q: Should marketers disclose when they use AI to assist with creating marketing content?
GL: In five years, everything will be cyber content with some level of AI involved. From a marketing content standpoint like websites, blogs, etc., I don't think a special disclosure is needed because AI augmentation will be assumed.
However, for journalism where credibility is paramount, disclosing the use of AI writers is crucial. For example, The Washington Post discloses that humans review all AI-generated content even though AI helps surface and shape initial story drafts.
Q: Which AI/generative platforms seem most ethical in their development approach so far?
GL: Anthropic with Claude seems thoughtful in wanting to benefit humanity with narrow AI. Adobe has also been fairly transparent about data sourcing and protecting creator rights with image generation. I don't trust OpenAI at all — they constantly overpromise and underdeliver, and everything seems like a PR stunt to get attention, like the firing and rehiring of their CEO.
Q: Does the government need to regulate marketing AI ethics or can the industry self-regulate?
GL: I don't have confidence that the fractured political climate can deliver meaningful long-term regulations. However, procurement policies requiring things like the NIST privacy framework for vendors selling to government agencies will force platforms to adhere to solid security and privacy standards for government use.
But for broader consumer protections, U.S. regulations will likely be much more limited than what the EU enacts, which most large tech companies will also need to follow due to scale. So whether officially regulated or not in the U.S., big tech will likely shift toward EU-style consumer privacy preferences more broadly.
Q: What role should a marketing AI ethics manifesto or association play?
GL: I'd love to see an objective, non-profit marketing AI association focused on creating standards, promoting education, and requiring members to uphold ethical principles.
It needs to be done for the good of the industry with oversight from a board of respected practitioners rather than led by an influencer trying to boost their own business interests. This could raise awareness, help develop best practices, and discourage blatant misuse of the technology through peer accountability.
“I'd love to see an objective, non-profit marketing AI association focused on creating standards, promoting education, and requiring members to uphold ethical principles.”
Q: Where do you see the intersection of AI ethics and marketing in five years?
GL: Realistically, there will likely be a messy transition period over the next several years as a broad understanding evolves about appropriate versus inappropriate uses of AI.
However, on the positive side, brands seem to be embracing these technologies thoughtfully to try to avoid crises rather than rushing in haphazardly, which is encouraging from a social benefit perspective.
Oversight will ramp up over time whether through formal regulation or industry norms. So while the path forward will be bumpy, I'm hopeful we'll land in a reasonable place.
Q: Who bears more responsibility — the AI developer creating the platform or the marketer using it?
GL: The primary responsibility lies with the AI developer creating the platform and the governance process they establish. No one sets out to consciously bias these systems.
However, without the proper controls and testing procedures in place, it is easy for issues to arise unintentionally that can have outsized impacts. The development process and internal standards have to be thoughtful from the beginning for any technology built at scale.
Q: You are based in Washington DC — what is your perspective on potential AI regulations coming at the federal level?
GL: The fractured political climate makes meaningful bipartisan regulations unlikely. The Biden administration's executive order was a reasonable first step but could easily be rescinded. I am hopeful the EU takes the lead globally given the inability of individual countries to regulate big tech effectively alone.
Large platform companies will likely have to align broadly to EU consumer standards and restrictions given the size of the common EU market. So U.S. consumers and businesses will benefit from those protections and oversights even if official US regulations continue to stall.
What’s your opinion?
If you agree with what Geoff said, leave a comment; if you don’t agree, leave a comment. Either way, leave a comment. We want to hear what you have to say.