Of all the people in tech whom I admire, Charlene Li tops the list. I don’t say that gratuitously either. She is a tech pioneer, bona fide thought leader, bestselling author, and a catalyst for change.
On the ethical use of AI, her latest newsletter, “Ethical AI: A Roadmap for Usage,” couldn’t be timelier.
As AI integration accelerates across industries, too many companies are either flying blind or stuck in the mud, paralyzed by risk or rushing ahead without clear ethical boundaries.
Li offers a middle path: structured, pragmatic, and values-aligned. Her six-quarter planning cycle, “Goldilocks governance” model, and context-aware transparency practices don’t just sound good—they work in the real world.
Here’s why AI ethics matter today, according to Li:
“Typically, when we discuss AI ethics, we’re talking about what big tech companies should do, or what regulations need to exist in the space. But today, I'm talking about something much more personal and immediate: how do you make ethical decisions when using AI every day?
“This matters because AI will reflect our values whether we're intentional about it or not. And given how much ethical complexity AI raises, it’s essential to be deliberate about which values show up.” ~ Charlene Li
She outlines three steps in what she calls the “Values-in-Action Model”:
1. List your values. Put both your personal values and your company’s values on the table.
2. Map your AI touchpoints against those values. Where do you use AI most frequently? What are your common workflows? How could you insert your values into each of these?
3. Identify the tensions. Note any trade-offs you’re making between utility and your values, then identify actions to better align with what matters most to you.
Li then lists her core values (openness, curiosity, integrity, and humility) and shows how to put the roadmap into practice in an organization:
➡️ This week: Add an AI ethics check to your project templates. Think of it as "ethics by design"—just like you might have security by design.
➡️ This month: If you don't have responsible and ethical AI use guidelines, write them. And if you do have them, revise them to center on your values rather than just technical requirements. Most organizations I work with either don't have AI guidelines or have ones that are too technical for people to actually use.
➡️ Next time you use AI: Pause and ask, "How does my value of [X] show up here?" You don't have to do this every time, but try it once and see how it feels.
As someone immersed in the intersection of AI, marketing, and ethics, I believe this roadmap deserves serious attention. Here’s why:
1. It’s Operational, Not Aspirational
Many ethical AI discussions spin their wheels in theory. Li provides a concrete timeline: 18 months, with quarterly check-ins. It’s long enough to think strategically, short enough to stay agile.
2. Governance Isn’t the Enemy of Innovation
Her “Goldilocks” principle strikes the right balance—neither too lax nor overly burdensome. You don’t need a 12-person AI oversight committee to move forward. You need just enough structure to support velocity and integrity.
3. Transparency Must Be Smart
Li’s stance on transparency is refreshingly nuanced: full disclosure isn’t always ideal. Internal alignment is crucial, but public disclosure should be strategic, not performative. Trust is built through relevance and context, not overexposure.
4. Values Are the Filter
Li argues that all decisions—what to automate, what to disclose, where to deploy AI—must run through your organization’s values. That’s how you prevent mission drift. That’s how you lead.
Listen to my interview with Charlene Li, recorded in April…
Trust, Leadership, and the Future of AI Marketing
In the rapidly evolving world of artificial intelligence, few voices resonate as clearly and with as much foresight as Charlene Li. A New York Times bestselling author and founder of Quantum Networks Group, Li has long been recognized for her expertise in leadership, technology, and transformation.
Why Li’s Roadmap Resonates With Me
At the AI Technostress Institute, I aim to help organizations navigate the human cost of AI adoption.
One of the top stressors? A lack of clarity about how and where to use AI, who governs it, and how to evaluate its use. Li’s roadmap fills that void. It makes ethics actionable, not aspirational.
As I’ve often said: Ethics isn’t about compliance—it’s about competitive advantage. This roadmap turns that principle into a practical plan.
Li asks this question, which I reiterate: What are your core values, and how do they show up in your AI use?
Leave a comment. I'd love to hear your thoughts.
Warm regards,
Paul Chaney, Publisher
AI Workplace Ethics & Wellness
PS: Next week’s issue examines the hidden toll of corporate AI adoption and why leaders have an ethical duty to empower, not abandon, employees.
AI Wellness is HR’s Responsibility
As organizations adopt AI, HR teams are on the front lines of ensuring technology serves people, not the other way around.
Download this FREE AI + WELLNESS STRATEGY TOOLKIT and jumpstart your HR team’s efforts to care for people amid AI transformation.
I've seen too many companies either create AI ethics committees that meet quarterly and accomplish nothing, or rush ahead with a "we'll figure out the ethics later" mentality. Charlene Li's middle path feels like what actually works in practice. I suspect many companies think they know their values until AI forces them to really examine what they prioritize. Thank you, Paul.
Identifying values, then decision points, then tensions between the two sounds like a general process for incorporating values-based ethics in the workplace, not one specific to any technology, AI or otherwise. This is an interesting post with an interesting perspective. In the end, though, my takeaway is that the issue is less about ethical AI specifically and more that we need to get better at making space for, and operationalizing, ethics in the workplace in general.