One-Third of the C-Suite Prioritizes AI Innovation Over Responsible Use
A recently published survey of over 2,300 C-suite members worldwide reveals a serious AI responsibility gap
A recent article by fintech expert Rich Turrin, “One-Third of the C-Suite Prioritizes AI,” called attention to the results of an NTT DATA survey of more than 2,300 C-suite members worldwide on their views about responsible AI use.
The survey report, “The AI Responsibility Gap: Why Leadership Is the Missing Link,” reveals how eager leaders are to harness AI’s benefits and how they struggle to balance innovation with responsibility. (Download the full survey report (PDF) here.)
Below, we summarize the article’s insights, explore why responsible AI is now a C-suite priority, outline practical steps for integrating AI ethically and effectively, and discuss the ethical considerations and risks involved.
Key Insights from the Survey
Boardroom Divide—Innovation vs. Safety: The C-suite is split on how to approach AI. Roughly one-third of executives prioritize rapid AI innovation over caution; another third insist that responsibility and safety come first; and the remaining third strive to balance both. Despite these differing views, there is consensus that AI will be transformative. The debate is over how to pursue it responsibly.
Racing for AI’s Benefits: There is a clear urgency to adopt AI, driven by its vast promised upside. 97% of CEOs believe generative AI will have a material impact on their operations, and 70% expect significant transformation by 2025. Many executives view AI as a game-changer for efficiency and data-driven decision-making.
The AI Responsibility Gap: The report highlights a critical disconnect: the breakneck speed of AI innovation is outpacing governance frameworks, leaving a significant gap in responsibility. More than 60% of C-suite leaders describe this gap as “significant,” and many organizations struggle to implement clear AI governance policies.
Security and Sustainability Concerns: AI adoption is raising security and sustainability alarms. 89% of C-suite executives are very concerned about AI security risks, yet only 24% believe their organization has a strong framework to manage AI risks effectively. Moreover, three in four leaders worry that AI ambitions conflict with sustainability goals, prompting many to seek lower-energy AI solutions.
Challenges in Workforce Readiness: AI ethics training and governance are major weak points in most organizations. Nearly 50% of executives say employee training on ethical AI use is a top concern, and 44% cite a shortage of skilled personnel capable of managing AI responsibly. Without closing these workforce gaps, companies risk misusing AI and failing to meet ethical standards.
“AI’s trajectory is clear—its impact will only grow. But without decisive leadership, we risk a future where innovation outpaces responsibility, creating security gaps, ethical blind spots, and missed opportunities.
“The business community must act now. By embedding responsibility into AI’s foundation—through design, governance, workforce readiness, and ethical frameworks—we unlock AI’s full potential while ensuring it serves businesses, employees, and society at large equally.” ~ Abhijit Dubey, CEO, NTT DATA, Inc.
Responsible AI in Leadership: Why It Matters More Than Ever
C-suite executives are learning that adopting AI isn’t just a tech upgrade – it’s a leadership responsibility. Embracing AI at the top also means setting the tone for how ethically and responsibly AI is used across the organization. Here’s why responsible AI use has become a leadership imperative:
Bias and Fairness: AI systems can inadvertently amplify biases in their training data, leading to unfair or discriminatory outcomes. Nearly half of CEOs are worried about AI bias and decision accuracy. Preventing AI bias requires rigorous testing, diverse training data, and continuous monitoring.
Transparency and Explainability: If AI influences critical decisions (e.g., hiring, credit scoring, medical diagnostics), leaders must ensure those decisions are explainable and transparent. Although 78% of executives report maintaining robust documentation for AI models, gaps in public understanding and trust remain.
Workforce Impact and Job Displacement: AI reshapes jobs, and leaders must proactively retrain employees. Instead of merely automating roles, companies should upskill workers for AI collaboration, ensuring employees are participants in the AI transformation rather than casualties of it.
Regulatory Compliance and Ethics: As governments enact AI laws, organizations must stay ahead of regulatory changes. More than 80% of executives say unclear government regulations hinder AI investments, making it crucial for leadership to anticipate and comply with evolving legal frameworks.
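The "rigorous testing and continuous monitoring" the bias bullet calls for can be made concrete with even very simple checks. The sketch below computes per-group selection rates and a disparate-impact ratio; the group labels, sample data, and the 80% ("four-fifths") review threshold are illustrative assumptions, not figures or methods from the NTT DATA survey.

```python
# Minimal sketch of a demographic-parity check a team might run as part
# of continuous bias monitoring. All group names, data, and thresholds
# here are hypothetical examples.

def selection_rates(outcomes):
    """Return the positive-outcome rate for each group.

    `outcomes` maps a group label to a list of 0/1 decisions
    (e.g., 1 = offer shown, 0 = not shown).
    """
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Return the lowest group selection rate divided by the highest.

    A ratio below 0.8 is a common rule-of-thumb flag for human review,
    not a legal determination of bias.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    decisions = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% selected
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% selected
    }
    ratio = disparate_impact_ratio(decisions)
    print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
    if ratio < 0.8:
        print("Flag for review: selection rates differ substantially across groups.")
```

A check like this is only a starting point; in practice, teams layer in additional fairness metrics, confidence intervals for small groups, and scheduled re-runs as models and data drift.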
The NTT DATA Survey by the Numbers
A balanced sample of 2,307 GenAI decision-makers (95%) and influencers (5%)
Coverage spans 34 countries in five regions
12 industry sectors
74% of respondents were from large enterprises with more than 10,000 employees
68% of participants were from the C-suite; 27% were at Vice President, Head of, or Director level, and 5% were senior managers or specialists
42% of participants were in IT roles; 58% in non-IT roles
The Risks of Ignoring Ethical AI Principles
Neglecting AI ethics isn’t a trivial lapse — it’s a high-stakes risk. The fallout can be severe when companies charge ahead with AI without proper ethical guardrails. C-suite leaders who overlook these principles expose their organizations to a variety of dangers:
Reputational Damage: AI failures can quickly become PR nightmares. A biased hiring algorithm or a faulty AI chatbot can spark public backlash, lawsuits, and loss of customer trust.
Regulatory Fines and Legal Trouble: Companies deploying AI irresponsibly may face significant fines and compliance penalties. For example, violating privacy laws or failing to explain AI decisions could attract regulatory scrutiny.
Public Backlash and Lost Business: If consumers perceive a company using AI recklessly, they may boycott services, file lawsuits, or pressure regulators to intervene.
Ethical Lapses and Trust Erosion: If leadership ignores AI ethics, trust erodes internally and externally. Employees may blow the whistle on unethical AI practices, and customers may lose faith in the brand’s integrity.
Steps for the C-Suite to Integrate AI Responsibly
The NTT DATA report provides a four-step roadmap to help organizations close the AI responsibility gap:
1. Embrace a “Responsible by Design” Philosophy
Organizations must build AI responsibly from the ground up. This includes embedding ethics, fairness, and inclusivity into AI systems from the start rather than fixing issues retroactively.
2. Establish Multilevel AI Governance
Companies must go beyond legal compliance by setting up internal AI ethics committees, risk assessment frameworks, and transparent governance structures.
3. Upskill and Train the Workforce
Investing in AI literacy and ethics training ensures employees use AI safely and effectively. Although 49% of executives cite employee education as a top priority, many organizations lack formal training programs.
4. Collaborate on Global AI Standards
AI transcends borders, requiring cross-industry and governmental partnerships to develop consistent AI safety and ethics standards.
The Path Forward
The “AI Responsibility Gap” is a defining challenge of this era. While AI’s benefits are undeniable, failing to implement ethical AI practices could lead to regulatory crackdowns, public distrust, and business setbacks.
C-suite leaders who balance innovation and accountability will be best positioned to succeed in the AI-driven future.
The time to close the AI responsibility gap is now.
Ensure your marketing team uses AI safely and responsibly. This training workshop can help.
AI Marketing Ethics Training Workshop
Unlock Ethical Marketing with AI: Half-Day Workshop (Zoom/In-Person)
I hope you enjoyed this AI Marketing Ethics Digest issue. If you haven’t already, please consider becoming a paid subscriber.
Your subscription supports:
Our ability to continue devoting time and effort to sharing quality content.
Our dream of becoming the newsletter of record for AI marketing ethics.
Our goal of influencing AI developers and marketers to make ethics a priority.
P.S. It’s only $80/year or $8/month. (That’s less than you spend on two lattes at Starbucks!)
Warm regards,
Paul Chaney, Publisher
AI Marketing Ethics Digest