5 Comments
Bette A. Ludwig, PhD 🌱

This is the problem with so many organizations. They recognize an issue or acknowledge it could become one, but when it comes to actually taking action, they don’t. It’s the difference between being reactive and proactive, and too many institutions lean toward the former.

I spent a couple of decades in higher ed, and institutions there were mostly reactionary, making changes only when they were forced to. Often, they didn't change at all. I think we're going to see the same pattern with AI implementation. Until organizations are forced to address issues, whether security updates, usage policies, or something else, most will react rather than take the lead.

David ☕

Microsoft dismissed its entire AI ethics team. You can see it clearly in how they use AI at LinkedIn.

If they're leading the way, I despair at how other companies will address ethics and AI.

Neela 🌶️

Aite, so I have some thoughts. Ready?

The three-faction divide makes sense to me: innovation hawks charging ahead, safety conservatives pumping the brakes, and the middle-grounders trying to walk the tightrope between them.

What jumps out is the gap between awareness and action. Nearly 90% of executives worry about AI security, yet barely a quarter have solid risk frameworks.

The economics driving this deserve more attention. With most CEOs expecting transformation by 2025, market pressure is creating a dam-break scenario where everyone's rushing forward regardless of readiness.

I wonder how much of this gap is genuine uncertainty versus convenient cover for moving fast with plausible deniability when things go wrong.

The most interesting stat might be that 80% blame unclear regulations - a case of executives waiting for government guardrails rather than setting industry standards themselves.

Whatcha think, Paul?

Hope you are having a good Wednesday!

Paul Chaney

First, "Aite." You mean, as in "ah-ite"? If so, I can tell you've been hanging around Mack too long. :-) That's a good old Southernism if ever I heard one.

Now that I got that out of the way (ahem). It does follow the bell curve, does it not? Early adopters, the majority in the middle, and the laggards. I see it (as I do most things) as a marketing opportunity. That's one reason I keep pitching my ethics training.

Regarding "genuine uncertainty" versus "convenient cover," the cynic in me says more likely the latter. And it's merely an excuse to throw the governance responsibility onto the government(s).

Given the slow growth of my newsletter -- and perhaps this is my excuse -- ethics isn't on the front burner. People will focus on it when they're forced to. Until then, damn the ethical/responsible-use torpedoes -- full speed ahead with innovation.

Neela 🌶️

We use 'aite' a lot in the Caribbean, but yeah, that Mack is a bad, bad influence.

The bell curve analogy fits here. I'd add that unlike typical tech adoption cycles, AI's potential harms scale differently. When the laggards finally move on ethics, they'll be playing catch-up with systems already deeply embedded in their operations.

Now that we're bantering some more, I have a silly idea.

Maybe the marketing angle isn't selling 'ethics' directly but positioning responsible AI as a competitive advantage? Whatcha think?

Also, remember you're solving tomorrow's urgent problem before most have recognized it as today's important one. Hence the slow newsletter growth.
