Paul, thank you for articulating this so clearly and thoughtfully. I deeply resonate with your core point: implementing AI isn't simply about productivity or efficiency; it's fundamentally about how we take care of human well-being, psychological safety, and meaning as we navigate these rapid changes.
In my own experience, I've seen firsthand how easy it is for organizations to chase the productivity gains of AI while neglecting the real cognitive and emotional strain it creates. When we focus too narrowly on efficiency, we overlook the critical skills humans uniquely bring: clarity, reflection, judgment, empathy, and coherence. The real question isn't just how quickly we adopt new tools, but how intentionally we create environments that genuinely support people as they integrate those tools.
Your emphasis on a combined approach, merging workflow efficiency with human-centered safeguards, is exactly what we need. Real productivity isn't just about doing more, but about creating space for people to think, reflect, and thrive alongside these powerful tools.
Thank you for sharing this invaluable perspective. It's essential that we keep this conversation alive, explicitly prioritizing human coherence as we adopt AI into our organizations.
Thank you for your kind words, Roi. There is a tension between the two -- productivity vs. psychological safety -- but with the right approach, it can be relieved. That's the promise of the AI Technostress Framework (still a work in progress). Of course, it needs to be field-tested to be sure.
Very nice breakdown, Paul.
Most companies I work with hit a wall right around Stage 3. They've got AI tools talking to each other across departments, data flowing between systems, and workflows that span marketing, sales, operations, and customer success. On paper, it looks seamless.
In practice, it's a complete mess.
Take one example: Marketing's AI generates leads based on certain criteria, which automatically triggers sales outreach sequences, which feed into onboarding workflows, which impact customer success metrics. It all sounds efficient until a major client complains about inappropriate messaging. Who's responsible? Marketing says their AI was working within parameters. Sales points to the automated handoff. Customer success blames the onboarding sequence. Everyone's technically right, but nobody owns the outcome. That's the biggest issue I'm seeing.
Traditional org structures, for all their flaws, had clear escalation paths.
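To make that gap concrete, here's a minimal sketch (all names are hypothetical, not from any real stack): each automated handoff records which system acted, but no accountable human, so tracing ownership dead-ends at the tools.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    """One automated handoff in a cross-department AI workflow (illustrative only)."""
    stage: str                            # e.g. lead_gen -> outreach -> onboarding
    acting_system: str                    # the AI/automation that fired
    accountable_owner: str | None = None  # the piece most Stage 3 setups leave empty

def trace_ownership(chain: list[Handoff]) -> str | None:
    """Walk the chain backwards looking for a named human owner."""
    for handoff in reversed(chain):
        if handoff.accountable_owner:
            return handoff.accountable_owner
    return None  # every system was "within parameters," nobody owns the outcome

# The scenario above: four automated handoffs, zero named owners.
chain = [
    Handoff("lead_gen", "marketing_ai"),
    Handoff("outreach", "sales_sequencer"),
    Handoff("onboarding", "onboarding_workflow"),
    Handoff("success_metrics", "cs_dashboard"),
]

print(trace_ownership(chain))  # -> None: the escalation path has no endpoint
```

Put an accountable_owner on every handoff and the escalation path comes back; leave it out and you get exactly the finger-pointing I described.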
Hope you are having a good Wednesday!
So, are you saying what is supposed to be a seamless funnel turns into a Rube Goldberg machine? Or is it a matter of kicking the can down the road?
Given what you're seeing, is AI causing the stress, or is it confusion over how the org-chart-to-work-chart shift is supposed to function?
A bit of both -- I explained it better in the note :)
Great read! This piece ties together what too many leaders are treating as separate issues: system design and emotional health.
Thanks, William. You're right.
Paul, this is such a timely and clarifying read—thank you and Liza for naming the real friction between emerging AI systems and legacy org structures.
What I’m seeing more and more is ROI stress: not just anxiety around the technologies themselves, but the unspoken urgency to justify their existence--almost a kind of organizational panic, especially in companies that invested millions in AI without a clear strategy. Now, the pressure is on to prove those investments weren’t a mistake. That pressure rolls downhill, compounded by the emotional toll of navigating invisible expectations. (Similar to what Bette is describing in her comment.)
Appreciate the great read!
Technostress really isn’t just an IT or AI problem. It's a leadership one. When roles are unclear or shifting constantly, it’s on leaders to rebuild clarity and meaning.