The pros and cons of artificial intelligence are often debated, but rarely resolved. I’ve noticed in my travels that when the topic of AI comes up, the people in the room start to wince and grimace a bit. Questions about what AI can do, what it’s being used for, and whether it’s a threat can cause a lot of confusion. Hence, the tension.
What are those tensions? Where do they come from? And how do we rationally and intentionally manage and optimize the tensions that will ultimately arise from the introduction of AI into your life, your organizations, and the communities of interest that you serve?
Where to Look for Tension
If you’ve ever implemented a new technology in your life or your organization in the past, unless you are a technologist yourself, you’ve probably relied on technologists to guide you through the adoption process.
If you want to order take-out for dinner, DoorDash provides you with a nice, easy-to-use app for that. If you want to publish and sell a book, Amazon provides you with a marketplace complete with advertising and royalty payments. It’s not that those technologies were tension-free, but the processes and trouble-resolution approaches were generally transparent to the user… simple and predictable.
AI, on the other hand, doesn’t have a clearly defined purpose for its existence, yet at the same time it promises to change everything about our everyday lives.
From the developers’ point of view, the use cases for AI are endless. AI will allow society to tackle its most complex, intractable problems by harnessing insights from very large and disjointed data sets with the most advanced analytics, modeling, and mining tools. AI enables unprecedented efficiency and effectiveness by automating tasks, streamlining workflows, and reducing human error.
From the users’ point of view, AI is, well, a bit scary.
Tensions arise over the likelihood of bias in how machines are trained. AI can be unpredictable. IBM has even noticed that an AI can “hallucinate,” trying to give the user the answer it thinks will be acceptable, even if incorrect. AI also raises cybersecurity issues, ethics issues, and sustainability issues stemming from the energy consumption required by its massive computing power.
How do we balance all these potential tensions to drive toward an optimal outcome for a particular process or application and the people who will interact with the technology?
Well, I think that before you tackle such a challenge, it will help if you can visualize it first. See the tensions. Map them out.
Tension Mapping™
A colleague and I created such a visualization tool that we call Tension Mapping™. By mapping tensions, we look for sources of value creation by reducing or removing impedances and identifying organizational disconnects where a lack of tension suspends progress. Meaning, we want some amount of tension to make progress, but not so much that we get bogged down in strife or we are pushed off the path toward our ultimate goal.
How does it work? Let’s say I want to tension map an idea I had to introduce AI into an emergency communications call handling application. When people call 9-1-1, the operators have software systems in front of them to gather and transmit data for that incident. These folks sit in a dark room for twelve hours a day taking calls from people in their community having possibly the worst day of their lives. This is 9-1-1.
So, now imagine I’m the CTO of a company that builds these applications for emergency communication centers that are overwhelmed and understaffed, yet one thousand percent committed to answering every call within seconds and dispatching the appropriate resources with only the information they can get from a panicked caller.
To help them in that effort by putting the best available data and tools at their fingertips, I want to know what a 9-1-1 call handling process would look like if artificial intelligence were in the mix. My engineering team has worked with the AI vendors enough to understand the large language models that would underpin our applications, but they don’t trust that our product management team has yet developed a governance structure to prevent bias and test recommendation results.
The product management team believes that an AI front end could help communication centers filter and triage many of their non-emergency calls, but they don’t trust the sales team to explain the risks fully to the client, fearing sales will treat AI as just another product feature rather than a prototype of a promising concept.
This tension mapping goes on and on across our company until we can fully visualize and embrace all of the cascading influences and unintended consequences that we will inject into our clients’ processes, the way they work, and, in turn, their cultures.
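For readers who like to see the structure, here is one minimal, hypothetical sketch of what such a map might look like in code: a list of stakeholders and the scored tensions between them, flagged where tension runs hot enough to cause strife or cold enough to stall progress. The roles, concerns, scores, and threshold values below are illustrative assumptions drawn from the 9-1-1 example above, not part of any actual Tension Mapping™ tool.

```python
# Illustrative sketch only: a tension map as a small list of stakeholders
# and the scored tensions between them. Roles, concerns, scores, and
# thresholds are hypothetical, adapted from the 9-1-1 example in the text.
from dataclasses import dataclass

@dataclass
class Tension:
    source: str   # who feels the tension
    target: str   # who it is directed at
    concern: str  # what the friction is about
    level: int    # 1 = too little tension (stalled), 10 = open strife

tension_map = [
    Tension("Engineering", "Product Management",
            "no governance structure yet to prevent bias and test results", 8),
    Tension("Product Management", "Sales",
            "AI pitched as a product feature, not a prototype of a concept", 7),
    Tension("Sales", "Client",
            "risks of the AI front end not fully explained", 6),
]

# Flag the relationships worth working on: too hot (strife) or too cold (stalled).
for t in tension_map:
    if t.level >= 8:
        print(f"HIGH: {t.source} -> {t.target}: {t.concern}")
    elif t.level <= 2:
        print(f"LOW (stalled): {t.source} -> {t.target}: {t.concern}")
```

The point of the sketch is not the code itself but the act of naming each tension, scoring it, and deciding together which ones to reduce and which ones are actually pulling the work forward.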
Why endure such an exercise that will most surely open old wounds and expose vulnerabilities, both technical and personal? Because we operate today in a society that lives and breathes off one brutal truth…trust.
One of the goals of tension mapping is, of course, to get all stakeholders on the same page. Mind-share, as we might say. The language of AI is so complex and different that everyone involved likely comes to the table with a unique point of view about what it is and is not.
AI lives in a highly dimensionalized space that those sitting around the table may not even begin to comprehend.
But we humans still have the upper hand in how we adapt — not adopt — such technologies. We possess creativity that an AI, so far, cannot match. An AI model needs to process every pixel of an image of a bird to identify it as one, whereas we recognize a bird in the blink of an eye. We know love and joy in a way that a machine can’t begin to comprehend conceptually, structurally, or, yes, across any multidimensional space and time.
And we know how to manage tensions and build trust amongst our fellow human beings, as hard as it may be sometimes. We may have to map it out on a whiteboard. We may have to rely on intuition, vigilance, risk assessments, testing, compassion, and compromise. But we either roll up our sleeves, pick up a dry-erase marker, and have a plan, or we surrender to the unknown.
