When AI Is the Giant, EI Chooses the Battlefield
The giant arrives
The giant has arrived in the workplace, and its name is AI.
No one could be unaware of its arrival. It is powerful, confident, capable, fast, loud, large, limitlessly scalable, and appears invincible.
It is accompanied by a framing that says it is unavoidable and you must adopt it now or fail. If you do not get on board, you will fall behind. You will become irrelevant.
It is being forced on us. It promises efficiency, precision, automation, optimisation, consistency and competitive advantage. It is also causing widespread anxiety, with people asking, “Will I be replaced?”
The power of AI cannot be ignored. It can analyse vast amounts of information, detect patterns humans would never see, and execute tasks at a scale and speed no organisation can match. Used well, it reduces cognitive load, improves consistency, and frees people from repetitive work.
But when something hits us with such force, it can provoke actions that have not been given due consideration. Leaders respond with knee-jerk reactions, making rushed decisions and acquiring tools before they have had time to think.
Technology gets deployed with little understanding, because leaders see other organisations doing it.
It is a frenzy driven by a fear of missing out. Analysis and conclusions are dangerously handed over to algorithms because they appear objective, neutral, and “smarter” than we mere humans.
The giant has established itself through momentum. This is when we need good leadership the most.
When leaders respond to power with fear, they are fighting on the giant’s terms of speed, automation and optimisation. Humans will never win that battle.
The leadership we need does not ask whether AI should be used. It decides where, when, and how it is used.
This is the question defining the battlefield.
The illusion of invincibility
When something this big, powerful, scalable and fast comes at us, it can feel unstoppable.
It can feel like being in a pressure cooker, and when we are under pressure, the rhetoric around AI can be easy to believe.
AI is neutral, free from bias, emotion, and human error. It is data-driven, consistent, reliable and rational. It is invincible.
In comparison, humans are slow, subjective, error-prone, emotional and inconsistent. They are vulnerable.
But invincibility is an illusion.
AI does not understand context.
It does not understand consequences.
It does not understand people.
It hallucinates, generating information that sounds convincing but is incorrect, made up, or irrelevant, and presenting it confidently as fact.
It recognises patterns, not meaning.
It predicts outcomes, not intent.
It optimises for what it is measured on - nothing more.
In the words of Dr Maria Randazzo from Charles Darwin University, Australia, “AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.”
This is where the risk lies.
When leaders treat AI outputs as the definitive truth rather than as input to their own evaluation processes, they remove judgment from decision-making.
When efficiency is the primary measure of success, leaders will lose sight of what is really going on.
AI doesn’t operate in isolation. It reflects the data it is trained on, the assumptions built into it, and the values of the people who deploy it.
When bias shows up, when harm occurs, or when trust erodes, this isn’t a technology problem; it’s a leadership one.
The issue isn’t that AI is powerful. The problem is when leaders mistake power for judgment.
And by the time that happens, the battlefield has already been set.
The cost of choosing AI alone
When leaders are under pressure to perform, scale, mitigate risk, and increase efficiency and productivity, AI can seem to have all the answers. It can become a panacea promising certainty, consistency, and control. Over time, leaders begin to defer.
This is when AI stops supporting leadership and instead replaces it.
If you use AI to automate people problems, monitor employee behaviour rather than trust them, use algorithms rather than have conversations, and treat efficiency as the ultimate measure of success, you end up with a surveillance culture, digital bullying, and a loss of psychological safety.
When this happens, the fundamental nature of leadership moves from human-centric leadership to algorithmic, data-driven execution.
What is lost is leadership that puts people front and centre, with a focus on building trust and creating positive relationships.
Human-centred leaders have essential skills, including empathy, authenticity, adaptability, creativity, and emotional intelligence. When leaders lack these skills and defer to AI, it creates toxic environments with low morale, high stress, poor communication, and decreased productivity. It results in micromanagement, conflict, and a lack of psychological safety.
EI steps onto the battlefield
EI steps onto the battlefield because there is a crisis.
Leaders cannot be blind to what is happening with and to their people.
Data can indicate what is happening, but not why. The algorithms can predict outcomes, but not the consequences. Systems can enforce rules, but they cannot sense fear, hesitation, or silence.
Leaders need more.
Emotional Intelligence (EI) enables good leadership judgment and decision-making.
Leaders with EI can read context, understand impact, and recognise what data can’t. Leaders with EI ask questions and become sense-makers. They know when to slow down and when to speed up. They know to ask questions whenever they can, issue directives when they must, and always provide direction. They understand when human conversation matters more than an automated outcome.
EI steps onto the battlefield to define the role of AI, not to defeat it.
Leaders with EI can decide what to automate and what to keep in human hands. Leaders with EI ask the pointed questions. When does efficiency create risk? When does optimisation undermine trust? EI ensures that psychological safety takes priority over speed or scale.
AI can be used to approve or reject leave requests based on rules and patterns. AI calls this efficiency. EI calls it an abdication of accountability, because it leaves no discretion for employee-specific needs.
AI can optimise workload allocation with algorithms, assigning work to “maximise utilisation.” The result is unsustainable workloads, rising burnout, and employees who are not heard. EI says that the algorithm may optimise output, but it undermines trust, and that is not acceptable.
When leaders implement AI tools rapidly, without any consultation, to keep up with competitors, they sacrifice psychological safety for scale. EI says this is not acceptable.
AI doesn’t create these problems. It exposes leadership choices.
EI creates trust, safety, and accountability. AI cannot.
The slingshot moment
David’s defeat of Goliath came not from trying to be stronger, but from choosing a different approach. Facing the giant Philistine warrior, David responded not with force, but with judgment, timing and purpose. He did not engage in close combat; instead, he used his agility and accuracy to his advantage.
David defeated Goliath with a sling (often referred to as a slingshot) and a stone.
EI is the slingshot that allows leaders to pause, step back and ask the right questions before acting. It brings discernment into moments that would otherwise default to speed, scale, or automation.
The slingshot moment is when a leader chooses to intervene. They do not kill the technology, but they direct it to their advantage.
This is the moment the leader says:
· This decision needs context, not just data.
· This situation requires a conversation, not a system response.
· This may be efficient, but it isn’t right.
· This tool supports our people - it does not replace them.
The slingshot moment is when leaders stop reacting to the power of the giant and start to shape how that power is used. They provide intent rather than instruction, guardrails rather than control, and direction rather than directives.
AI remains powerful, but it is not in charge.
The win
AI will continue to change how work gets done. It is a given.
What is not a given is leaders surrendering judgment, ethics, and humanity in the process. Leaders have a choice.
It is not a battle between AI and EI. It is not a competition. The purpose of EI is to direct AI.
It decides where automation belongs and where it doesn’t.
It recognises when efficiency creates risk.
It protects trust, psychological safety, and human dignity.
The leaders who will thrive in an AI world are not those who adopt the technology the fastest, but those who use it wisely.
Because when AI is the giant, the real leadership work is choosing the battlefield.
Are you leading AI, or is it leading you?