By Sarah Nicastro, Creator, Future of Field Service
In my opinion, we all share a serious responsibility right now to help strike an appropriate, ethical, and delicate balance between AI advancement and the protection of what is uniquely human. Recently, I invited Faisal Hoque onto the podcast for an insightful discussion about what striking that delicate balance will take.
Hoque is a serial entrepreneur, business strategist, technology innovator, and best-selling author whose insights have been featured in CNN, Fast Company, Forbes, Harvard Business Review, and Yahoo. He’s held corporate leadership roles at companies like GE, Pitney Bowes, and Dun & Bradstreet and he has built multiple companies focused on innovation and transformation. His latest book, Transcend, explores how organizations can harness AI's potential while protecting the human experience. As a dedicated philanthropist, Hoque donates all book proceeds to charity, and I found his uniquely balanced perspective on the AI revolution to be full of food for thought.
#1: Distinguish Enterprise AI from Consumer Applications
Hoque warns against conflating consumer AI tools with enterprise initiatives, as they serve fundamentally different purposes. Enterprise AI has been evolving for decades through automation, predictive modeling, and process optimization, while tools like ChatGPT, which began as consumer AI, represent just the visible tip of the iceberg of what's now and next.
Business leaders must understand that enterprise AI will fundamentally reshape companies and work models as we move toward general intelligence systems that can think independently. As AI becomes a true "coworker" rather than just a passive tool, it will bring both unprecedented challenges and opportunities. This demands careful evaluation of where AI can remove inefficiencies, while preserving human value.
To do justice to preserving human value, Hoque suggests we must start by defining humanity. "Humanity is about freedom - freedom to be creative, freedom to pursue something - and it's love. It's love for your craft, love for your family, society, whatever. Love is driven by passion, because that's how you become fully fulfilled as a human being," he says. Keeping these definitions in mind is how we set the stage to harness AI's potential without risking what's special about the human experience.
#2: Approach an AI-Centric Future with Neutrality
One aspect of our conversation that has stuck with me is the power of neutrality in approaching an AI-centric future. I pointed out that, on one hand, there are individuals and organizations overly eager to go all-in on AI, hyperfocused on how it can cut costs and maximize profits; the risk here is being driven by greed.
On the other hand, there are leaders and businesses who hesitate to embrace a technology that is undoubtedly changing the way we work forever. The risk here is not only falling behind, but quickly becoming irrelevant. Perhaps the healthiest mindset is a more neutral one: welcoming and even being excited about what AI can do and how it will evolve businesses and work, while keeping a keen eye on where caution needs to be applied and where the greatest risk to humanity lies.
Hoque agreed, pointing out how Buddhist philosophy can be applied here. "You have to develop this mentality of devotion and detachment in the sense that you have to be devoted to things that actually are helpful, that's regenerative in the sense that it's regenerating something that's helpful to humanity. You have to detach yourself from greed and from things that could be harmful, and also from fear," he says.
He goes on to share a framework explored in Transcend, called "Open and Care," that provides a balanced approach to AI adoption. At its core, it promotes being radically open to possibilities while deeply caring about humanity, and it helps organizations identify opportunities while remaining mindful of risks and ethical considerations. Hoque reinforces that implementation should focus on augmenting human capabilities rather than wholesale replacement; the goal is transcending current limitations while protecting what makes us uniquely human.
“This divergent framework, Open and Care, is about being radically open to possibilities because there's so much good we can do with AI. But then, also, you have to be catastrophically focused on risk, and you have to care about humanity deeply if you want to maintain some level of balance,” says Hoque.
#3: Consider Reverse Innovation Risk
Hoque shared some thoughts around what we stand to lose if a balance isn’t struck between AI innovation and humanity. He spoke of the concept of "reverse innovation," describing how some technological advances can actually reduce human capabilities and critical thinking skills over time.
Business leaders must keep this in mind and evaluate whether automating certain processes might erode important foundational knowledge and skills their teams need. This requires maintaining core competencies even while leveraging advanced tools. I shared how this reminded me of our experience when our son was diagnosed with Type 1 diabetes: the doctors insisted we learn manual calculations and care before relying on automated systems.
Careful consideration must be given to which activities truly benefit from automation versus which ones contribute to skill development and engagement.
#4: Prioritize Regenerative Leadership
So much of how AI's impact will unfold in the coming years depends on how leaders approach it, and how important they consider it to strike a balance. Hoque speaks about regenerative leadership, which focuses on creating sustainable systems that help people reach their full potential rather than defaulting to automation.
Leaders must help employees develop new skills and capabilities as technology evolves, rather than simply reducing headcount. This approach views AI as a way to expand human potential and organizational capacity rather than just cut costs, and it centers the goal on multiplying capabilities, not eliminating human contribution. "When I talk about regenerative leadership, I really mean that you have to be able to create an ecosystem, just like nature does, that regenerates resources. You have to do stuff as a leader that allows the resources to be regenerated so that they can live up to their full potential. If you're going to introduce automation, you need to help people to regenerate their next level of contribution and skill set," says Hoque.
Regenerative leadership relies heavily on empathy. "Greed is one risk factor, but a lack of empathy is another. If we have no empathy, then we don't really care about humanity. Empathy plays a huge role in terms of how you think about AI, how you design AI, and how you deploy and execute," says Hoque.

The successful integration of AI requires balancing technological capability with human empathy and mindful leadership; leaders must remain focused on how AI deployment impacts their people and organizational culture, not just efficiency metrics. This requires maintaining strong human connections and understanding while leveraging AI's analytical power. Organizations should evaluate AI initiatives through the lens of both business value and human experience. The key is finding ways to advance technology while strengthening rather than diminishing human relationships and purpose. "Think about a knife - you can use it in the kitchen, or you can use it to harm somebody. AI isn't any different, except it's a million zillion times more powerful than a knife. It's up to you how you use it," cautions Hoque.