How leaders can guide organisations through ethical AI challenges

Richard van Hooijdonk
Companies often rush to deploy AI without adequate ethical frameworks in place. How do leaders innovate responsibly without sacrificing their competitive advantage?

Organisations across every sector are racing to integrate AI into their operations, driven by competitive pressure and the promise of substantial operational gains. The speed of adoption has been remarkable: what took months to evaluate just two years ago now gets deployed in weeks, sometimes even days. Yet this urgency creates a dangerous blind spot. While technical teams focus on implementation timelines and performance metrics, the ethical implications often get relegated to future considerations or compliance checklists. Companies deploy AI systems for hiring decisions, customer interactions, and operational processes without fully grappling with the broader consequences of these choices.

AI systems without adequate ethical guardrails can perpetuate discrimination, erode privacy, manipulate consumer behaviour, and concentrate power in ways that undermine social trust. Leaders who recognise this moment as pivotal have an opportunity to shape how their organisations navigate these challenges responsibly. The following steps are intended to serve as a useful roadmap for building ethical AI practices that protect both business interests and broader social values, ensuring that rapid innovation doesn’t come at the expense of the principles that sustain healthy organisations and communities.

“We need to be sure that in a world that’s driven by algorithms, the algorithms are actually doing the right things.”

Marco Iansiti, Harvard Business School Professor

The ethics of AI

The use of AI within business raises a host of ethical concerns that need to be properly addressed.

Much of the anxiety surrounding AI in business centres on workforce disruption – more specifically, the legitimate fear that machines will render human workers obsolete. Consider, for example, that a 2024 survey by ResumeTemplates revealed that 30% of US companies have already replaced a portion of their workforce with AI tools. World Economic Forum (WEF) projections go even further, estimating that AI could displace around 85 million jobs globally. Yet the same WEF analysis points to something often overlooked in these discussions: approximately 97 million new positions are expected to emerge, roles that demand both advanced technical competencies and distinctly human capabilities like leadership, creativity, and emotional intelligence.

Doing the right thing

The workforce transformation, however substantial, forms just one piece of a larger ethical puzzle. Algorithms power AI’s capacity to streamline and optimise business operations, but they can also introduce subtle forms of discrimination that ripple through an organisation. Consider how an AI-powered resume screening system might work. The technology promises to efficiently identify qualified candidates by scanning thousands of applications against specific criteria, freeing recruiters to focus on deeper evaluation of candidates.

But when that system learns from historical hiring data – data that reflects decades of gender imbalances in finance or nursing – it inevitably begins to perpetuate those same patterns. Male candidates might, for example, receive preferential treatment for financial analyst positions, while qualified women get filtered out before human eyes ever see their credentials. “We need to be sure that in a world that’s driven by algorithms, the algorithms are actually doing the right things,” says Harvard Business School Professor Marco Iansiti. “They’re doing the legal things. And they’re doing the ethical things.”
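
To make this concrete, the short Python sketch below shows how a team might audit a screening tool’s output for exactly this pattern, using the ‘four-fifths rule’ heuristic that compares selection rates between groups. The data, names, and threshold here are purely illustrative; a real audit would use the organisation’s own screening logs and take local legal guidance into account.

```python
# Minimal sketch of a disparate impact check on screening outcomes.
# All data below is hypothetical; thresholds and group definitions
# should come from the organisation's own policy and legal advice.

def selection_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of applicants in `group` that the screener advanced."""
    outcomes = [advanced for g, advanced in decisions if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# (group, advanced_to_interview) pairs, e.g. exported from the screening tool
decisions = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

male_rate = selection_rate(decisions, "male")      # 0.75 in this toy data
female_rate = selection_rate(decisions, "female")  # 0.25 in this toy data

# The 'four-fifths rule' heuristic flags potential adverse impact when one
# group's selection rate falls below 80% of the most favoured group's rate.
ratio = female_rate / male_rate
print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: escalate for human review.")
```

Even a check this simple would surface the pattern described above before thousands of candidates pass through the system unseen.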

A big privacy problem

Privacy presents another layer of complexity that organisations often underestimate. In order to function, AI systems must collect and analyse enormous volumes of personal and professional data about employees. Each data point creates potential vulnerabilities: unauthorised access, data breaches, or misuse by bad actors both inside and outside the organisation. “We’ve got a big privacy problem as our economy becomes increasingly digital,” adds Iansiti. “And interestingly, in some ways, the privacy and the cybersecurity problems are becoming increasingly tied together because one of the big challenges with data isn’t necessarily what the company will do on purpose, but what some rogue agents might do as they get in on the company’s networks from the outside illegally and start pilfering all kinds of personal data that they might use in all kinds of nefarious ways.”

Deloitte’s 2024 survey of over 1,800 professionals found that 54% of respondents rated generative AI as posing the highest ethical risk among emerging technologies. Data privacy topped their specific concerns at 40%, followed by bias at 34% and job displacement at 22%. However, despite this widespread recognition of risk, only 27% of organisations maintained distinct ethical standards for generative AI. This gap between acknowledging ethical challenges and implementing concrete safeguards reveals how many companies remain unprepared for the responsibilities that come with AI deployment.

“When ethics become an afterthought, you’re not innovating but automating your worst biases at scale.”

Gurpreet Bajaj Singh, Master Trainer and Facilitator at Kaleidoskope

The challenges with AI governance

What prevents companies from ensuring ethical AI implementation?

While the principles of AI governance sound straightforward in theory, implementing effective oversight has proven remarkably complex in practice. Organisations quickly discover that establishing robust governance structures requires navigating a landscape filled with moving targets, conflicting standards, and unprecedented challenges that traditional management frameworks struggle to address. The fundamental difficulty stems from the fact that regulations often lag several steps behind the technology’s relentless pace of development.

AI capabilities evolve at unprecedented speed, creating a persistent gap between what’s technically possible and what’s legally or ethically defined. Policymakers and regulators find themselves perpetually catching up to innovations that have already been deployed and integrated into business operations. Organisations operating in this regulatory vacuum face heightened exposure to AI misuse, accountability gaps, and ethical dilemmas that emerge without warning or precedent. The IBM Institute for Business Value found that 80% of business leaders cite the absence of standards around AI ethics, explainability, trust, and bias as a major obstacle to generative AI adoption.

A Wild West situation

Compounding these timing issues, the global community has yet to reach a consensus on how AI should be governed. Different nations approach AI regulation through distinctly different philosophical lenses, creating a patchwork of requirements that organisations must navigate. The EU has embraced comprehensive regulation through its AI Act, establishing strict compliance requirements and oversight mechanisms. Meanwhile, the US tends toward industry self-regulation, expecting companies to develop and enforce their own standards.

These divergent approaches make it nearly impossible for multinational organisations to anchor their governance strategies to any universal standard. “We’re seeing a kind of Wild West situation with AI and regulation right now. The scale at which businesses are adopting AI technologies isn’t matched by clear guidelines to regulate algorithms and help researchers avoid the pitfalls of bias in datasets,” explains Dr. Timnit Gebru, Founder and Executive Director of The Distributed AI Research Institute.

The lack of transparency

Technical opacity adds another layer of complexity. Most AI systems operate as ‘black boxes’; their decision-making processes remain essentially incomprehensible even to their creators. When an algorithm makes a hiring decision, approves a loan, or flags a security risk, understanding exactly how it reached that conclusion often proves impossible. Think of it like trying to govern a highly capable employee who consistently delivers results but can never explain their reasoning. The lack of transparency erodes trust and makes meaningful oversight extremely difficult.
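
Even the partial transparency tools that do exist illustrate the limits of oversight. The hedged Python sketch below, using synthetic data and hypothetical feature names, applies permutation importance to an opaque model: it can reveal which inputs the model leans on overall, perhaps flagging a proxy variable worth scrutinising, but it still cannot explain how any individual hiring or lending decision was reached.

```python
# A hedged sketch of probing an opaque model with permutation importance.
# Data and feature names are synthetic and hypothetical; the probe shows
# *which* inputs matter globally, not *why* a given decision was made.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                   # synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)                  # the 'black box'

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
feature_names = ["years_experience", "test_score", "gap_in_cv", "postcode"]  # hypothetical
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name:>16}: {score:.3f}")
# A high score for a proxy feature like 'postcode' would warrant scrutiny,
# but the probe still cannot reconstruct the reasoning behind any single case.
```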

Liability questions further complicate governance efforts. When an AI system causes harm, whether through biased decisions, privacy violations, or operational failures, determining responsibility becomes a legal and ethical puzzle. Should accountability rest with the software developers who created the algorithms, the organisation that deployed them, or the managers who relied on their outputs? Current legal frameworks provide little clarity, particularly when dealing with autonomous systems that make decisions independently of direct human oversight.

The AI innovation trap

But perhaps the most challenging governance issue centres on data privacy, security, and risk management. AI systems demand enormous amounts of information to function effectively, raising fundamental questions about how personal and professional data is collected, stored, and utilised. Each dataset becomes both a valuable business asset and a potential liability, especially as cybersecurity threats grow more sophisticated. The stakes intensify when considering that data breaches involving AI systems can expose not just stored information, but also reveal patterns and insights that were never intended for external access.

Recent research illuminates how organisations are struggling with these challenges. A 2024 PwC survey of US executives found that while 73% currently use or plan to implement generative AI, only 58% have conducted even preliminary risk assessments. “Working with leaders embracing AI, I see too many fall into the ‘AI innovation trap’: charging ahead without navigating the ethical minefields beneath,” says Gurpreet Bajaj Singh, Master Trainer and Facilitator at Kaleidoskope. “When ethics become an afterthought, you’re not innovating but automating your worst biases at scale. That’s why I challenge teams: Resist deployment pressure. Pause. Interrogate yourself, teams, and AI vendors with the hard questions first. Because being first to market means nothing if you’re first to fail ethically.”

Best practices for ethical AI implementation

How to build an AI ethics framework that actually works.

Leadership accountability forms the foundation of any effective ethical AI strategy. When executives merely delegate ethics to technical teams or compliance departments, they signal that these concerns are, at best, a secondary priority; an afterthought, if you will. Effective leaders actually take personal responsibility for AI outcomes, establishing clear chains of accountability that connect boardroom decisions to algorithmic impacts. This means designating specific executives as AI ethics stewards, individuals with both the authority to influence major decisions and the responsibility to report directly to senior leadership about ethical risks and performance.

Integrating AI ethics into corporate governance structures can help ensure that AI ethics receives sustained attention rather than sporadic concern. The board and C-suite should examine ethical risk reports with the same rigour they apply to financial statements. Consider how this might work in practice: quarterly AI ethics briefings become standard agenda items, led by executives specifically tasked with responsible AI implementation. These sessions review not just what AI systems accomplished, but how they achieved those results and what unintended consequences emerged along the way.

From oversight to foresight

Building effective oversight also requires assembling diverse perspectives around the decision-making table. This could entail creating cross-functional AI governance committees consisting of engineers, ethicists, HR professionals, legal and regulatory experts, business strategy experts, and risk management specialists, who meet regularly to evaluate AI projects from multiple angles. Before deployment, they assess technical performance alongside ethical implications, as well as legal compliance alongside customer experience. After launch, they monitor real-world impacts against original intentions. The committee structure creates accountability while distributing expertise – no single person carries the burden of omniscience.

One of the most powerful yet underutilised tools for ethical AI implementation involves systematic foresight – that is, deliberately anticipating how things might go wrong before they actually do. Most organisations excel at post-incident analysis but struggle with proactive risk identification. Conducting scenario planning exercises or ‘pre-mortems’ for AI systems allows teams to map potential vulnerabilities and ethical failure points while they still have time to build in safeguards. “An algorithm could help a company achieve its financial objectives, but hurt it in the long run,” observes Walid Hejazi, a professor of economic analysis and policy at the Rotman School of Management. “Senior management needs to set the guardrails about how AI will be used. You need to ask questions like what data will be used, and for what purposes? What permissions will be obtained, and how will data be secured?”

Quantifying ethics

The final piece of effective implementation involves aligning incentive structures with ethical objectives. Organisations measure what they value, and they reward what they measure. Adding metrics for user trust, algorithmic fairness, and system transparency to traditional KPIs like revenue growth and market share sends a clear signal about priorities. Project teams evaluated and compensated based partly on ethical performance will naturally incorporate these considerations into their decision-making processes. A machine learning engineer who knows their bonus depends on reducing bias incidents approaches model development differently than one measured solely on accuracy metrics.

Building these measurement frameworks requires careful thought about what ethical performance actually looks like in specific contexts. How do you quantify fairness in hiring algorithms? What constitutes acceptable levels of transparency in recommendation systems? How do you measure user trust in ways that reflect genuine sentiment rather than superficial satisfaction? Answering these questions forces organisations to move beyond abstract ethical commitments toward concrete, measurable standards that can guide daily decision-making and long-term strategic planning.
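
As an illustration of what this looks like in code, the sketch below computes two common fairness measures, the demographic parity gap and the equal opportunity gap, over a set of hypothetical screening outcomes. Neither is ‘the’ correct definition of fairness; the point is that an organisation has to commit to a concrete metric and threshold before fairness can sit alongside revenue on a dashboard.

```python
# A minimal sketch of turning 'algorithmic fairness' into a trackable KPI.
# Group labels, outcomes, and numbers are hypothetical; a team must commit
# to a concrete definition before the metric can be measured and reported.

def demographic_parity_gap(preds, groups, favoured="male", other="female"):
    """Difference in positive-prediction rates between two groups."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate(favoured) - rate(other)

def equal_opportunity_gap(preds, labels, groups, favoured="male", other="female"):
    """Difference in true-positive rates: among genuinely qualified candidates,
    how often does each group receive a positive prediction?"""
    def tpr(g):
        qualified = [p for p, l, grp in zip(preds, labels, groups) if grp == g and l == 1]
        return sum(qualified) / len(qualified) if qualified else 0.0
    return tpr(favoured) - tpr(other)

# Hypothetical quarterly screening outcomes: 1 = advanced, 0 = rejected
preds  = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
labels = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]   # ground-truth 'qualified'
groups = ["male"] * 5 + ["female"] * 5

print("Demographic parity gap:", demographic_parity_gap(preds, groups))
print("Equal opportunity gap :", equal_opportunity_gap(preds, labels, groups))
# Either gap could feed a quarterly ethics report alongside revenue KPIs,
# with an agreed threshold that triggers review when exceeded.
```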

How to implement AI ethically

Leaders seeking to implement AI ethically can follow several concrete steps.

Before deploying any AI system, leaders must first thoroughly assess their organisational context and needs. Consider how your company’s sector, size, and composition could influence which AI solutions make sense, and which might create unnecessary risks. This assessment phase requires asking fundamental questions about effectiveness and appropriateness. Which specific business problems are you trying to solve, and do AI tools truly offer the best available solutions? If you choose to proceed, then who within your organisation should have access to these systems, and what training or oversight will they need? How can you maximise AI’s potential while maintaining safety guardrails? These questions help prevent the common mistake of implementing AI simply because it’s available, rather than because it serves genuine business needs.

Acknowledging AI’s unpredictable nature

Successful implementation also demands robust corporate governance that acknowledges that AI is both transformative and unpredictable. Unlike traditional software that behaves predictably once deployed, AI systems evolve and learn over time, potentially developing capabilities or biases that weren’t present during initial testing. Effective governance structures anticipate this evolutionary quality and build in mechanisms for ongoing oversight and adjustment. Establishing formal AI policies within your compliance framework creates accountability and consistency across the organisation. These policies should address not just what AI systems can do, but how they should do it – defining acceptable use cases, data handling requirements, approval processes for new implementations, and protocols for monitoring system performance over time.
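
One way such monitoring protocols can be made concrete is with a recurring statistical check. The sketch below, using hypothetical scores and a rule-of-thumb threshold, computes the population stability index (PSI), a common measure of how far a model’s current score distribution has drifted from the one reviewed at approval time, and escalates when the drift looks significant.

```python
# A hedged sketch of ongoing monitoring for a deployed model.
# Scores and the alert threshold are hypothetical; the intent is to show a
# recurring check written into policy, not a complete monitoring stack.

import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between a baseline score distribution and the latest period's scores,
    using ten equal-width buckets over [0, 1]. Higher values mean more drift."""
    def bucket_shares(scores):
        counts = [0] * 10
        for s in scores:
            counts[min(int(s * 10), 9)] += 1
        return [(c + 1e-6) / (len(scores) + 1e-5) for c in counts]  # avoid log(0)

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline_scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.85, 0.9]     # at approval time
current_scores  = [0.6, 0.65, 0.7, 0.75, 0.8, 0.82, 0.85, 0.9, 0.92, 0.95]  # this month

psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI = {psi:.2f}")
if psi > 0.25:   # a commonly cited rule of thumb; the policy would fix its own threshold
    print("Significant drift: pause automated decisions and escalate to the AI committee.")
```

A check like this, run on the schedule the policy defines, gives the governance committee something concrete to review rather than relying on ad hoc reports.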

Leaders play a crucial role in setting the tone for ethical AI adoption throughout the organisation. This means understanding enough about AI capabilities and limitations to make informed strategic choices, even if you’re not writing code yourself. Responsible leadership includes ensuring transparent implementation processes, clearly communicating AI’s advantages to both employees and investors, and establishing robust monitoring systems to identify potential risks before they become problems. In essence, leaders should approach this like any major operational change – you wouldn’t implement a new manufacturing process or financial system without understanding its implications and maintaining oversight of its performance.

Bringing employees on board

Perhaps no aspect of AI implementation generates more anxiety than workforce concerns about job displacement. Employees often fear that AI systems will render their roles obsolete, creating resistance that undermines even well-intentioned implementations. Clear, transparent communication becomes essential for addressing these concerns honestly while building support for necessary changes. When people understand how innovations will affect them personally and feel they have some control over adapting to those changes, they’re more likely to embrace, rather than resist, new technologies.

Personnel selection and training complete the ethical AI implementation framework. Navigating AI’s integration successfully requires employees equipped with essential new competencies. When evaluating candidates for hiring or promotion, prioritise those who demonstrate adaptability, intellectual curiosity, and openness to change. Additional valuable traits include willingness to challenge misinformation, commitment to ongoing learning, strong critical thinking abilities, and ethical awareness. These characteristics arguably matter even more than existing technical knowledge, which can be taught, because they indicate how individuals will approach AI tools and decisions over time.

Each of these steps reinforces the others, creating a comprehensive approach to ethical AI leadership. Understanding use cases informs implementation strategies. Proper implementation enables responsible leadership. Leadership commitment drives preventive controls. Strong controls enable honest communication. Effective communication facilitates workforce development. A properly developed workforce enhances organisational understanding of AI possibilities. The cycle continues, each iteration deepening organisational capacity for ethical AI deployment while maintaining focus on human values and organisational purpose.
