CIOs address the ethics of implementing AI

With ethical considerations around AI use increasingly top of mind, IT leaders are developing governance frameworks, establishing review boards, and coming to terms with the difficult discussions and decisions ahead.


AI has whetted the appetites of organizations across nearly every sector. As AI pilots move toward production, discussions about the need for ethical AI are growing, along with terms like “fairness,” “privacy,” “transparency,” “accountability,” and the big one: “bias.”

But ensuring those and other measures are taken into consideration is a weighty task that CIOs will be grappling with as AI becomes integral to how people work and conduct business.

For many CIOs, implementations may be nascent, but mitigating biases in AI models and balancing innovation with ethical considerations are already among their biggest challenges. What they are finding is that the line between advancing technologically and ensuring AI doesn’t result in detrimental outcomes is thin.

Christoph Wollersheim, a member of the services and artificial intelligence practices group at global consulting firm Egon Zehnder, pinpoints five critical areas most organizations need to address when implementing AI: accuracy, bias, security, transparency, and societal responsibility.

Unfortunately, achieving 100% accuracy with AI is “impossible,” says Wollersheim, who recently co-authored The Board Member’s Guide to Overseeing AI. “The real ethical concern lies in how companies safeguard against misinformation. What’s the plan if customers are presented with false data, or if critical decisions are based on inaccurate AI responses? Companies need both a practical plan and a transparent communications strategy in their response.”

Bias can be inadvertently perpetuated when AI is trained on historical data, he notes. “Both executive management and boards must ensure fairness in the use of AI and guard against discrimination.” Research is under way to correct biases, using synthetic data to address attributes such as gender, race, and ethnicity, he says, “but there will always be a need for a human-centric lens to be applied.”
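
To make that concrete, here is a minimal sketch of one common bias check, the demographic parity gap, which compares a model’s positive-prediction rate across groups. The column names, toy data, and 10% tolerance are illustrative assumptions, not anything the firms quoted here prescribe.

```python
# Minimal sketch: demographic parity gap as one bias check on a model's
# outputs. Column names, toy data, and the 10% tolerance are illustrative.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest spread in positive-prediction rate across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy scored data standing in for a real model's predictions.
scored = pd.DataFrame({
    "gender":   ["f", "f", "f", "m", "m", "m"],
    "approved": [1,   0,   0,   1,   1,   1],
})

gap = demographic_parity_gap(scored, "gender", "approved")
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.10:  # tolerance a review board might set
    print("flag for human review before deployment")
```

In practice such a check would run on real scored data, with the tolerance set by the governance framework rather than hard-coded.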

AI’s heavy dependency on data increases the risk of breaches and unauthorized access, making security a core concern for ethical deployment, Wollersheim says. “Companies must fortify against attacks that could mislead AI models and result in ill-informed decisions. Ensuring the security of sensitive information is paramount for ethical AI deployment,” he says.

As for transparency, it’s not just about algorithms, but building trust, he says. “Stakeholders need to comprehend how AI makes decisions and handles data. A transparent AI framework is the linchpin for ethical use, accountability, and maintaining trust.”
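
As a hedged illustration of what that transparency can look like in practice, the sketch below uses permutation importance, a standard model-agnostic way to show which inputs drive a model’s decisions. The feature names and toy data are assumptions made up for the example.

```python
# Minimal sketch: permutation importance as one way to show stakeholders
# which inputs drive a model. Feature names and data are illustrative.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 3))                         # toy features
y = (X[:, 0] + 0.2 * X[:, 2] > 0.6).astype(int)  # toy label

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "tenure", "region"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # larger drop in score = more influential input
```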

Organizations must also consider what values guide them, and what obligations they have in terms of retraining, upskilling, and job protection. “Ethical AI is about shaping a responsible future for our workforce,” Wollersheim says.

To address these issues, establishing an AI review board and implementing an ethical AI framework are critical, Wollersheim says. “An ethical AI framework provides clear guidance on monitoring and approval for every project, internal or external. An AI review board, comprised of technical and business experts, ensures ethical considerations are at the forefront of decision-making.”
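
In code, the approval half of such a framework can reduce to a gate that blocks deployment until every review item is signed off. This is a sketch under assumed checklist items, not Egon Zehnder’s actual framework.

```python
# Minimal sketch: a deployment gate for an AI review board.
# Checklist items are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIProjectReview:
    name: str
    checks: dict = field(default_factory=lambda: {
        "bias_assessment_done": False,
        "data_privacy_reviewed": False,
        "transparency_doc_published": False,
        "review_board_approval": False,
    })

    def approved(self) -> bool:
        # Deployment is allowed only when every box is ticked.
        return all(self.checks.values())

review = AIProjectReview("fraud-scoring-v2")
review.checks["bias_assessment_done"] = True
print("deploy allowed:", review.approved())  # False until all checks pass
```

A real version would live in a CI/CD pipeline or ticketing workflow; the point is that approval is enforced mechanically rather than remembered informally.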

Here is a look at how CIOs are addressing ethical AI in their organizations.

Making ethical AI a team sport

Plexus Worldwide is one organization using AI to identify fraudulent account creation and transactions, says Alan McIntosh, CIO and CTO of the $500 million global health and wellness company. As McIntosh sees it, bias is fundamentally a data problem. “We attempt to eliminate bias and incorrect results by leveraging and validating against multiple, complete data sources,” he says.
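
A minimal sketch of that validate-against-multiple-sources idea: act on a fraud signal only when independent checks agree. The quorum rule and stub checks are assumptions for illustration, not Plexus’s actual design.

```python
# Minimal sketch: only flag an account when independent fraud checks agree.
# The 2-of-3 quorum and the stub checks are illustrative assumptions.
def fraud_votes(account_id: str, checks) -> int:
    """Count how many independent checks flag the account."""
    return sum(1 for check in checks if check(account_id))

def flag_account(account_id: str, checks, quorum: int = 2) -> bool:
    return fraud_votes(account_id, checks) >= quorum

# Toy checks standing in for a model score, a velocity rule, and a blocklist.
checks = [
    lambda a: a.startswith("tmp"),      # ML model score (stub)
    lambda a: len(a) < 6,               # signup-velocity rule (stub)
    lambda a: a in {"tmp01", "bot99"},  # shared blocklist (stub)
]

print(flag_account("tmp01", checks))      # True: at least two checks agree
print(flag_account("alice2024", checks))  # False: no quorum
```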

Plexus IT is also in the analysis phase of using AI within the company’s e-commerce platform “to gain better insights for predicting and optimizing the customer experience and enhancing personalization,” McIntosh says. “We also see automation opportunities to eliminate many legacy manual and repetitive tasks.”


To ensure ethical AI practices are adhered to, Plexus Worldwide has formed a team of IT, legal, and HR representatives responsible for the development and evolution of AI governance and policy, he says. This team establishes the company’s risk tolerance, acceptable use cases and restrictions, and applicable disclosures.

Even with a team focused on AI, identifying risks and understanding how the organization intends to use AI both internally and publicly is challenging, McIntosh says. Team members must also understand and address the inherent possibility of AI bias, erroneous claims, and incorrect results, he says. “Depending on the use cases, the reputation of your company and brand may be at stake, so it’s imperative that you plan for effective governance.”

With that in mind, McIntosh says it’s critical that CIOs “don’t rush to the finish line.” Organizations must create a thorough plan and focus on developing a governance framework and AI policy before implementing and exposing the technology. Identifying appropriate stakeholders, such as legal, HR, compliance and privacy, and IT, is where Plexus started its ethical AI process, McIntosh says.

“We then created a draft policy to outline the roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance and management, and governance,” he says. “We continue to iterate and evolve our policy, but it is still in development. We intend to implement it in Q1 2024.”

McIntosh recommends seeking out third-party resources and subject matter expertise. “It will greatly assist with expediting the development and execution of your plan and framework,” McIntosh explains. “And, based on your current program management practices, provide the same level of rigor — or more — for your AI adoption initiatives.”

Treading slowly so AI doesn’t ‘run amok’

The Laborers’ International Union of North America (LIUNA), which represents more than 500,000 construction workers, public employees, and mail handlers, has dipped its toes into using AI, mainly for document accuracy and clarification, and for writing contracts, says CIO Matt Richard.

As LIUNA expands AI use cases in 2024, “this gets to the question about how we use AI ethically,” he says. The organization has started piloting Google Duet to automate the process of writing and negotiating contractor agreements.


Right now, union officials are not using AI to identify members’ wants and needs, nor to comb through hiring data that might be sensitive and return biases on people based on how the models are trained, Richard says.

“Those are the areas where I get nervous: when a model tells me about a person. And I don’t feel we’re ready to dive into that space yet, because frankly, I don’t trust publicly trained models to give me insights into the person I want to hire,” he says.

Still, Richard expects a “natural evolution” in which, down the road, LIUNA may want to use AI to derive insights into its members to help the union deliver better benefits to them. For now, “it’s still a gray area on how we want to do that,” he says.

The union is also trying to grow its membership, and part of that means using AI to identify prospective members efficiently, “without identifying the same homogenous people,” Richard says. “Our organization is pushing very hard and does a good job of empowering minorities and women, and we want to grow those groups.”

That’s where Richard worries about how AI models are used, because avoiding “the rabbit hole of finding the same stereotypical demographic” and introducing biases means humans must be part of the process. “You don’t just let the models do all the work,” he says. “You understand where you are today, and then we stop and say, ‘OK, humans need to intervene here and look at what the models are telling us.’”
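
One way to picture that intervention point is a routing rule that refuses to auto-accept low-confidence or demographically sensitive model outputs. The threshold and field names below are illustrative assumptions, not LIUNA’s implementation.

```python
# Minimal sketch: a human-in-the-loop gate. Low-confidence or sensitive
# outputs go to a person instead of being acted on automatically.
# The 0.85 threshold and field names are illustrative.
def route(prediction: str, confidence: float, sensitive: bool) -> str:
    if sensitive or confidence < 0.85:
        return "human_review"  # a person checks what the model is saying
    return "auto_accept"

print(route("prospect_match", confidence=0.91, sensitive=False))  # auto_accept
print(route("prospect_match", confidence=0.91, sensitive=True))   # human_review
print(route("prospect_match", confidence=0.60, sensitive=False))  # human_review
```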

“You can’t let AI run amok … with no intervention. Then you’re perpetuating the problem,” he says, adding that organizations shouldn’t take the “easy way out” with AI and only delve into what the tools can do. “My fear is people are going to buy and implement an AI tool and let it go and trust it. … You have to be careful these tools aren’t telling us what we want to hear,” he says.

To that end, Richard believes AI can be used as a kick-starter, but IT leaders must use their teams’ intuition “to make sure we’re not falling into the trap of just trusting flashy software tools that aren’t giving us the data we need,” he says.

Taking AI ethics personally

Like LIUNA, Czech-based global consumer finance provider Home Credit is early in its AI journey, using GitHub Copilot for coding and documentation processes, says Group CIO Jan Cenkr.

“It’s offered a huge advantage in terms of time-saving, which in turn has a beneficial cost element too,” says Cenkr, who is also CEO of Home Credit’s subsidiary EmbedIT. Ethical AI has been top of mind for Cenkr from the start.


“When we started rolling out our AI tool pilots, we also had deep discussions internally about creating ethical governance structures to go with the use of this technology. That means we have genuine checks in place to ensure that we do not violate our codes of conduct,” he says.

Those codes are regularly refreshed and tested to ensure they are as robust as possible, Cenkr adds.

Data privacy is the most challenging consideration, he says. “Any information and data that we feed into our AI platforms absolutely has to comply with GDPR regulations.” Because Home Credit operates in multiple jurisdictions, IT must also ensure compliance in every market it serves, some of which have differing laws, adding to the complexity.
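
As a hedged illustration of the data-minimization piece of that compliance work, the sketch below redacts obvious personal identifiers before text is sent to any AI platform. The two regex patterns are assumptions and nowhere near a complete PII detector.

```python
# Minimal sketch: scrub obvious personal data before it reaches an AI
# platform, one small piece of GDPR-minded data minimization.
# The patterns are illustrative and intentionally incomplete.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Reach Jan at jan.novak@example.com or +420 123 456 789"))
# -> "Reach Jan at [EMAIL] or [PHONE]"
```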

Organizations should develop their governance structures “in a way that reflects your own personal approach to ethics,” Cenkr says. “I believe that if you put the same care into developing these ethical structures that you do into the ethics you apply in your personal, everyday life, these structures will be all the safer.”

Further, Cenkr says IT should be prepared to update its governance policies regularly. “AI technology is advancing daily and it’s a real challenge to keep pace with its evolution, however exciting that might be.”

Put in guardrails

AI tools such as chatbots have been in use at UST for several years, but generative AI is a whole new ballgame: it fundamentally changes business models and has made ethical AI part of the discussion, says Krishna Prasad, chief strategy officer and CIO of the digital transformation company, though he admits the topic is “a little more theoretical today.”

Ethical AI “doesn’t always come up” in implementation considerations, Prasad says, “but we do talk about … the fact that we need to have responsible AI and some ability to get transparency and trace back how a recommendation was made.”
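
A minimal sketch of that traceability: write an audit record for every recommendation so the decision can be reconstructed later. The fields and model name are illustrative assumptions, not UST’s actual schema.

```python
# Minimal sketch: one JSON audit record per recommendation, so a decision
# can be traced back later. Field names and values are illustrative.
import datetime
import hashlib
import json

def audit_record(model_version: str, inputs: dict, recommendation: str) -> str:
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the record is linkable without storing raw data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "recommendation": recommendation,
    }
    return json.dumps(record)

print(audit_record("rec-model-1.4", {"segment": "smb", "spend": 1200}, "upsell"))
```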


Discussions among UST leaders focus on what the company doesn’t want to do with AI “and where do we want to draw boundaries as we understand them today; how do we remain true to our mission without producing harm,” Prasad says.

Echoing the others, Prasad says this means humans must be part of the equation as AI is more deeply embedded inside the organization.

One question that has come up at UST is whether it is a compromise of confidentiality if leaders are having a conversation about employee performance as a bot listens in. “Things [like that] have started bubbling up,” Prasad says, “but at this point, we’re comfortable moving forward using [Microsoft] Copilot as a way to summarize conversations.”

Another consideration is how to protect intellectual property around a tool the company builds. “Based on protections that have been provided by software vendors today, we still feel data is contained within our own environment, and there’s been no evidence of data being lost externally,” he says. For that reason, Prasad says he and other leaders don’t have any qualms about continuing to use certain AI tools, especially given the productivity gains they see.

Even as he believes humans need to be involved, Prasad also worries about their input. “At the end of the day, human beings inherently have biases because of the nature of the environments we’re exposed to and our experiences and how it formulates our thinking,” he explains.

He also worries about whether bad actors will gain access to certain AI tools as they use clients’ data to develop new models for them.

These are areas leaders will have to worry about as the software becomes more prevalent, Prasad says. In the meantime, CIOs must lead the way and demonstrate how AI can be used for good and how it will impact their business models, and bring leadership together to discuss the best path forward, he says.

“CIOs have to play a role in driving that conversation because they can bust myths and also execute,” he says, adding that they also have to be prepared for those conversations to at times become very difficult.

For example, if a tool offers a certain capability, “do we want it to be used whenever possible, or should we hold back because it’s the right thing to do?” Prasad says. “It’s the most difficult conversation,” but CIOs must make clear that a tool “could be more than you bargained for. To me, that part is still a little fuzzy, so how do I put constraints around the model … before making the choice to offer new products and services that use AI.”

