Ethics and governance are getting lost in the AI frenzy

Mike Ananny is an assistant professor of communication and journalism at the USC Annenberg School for Communication and Journalism. Taylor Owen is an assistant professor of digital media and global affairs at the University of British Columbia.

On Thursday, Prime Minister Justin Trudeau announced the government's pan-Canadian artificial intelligence strategy.

This initiative, which includes a partnership with a consortium of technology companies to create a non-profit hub for artificial intelligence called the Vector Institute, aims to put Canada at the centre of an emerging gold rush of innovation.

There is little doubt that AI is transforming the economic and social fabric of society. It influences stock markets, social media, elections, policing, health care, insurance, credit scores, transit, and even drone warfare. AI may make goods and services cheaper and markets more efficient, and it may discover new patterns that optimize much of life. From deciding which movies get made to determining which voters are valuable, virtually no area of life is untouched by the promise of efficiency and optimization.

Yet while significant research and policy investments have created these technologies, the short history of their development and deployment also reveals serious ethical problems in their use. Any investment in the engineering of AI must therefore be coupled with substantial research into how it will be governed. This means asking two key questions.

First, what kind of assumptions do AI systems make?

Technologies are not neutral. They contain the biases, preferences and incentives of their makers. When technologists gather to analyze data, they leave a trail of assumptions about which data they think is relevant, what patterns are significant, which harms should be avoided and which benefits should be prioritized. Some systems are so complex that not even their designers fully understand how they work when deployed "in the wild."

For example, Google cannot explain why certain search results appear over others, Facebook cannot give a detailed account of why your newsfeed looks different from one day to the next, and Netflix cannot explain exactly why you got one movie recommendation over another.

While the opacity of movie choices may seem innocuous, these same AI systems can have serious ethical consequences. When a self-driving car chooses the life of a driver over a pedestrian; when skin colour or religious affiliation influences criminal-sentencing algorithms; when insurance companies set rates using an algorithm's guess about your genetic make-up; or when people and behaviours are flagged as 'abnormal' by algorithms, AI is making an ethical judgment.

This leads to a second question: how should we hold AI accountable?

The data and algorithms driving AI are largely hidden from public view. They are proprietary and protected by corporate law, classified by governments as essential for national security, and often not fully understood even by the technologists who make them. This matters because the ethics embedded in our existing governance institutions place human agency at their foundation. As such, it makes little sense to talk about holding computer code accountable. Instead, we should see AI as a people-machine hybrid: a combination of human choices and automated decisions.

Who or what can be held accountable in this cyborg mix? Is it the individual engineers who design the code, the companies that employ them and deploy the technology, the police force that arrests someone based on an algorithmic suggestion, or the government that uses it to make policy? An unwanted movie recommendation is nothing like an unjust criminal sentence. It makes little sense to talk about holding systems accountable in the same way when such different types of error, injustice, consequence, and freedom are at stake.

This reveals a troubling disconnect between the rapid development of AI technologies and the static nature of our governance institutions. It is difficult to imagine how governments will regulate the social implications of an AI that adapts in real time, based on flows of data that technologists don't foresee or understand. It is equally challenging for governments to design safeguards that anticipate human-machine action and that can trace consequences across multiple systems, data sets, and institutions.

We have a long history of holding human actors accountable to Canadian values, but we are largely ignorant about how to manage the emerging ungoverned space of machines and people acting in ways we don't understand and cannot predict.

We welcome the government's investment in the development of AI technology, and expect it will put Canadian companies, people and technologies at the forefront of AI. But we also urgently need substantial investment in the ethics and governance of how artificial intelligence will be used.

FAQs

What are 3 main concerns about the ethics of AI? ›

There are many ethical challenges, including:
  • Lack of transparency of AI tools: AI decisions are not always intelligible to humans.
  • AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias.
  • Surveillance practices for data gathering and the privacy of court users.

What are the ethical dilemmas of artificial intelligence? ›

AI and Bias: One of the primary ethical concerns with AI revolves around bias. AI systems are trained on vast amounts of data, and if that data is biased, the resulting algorithms can perpetuate or even amplify societal biases.
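To make this concrete, here is a minimal, self-contained sketch using scikit-learn and entirely synthetic data; the group sizes, the 30% label-flipping rate, and the features are illustrative assumptions, not a real-world case.

```python
# Sketch: a model trained on skewed, historically biased data learns to
# score otherwise identical candidates differently by group.
# All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two groups with identical true qualification rates, but the training
# sample is dominated by group A (0) and underrepresents group B (1).
n_a, n_b = 950, 50
group = np.array([0] * n_a + [1] * n_b)
score = rng.normal(size=n_a + n_b)
label = (score > 0).astype(int)  # ground truth ignores group entirely

# Historical bias: 30% of group B's positive labels were recorded as negative.
flip = (group == 1) & (label == 1) & (rng.random(n_a + n_b) < 0.3)
label[flip] = 0

# Train on features that include group membership.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, label)

# Identical score, different group: the model now rates group B lower.
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
```

Because the underrepresented group's labels were distorted, the model assigns a lower probability to an otherwise identical candidate from that group: the training data's bias has been absorbed and reproduced.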

What is ethical governance in the age of AI? ›

The term 'ethical governance' emphasizes people-centred governance that promotes the highest standards of human behaviour in AI development and applications. AI offers astonishing new capabilities for human activities and undertakings.

Why is governance important in AI? ›

Since AI is a product of highly engineered code and machine learning created by people, it is susceptible to human biases and errors. Governance provides a structured approach to mitigate these risks, ensuring that machine learning algorithms are monitored, evaluated and updated to prevent flawed or harmful decisions.
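As a hedged illustration of what "monitored, evaluated and updated" can look like in practice, here is a small sketch of a post-deployment fairness audit. The demographic-parity metric is one common choice among many, and the function names and the 0.1 threshold are assumptions made for the example.

```python
# Hypothetical post-deployment check: compare a model's positive-decision
# rate across demographic groups and flag gaps above a chosen tolerance.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-decision rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def audit(predictions: np.ndarray, groups: np.ndarray, threshold: float = 0.1) -> None:
    gap = demographic_parity_gap(predictions, groups)
    if gap > threshold:
        # In a real governance process this would trigger human review,
        # retraining, or rollback rather than just a printed warning.
        print(f"ALERT: parity gap {gap:.2f} exceeds threshold {threshold}")
    else:
        print(f"OK: parity gap {gap:.2f} within threshold {threshold}")

# Toy decisions: group 1 receives far fewer positive outcomes than group 0.
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
grps = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
audit(preds, grps)  # prints an ALERT for the 0.80 gap
```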

Why is AI so controversial? ›

The Bad: Potential bias from incomplete data

“AI is a powerful tool that can easily be misused. In general, AI and learning algorithms extrapolate from the data they are given. If the designers do not provide representative data, the resulting AI systems become biased and unfair.”

Is artificial intelligence a threat to humans? ›

Can AI cause human extinction? If AI algorithms are biased or used maliciously, such as in deliberate disinformation campaigns or autonomous lethal weapons, they could cause significant harm to humans. As of right now, though, it is unknown whether AI is capable of causing human extinction.

Is AI ethical or unethical? ›

AI projects built on biased or inaccurate data can have harmful consequences, particularly for underrepresented or marginalized groups and individuals.

What steps can be taken to mitigate ethical issues in generative AI? ›

Essential aspects of the ethical use and development of AI systems include:
  • Develop a code of ethics.
  • Ensure diversity and inclusion.
  • Monitor the AI system.
  • Educate employees.
  • Ensure transparency.
  • Address privacy concerns.
  • Consider human rights.
  • Anticipate risks.

Can we have an AI system without any ethical concerns? Why or why not? ›

  • The data used to train AI can be biased. If the data is not representative of the whole population, the AI will be biased too. For example, if a system is trained on a data set of men, it will be biased against women (the sketch after this list shows a simple representativeness check).
  • The people who design AI systems can also be biased.
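Building on the representativeness point in the first bullet above, here is a small sketch of a pre-training check that compares a training set's group composition against known population proportions. The counts, group labels, and 10-point tolerance are all assumptions for illustration; real reference shares would come from census or domain data.

```python
# Hypothetical pre-training check: flag groups whose share of the training
# set falls well below their share of the population.
from collections import Counter

def composition_gaps(samples, population_shares):
    """Each group's share in the sample minus its share in the population."""
    counts = Counter(samples)
    total = len(samples)
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# Toy training set: 90% men, 10% women, against a roughly 50/50 population.
training_groups = ["man"] * 90 + ["woman"] * 10
population = {"man": 0.5, "woman": 0.5}

for grp, gap in composition_gaps(training_groups, population).items():
    status = "UNDERREPRESENTED" if gap < -0.1 else "ok"  # assumed tolerance
    print(f"{grp}: sample-minus-population gap {gap:+.2f} ({status})")
```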

Who is the father of AI? ›

John McCarthy, an American computer scientist, is considered the father of artificial intelligence. He coined the term "artificial intelligence" and is one of the founders of the field, together with Alan Turing, Marvin Minsky, Allen Newell, and Herbert A. Simon.

What is the ethics of artificial intelligence AI? ›

The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms that may result from such uses. Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.

Who is responsible for AI governance? ›

The key for all organizations is that AI governance needs to be defined carefully and with significant thought. The responsibility for this ultimately rests with the CEO.

What is an example of AI governance? ›

One example of an AI governance policy could be guidelines implemented by a healthcare organization for the ethical use of AI in patient care. This policy might include principles such as:
  • Ensuring patient data privacy.
  • Requiring informed consent to use patient data in AI models.

What are the pillars of AI governance? ›

For AI solutions to be transformative, trust is imperative. This trust rests on four main anchors: integrity, explainability, fairness, and resilience. These four principles (enabled through governance) will help organizations drive greater trust, transparency, and accountability.

What is the AI governance strategy? ›

An AI governance strategy shall consider:
  • Planned EU political and legislative initiatives.
  • All applicable existing legislation, including on non-discrimination, accessibility, information security, and data protection.
  • Best practices and examples from industry, at both the national and international levels.

What are the concerns of artificial intelligence? ›

Concerns about how AI is applied in the real world include misuse, fraud, and scams. With the increasing sophistication of large language models, image-generation systems, and more, it is becoming harder to distinguish human from machine.

What are the 4 main concerns, inhibitors, and fears companies have about adopting generative AI? ›

Like other forms of AI, generative AI raises a number of ethical issues and risks surrounding data privacy, security, policies, and workforces. Generative AI technology can also potentially produce a series of new business risks, such as misinformation, plagiarism, copyright infringement, and harmful content.
