Artificial Intelligence: Implications for human dignity and governance | Nayef Al-Rodhan | Oxford Political Review (2024)

Nayef Al-Rodhan

Recent years have seen a surge in discussions about the impacts of artificial intelligence (AI). These debates have predominantly featured issues related to autonomy in driverless cars, or the moral dilemmas of deploying ‘killer robots’, though the reach and impact of AI-based technologies are, of course, far more widespread. AI is a common feature of our daily lives, present in systems that monitor our online searches and spam us with advertising, that influence voters’ decisions, but also in medicine, and in algorithms that determine police profiling or whether to foreclose on a mortgage. AI is also at the forefront of the ongoing Fourth Industrial Revolution, and it is estimated its contribution to the global economy could reach $13 trillion by 2030.

The hype around AI has gone as far as to compare its transformative potential to that of electricity over the past century. Then there is the security angle. The geopolitical implications, particularly in the context of the rivalry between the US and China – currently the lead players in AI – will weigh heavily on the future of international relations. However, understanding AI’s impacts goes beyond a strictly geopolitical lens.

AI is also poised to impact statecraft in profound ways, altering or reshaping state policies and governance. I previously framed this understanding of state power in the 21st century as meta-geopolitics, which takes a more holistic view of state power as a combination of seven capacities: social & health issues, domestic politics, economics, environment, science & human potential, military & security issues, and international diplomacy. Together, these interrelated domains contribute to, and shape, national power. Developments in AI are set to impact each of these domains, bringing unique disruptions, and opportunities, in every aspect of modern statecraft. AI is becoming critical as an enabler of power projection within these sectors, as well as a potential threat to them. Furthermore, artificial intelligence will inevitably transform the relationship between states and citizens, and impact human dignity in profound ways. The safe and sustainable use of AI going forward can only be achieved when the risks to human dignity are mitigated.

In the following section, I unpack the multifaceted implications of AI for the meaning of power, for governance, and for human dignity. Importantly, I flag not only the risks associated with the use of AI, but also areas of opportunity. A balanced perspective highlights the complex uses and ramifications of AI-based technologies, and the fine line between risks and benefits. I conclude by offering some concrete ideas for governance.

AI And Dignity-Based Governance

For humans as emotional, amoral, and egoistic beings (a neuro-philosophical account of human nature I developed previously), dignity is the most fundamental of human needs. Dignity, even more so than freedom (dignity encompasses freedom and more), is critical for sustainable governance, and in a basic sense, essential to sustaining our social existence. I define dignity holistically to mean much more than the absence of humiliation: it also requires the presence of recognition. It is a comprehensive set of nine dignity needs: reason, security, human rights, transparency, justice, accountability, opportunity, innovation, and inclusiveness. AI impacts these needs in complex ways, by reinforcing or endangering them (and sometimes both).

For reason, which refers to the absence of dogma (especially relevant in regimes that claim an absolute monopoly on truth), AI can have a positive impact by introducing rational techniques into decision-making processes. At the same time, the use of big data in policy-making may not always produce an outcome backed by reason or impartiality, but rather one potentially vulnerable to emotions. Products of AI such as deepfakes could also blur the lines between true and false, further weakening trust. Finally, with the increased merger of humans and machines, AI could take decisions on our behalf, or sway our decisions, thus challenging our belief in the value of human reason.

Security is another fundamental need for human beings. On the one hand, AI can provide better information and help in the development of encryption systems. On the other, cyber-attacks, which are likely to become more frequent and to target critical infrastructure such as healthcare systems, will increase anxiety across societies, with far-reaching implications for public order.

In terms of opportunities for human rights, AI systems could help in the prevention, detection and monitoring of violations, for instance by analyzing satellite imagery and social media content. On the other hand, fundamental freedoms, such as the right to privacy, will be threatened by large-scale data collection and new methods of surveillance and policing. In fact, governments may use AI to monitor social media activity, as well as to trace and identify people through facial recognition.

The human need for transparency implies that authorities and private companies must provide clear information on their activities to remain legitimate actors. The collection of data to feed AI systems, without clear indications of its intended or potential use, endangers transparency, even more so as the tech giants that develop such technologies operate beyond public scrutiny and government regulation.

Justice is fundamental to sustainable governance, and the sense of fairness is deeply relevant to human nature (I have elaborated on its neurochemical representation in another article). In concrete terms, AI could help judicial institutions through investigative techniques such as DNA analysis. The use of AI technologies has, however, also been found to produce discriminatory outcomes in the justice system due to AI’s vulnerability to biases, and technologies such as facial recognition have been known to identify specific ethnicities in a discriminatory manner.

Accountability is essential for consolidating trust and security in society. In this regard, AI may assist in identifying the authors of malicious acts, but this too comes with serious accountability challenges. Overreliance on such systems risks losing sight of the errors and biases they will learn and replicate over time (for instance, a recruitment algorithm developed at Amazon to identify the world’s best engineers had learnt to exclude women from its scoring process). Furthermore, accountability is even more complicated when incidents occur, as the human chain of responsibility becomes hard to trace.

A feeling of equal access to opportunities is necessary to ensure social cohesion. The shift towards more digital societies will inevitably create inequalities, as it risks reducing employment in some sectors while creating new jobs elsewhere. More importantly, human enhancement technologies, some of which will integrate AI systems, are poised to impact the future of work, with implications across sectors and professions. One concern is that these enhancement technologies will not be distributed fairly, thus giving rise to a society not based on merit. At the same time, as a Royal Society report highlights, they can also bring benefits by enabling some professionals to resume work or to work in easier conditions. However, the public management and regulation of access to such technologies will be critical.

The development of AI certainly also fulfills our need for innovation, bringing new opportunities in several domains, from education and healthcare to the military. AI, on the whole, can accelerate innovation and R&D, and shape the future of other technologies. What remains a challenge, however, is broad access to, and transfer of, knowledge and technology across the world, particularly in emerging economies.

There are several ways in which AI could harm our need for inclusiveness and become divisive. This happens in everyday situations, as AI tools target individuals online with tailored content, thus reducing exposure to different points of view and reinforcing biases. In many other instances, biases embedded in algorithms continue to lead to discriminatory practices and social injustice. In the medium to long run, different allocations of resources and shifts in the job market in the context of automation are bound to sharpen inequalities. In even more extreme forms, the advent of human enhancement technologies will trigger divisions at the societal level and risks splitting citizens into “in-groups” and “out-groups”.

Way Forward & Policy Recommendations

This brief survey of the risks and opportunities of AI reminds us that only by promoting governance and strict oversight mechanisms can the positive features of AI outweigh the risks associated with it. Consideration for human dignity must serve as the fundamental goal of regulating AI, but that requires, in practical terms, a series of policy and legal commitments. Importantly, this gives a voice, and responsibility, to a wide array of actors.

For tech companies

1. Establish ethics committees

Ethics committees must be set up by private actors to reflect on challenges brought by AI, and promote the adoption of ethical principles at all levels, from the early stages of coding.

2. Strengthen data protection systems

Tech companies developing AI systems must commit to ensuring higher security levels for data storage and use. This could involve increasing their security R&D budgets and supporting new projects on encryption techniques.

For states

1. Bridge knowledge gaps and ensure trust between stakeholders and policymakers

Public authorities must be proactive in bridging the knowledge gaps between policymakers, technical experts, constituencies and tech companies. While there are significant knowledge gaps at the national policy level, there is also an important need for regulation. This process could involve public consultations with actors in the sectors concerned, possibly through the creation of a cross-sectoral commission responsible for compiling existing knowledge and standards on AI. In the process, citizens must also become more familiar with AI through training and education programs.

2. Adapt national policies

A public authority could also be established, responsible for translating the information gathered from these consultation processes into policy guidelines for all the categories affected. Such consultations could also adopt scenario-making strategies, aiming to anticipate developments over the next few years and helping to allocate national budgets.

3. Update national laws and regulations

Since innovation is occurring at incredible speed, national laws and regulations will require updating. Expert commissions should be created to inform relevant stakeholders, and governments should propose reviews of relevant existing bodies of law. Such legislative changes need to address, among other things, liability for giant tech companies, whose expanding powers create disproportionate advantages and limit accountability. In the US, for example, Section 230 of the Communications Decency Act, passed in 1996 (in the early days of the internet), effectively offers a liability shield for internet companies, and continues to create controversy in light of these companies’ vast reach. Going forward, guaranteeing respect for constitutionally protected rights will be critical, and no private company should be exempt from liability.

4. Invest in skill-adaptability programs

AI will bring systemic changes to production chains and will require an adaptation of employees’ skills. States must play a central part in investing in skill-adaptation programs for their citizens, which would reduce the risk of an increase in unemployment rates.

For international institutions

1. Put AI at the forefront of international agendas

International institutions must also work to influence agendas and ensure that AI regulation is put at the forefront. They could contribute to this process through the production of technical reports and engagement with public and private actors to ensure participation. More concretely, as suggested by Eleonore Pauwels, a UN Global Foresight Observatory for AI Convergence could be created to gather public and private actors to build scenarios, map stakeholders and develop approaches for innovation and prevention at the global level.

2. Address inequalities introduced by AI

These entities should ensure that AI development is also understood through the ethical and economic challenges it will bring to the world population, especially the rise in inequalities. Through reports and engagement with relevant actors, they can help the international community highlight the main challenges of AI, and particularly its impact on vulnerable groups.

3. Develop adaptation plans through in-house AI task forces

Finally, to address the challenges brought by AI within their own bodies, in-house task forces could be created to design and manage adaptation plans. They could be formed of technical experts, human resources and management team members, as well as general staff.

The advent of artificial intelligence, and its accelerated growth in the past decade, is set to impact the future of humanity, from the workplace to the battlefield, and from local governance to global affairs. The attainment and respect of human dignity will be, and in some sectors already is, a critical area of concern for artificial intelligence, with potentially grave consequences for our freedoms, though – as I showed above – there are also opportunities that can be leveraged.

Nevertheless, against a backdrop of profound alteration to the meanings of power, states retain unique prerogatives to create regulatory frameworks for setting the course of AI developments. Even as they appear weakened in the face of private companies’ immense power and grip in the global market, states are uniquely positioned to ensure that artificial intelligence and related technologies do not set us on a dangerous course of loss of dignity and disproportionate disempowerment against giant tech companies.

The complexities and challenges of regulating AI technologies surely require input and expertise from a wide array of actors, yet states’ role in shaping a course for the future stands out. Their underlying goal must be the unwavering respect and guarantee of the nine human dignity needs outlined above, for all, at all times and under all circumstances. This is the ultimate prerequisite for effective, accountable, equitable, and sustainable governance in our uncertain and intrusive future.

Professor Nayef Al-Rodhan is a Philosopher, Neuroscientist and Geostrategist. He is Honorary Fellow, St. Antony’s College, Oxford University, UK; Head of the Geopolitics and Global Futures Program, Geneva Center for Security Policy, Switzerland; Senior Research Fellow, Institute of Philosophy, School of Advanced Study, University of London, UK; and Member of the Global Future Council on Frontier Risks at the World Economic Forum. His research focuses on the interplay between Analytic Neurophilosophy and Policy, History, Geopolitics, Global Futures, Outer Space Security, Global Trans-cultural Relations, Conflict, Global Security, Disruptive Technologies, International Relations & Global Order.
