An ethicist explains his 4 chief concerns about artificial intelligence

Elon Musk's warning about artificial intelligence (AI) and Facebook bots' creation of a language that humans can't understand can conjure images of robots conquering the world. While such an apocalypse may be far-fetched, a more realistic consequence of AI already exists and warrants serious concern: AI's ethical impact.

AI works, in part, because complex algorithms adeptly identify, remember, and relate data. Although such machine processing has existed for decades, the difference now is that very powerful computers process terabytes of data and deliver meaningful results in real time. Moreover, some machines can do what had been the exclusive domain of humans and other intelligent life: learn on their own.
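
To make that idea of self-directed learning concrete, here is a minimal, purely illustrative Python sketch; the data and learning rate are invented for the example. A one-parameter model repeatedly adjusts itself from example data, with no human writing the final rule:

```python
# Minimal illustration of "learning on its own": the program is given
# example (x, y) pairs and adjusts its one parameter to fit them.
# All numbers here are invented for the example.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # observed pairs

w = 0.0    # the machine's single learned parameter
lr = 0.01  # learning rate: how big each self-correction is

for _ in range(1000):          # pass over the data many times
    for x, y in data:
        error = w * x - y      # how wrong the current guess is
        w -= lr * error * x    # nudge the parameter to shrink the error

print(f"learned rule: y is about {w:.2f} * x")  # roughly y = 2x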

It's this automated learning that introduces a critical question: Can machines learn to be moral?

As a researcher schooled in the scientific method and an ethicist immersed in moral decision-making, I know it's challenging for humans to navigate these two disparate arenas at once. It's even harder to envision how computer algorithms can enable machines to act morally.

Both academia and business use positive science to identify correlations and causality. The results of such studies are reams of objective information. A mortgage lender's algorithm, for instance, might find that borderline borrowers are most responsive to ads suggesting an interest rate increase the day after their favorite football team loses a big game. Data-crunching computers can identify that association and learn to deliver relevant ads.
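
As a hypothetical sketch of how such an association might surface (the lender, the "team lost" signal, and all the numbers are invented for illustration), a targeting system could simply compare response rates across contexts:

```python
# Hypothetical sketch of how a targeting system might surface the
# association described above. All names and data are invented.
import random

random.seed(0)

# Synthetic ad-impression log: did the borrower's team lose yesterday,
# and did the borrower respond to the rate-increase ad?
impressions = []
for _ in range(10_000):
    team_lost = random.random() < 0.5
    # Assume responsiveness is higher after a loss (the pattern the
    # algorithm would "discover"; here we bake it in for illustration).
    respond_prob = 0.08 if team_lost else 0.03
    impressions.append((team_lost, random.random() < respond_prob))

def response_rate(after_loss: bool) -> float:
    """Conditional response rate for one segment of impressions."""
    segment = [resp for lost, resp in impressions if lost == after_loss]
    return sum(segment) / len(segment)

print(f"response rate after a loss: {response_rate(True):.3f}")
print(f"response rate otherwise:   {response_rate(False):.3f}")
# A real system would act on this gap automatically -- which is exactly
# the ethical question raised next: effective, but is it right?
```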

Moral choice, however, doesn't ask whether an action will produce an effective outcome; it asks whether it is a good decision. In other words, regardless of efficacy, is it the right thing to do? Such analysis does not reflect an objective, data-driven decision but a subjective, judgment-based one.

Placing manipulative ads before a marginally qualified and emotionally vulnerable target market may be very effective for the mortgage company, but many people would challenge the promotion's ethicality. Humans can make that moral judgment, but how does a data-driven computer draw the same conclusion? Therein lies what should be a chief concern about AI.

Individuals often make moral decisions on the basis of principles like decency, fairness, honesty, and respect. To some extent, people learn those principles through formal study and reflection; however, the primary teacher is life experience, which includes personal practice and observation of others. Some would go a step further and argue that these values are innate, i.e., we are born with them.

Can computers be manufactured with a sense of decency? Can coding incorporate fairness? Can algorithms learn respect? It seems implausible that machines could emulate subjective, moral judgment, but if that potential exists, at least four critical issues must be resolved:

1. Whose moral standards should be used?

While there's a short list of commonly held ethical principles, individual interpretations of those principles often vary. For instance, there's a broad range of opinion about what constitutes decent versus indecent language and attire.

2. Can machines converse about moral issues?

Even though people may subscribe to different moral standards, they often overcome their differences through discussion and debate, sometimes by engaging external parties. How does technology replicate such free-flowing dialogue and, more importantly, embrace a desire for reconciliation?

3. Can algorithms take context into account?

One of the hallmarks of moral decision-making is that it's highly nuanced. For instance, in most cases it's reasonable for a company to increase prices if its products are in short supply and demand is very high. However, what if the product is a life-saving pharmaceutical, industry attrition has made the company the only remaining supplier, and a national recession is putting extra financial pressure on the target market? Can a computer know to incorporate this specific information and make a subjective assessment of such an unusual situation?
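
A hedged sketch of the difficulty, with invented thresholds and flags: a purely data-driven pricing rule has no notion of this moral context, and even hand-coded "guardrails" only cover the situations a programmer anticipated in advance.

```python
# Illustrative only: a naive supply/demand pricing rule versus the same
# rule with hand-coded ethical guardrails. All thresholds are invented.
def naive_price(base: float, demand: float, supply: float) -> float:
    """Raise price when demand outstrips supply -- reasonable in most markets."""
    return base * max(1.0, demand / max(supply, 1e-9))

def guarded_price(base: float, demand: float, supply: float,
                  life_saving: bool, sole_supplier: bool,
                  recession: bool) -> float:
    """Same rule, but with hand-coded ethical guardrails.

    The guardrails only work for contexts someone anticipated; a
    genuinely novel situation defeats them.
    """
    price = naive_price(base, demand, supply)
    if life_saving and sole_supplier:
        price = min(price, base * 1.05)  # cap markups on captive patients
    if recession:
        price = min(price, base * 1.10)
    return price

print(naive_price(100, demand=900, supply=100))        # 900.0 -- "effective"
print(guarded_price(100, 900, 100, True, True, True))  # 105.0 -- "good"?
```

The naive rule returns 900.0, which is effective; the guarded rule returns 105.0, but only because someone foresaw exactly this combination of circumstances, which is precisely what a genuinely novel situation denies us.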

4. Who should be accountable?

It's common for people embroiled in organizational scandals to point the finger at others in an effort to absolve themselves. What happens, then, when a mistake is traced to a self-learning machine? If a computer can't be held responsible, who should be: the company that made the machine, the firm that wrote the software, or the organization that used them? More importantly, which specific people should be accountable?

It's encouraging to see artificial intelligence already positively impacting many lives, and it's exciting to imagine more advanced applications multiplying that influence. It's also imperative that those developing the technology cultivate 'moral sustainability': a future in which choosing what's right is somehow coded into AI.

Dr. David Hagenbuch is a Professor of Marketing at Messiah College, the author of Honorable Influence, and the founder of MindfulMarketing.org, which aims to encourage ethical marketing. Before entering higher education, he worked as a corporate sales analyst for a national broadcasting company and as a partner in a specialty advertising firm. More information is available at www.davidhagenbuch.com.
