United International Business School

Ethics of AI
An Overview

Lorenzo L. D. Incardona, PhD

lldincardona.com
UIBS · Extra Curricular Activity

Ethics of AI
An Overview

Principles, actors, dilemmas, and your role in shaping the future of AI governance.

Welcome

This Is Not About Right or Wrong

Goal: not prescriptive answers, but a sharper ability to navigate complexity.

Expected outcome: more uncertainty about easy answers, more confidence in identifying and analyzing ethical issues.

Objectives

You Will See More Clearly

  • Where AI ethical issues arise
  • Who they involve
  • How to identify them
  • How they relate to each other

Why This Matters

A shared understanding of AI ethics provides a common, critical ground from which effective solutions can emerge.

Your Role

A Historic Phase

Open discussion is encouraged throughout.

Live Poll

Where Do You Stand?

Doomer
AI will destroy humanity

Accelerationist
AI will save the world

Somewhere in Between
It depends on how we handle it

No Idea Yet
Still figuring it out

Context

167 Ethical Frameworks

Framework

The 6 Principles of AI Ethics

Beneficence
AI should do good
Non-maleficence
AI should do no harm
Autonomy
AI should preserve human agency
Justice
AI should distribute benefits & burdens fairly
Intelligibility
AI's behavior should be understandable
Accountability
AI should be responsible for its actions

Floridi groups principles 5 and 6 under a single principle, "Explicability"; here they are kept separate.

The Ethics of Artificial Intelligence
L. Floridi, Oxford University Press, 2023

Live Poll

Which Principle Is the Most Important?

Beneficence
Non-maleficence
Autonomy
Justice
Intelligibility
Accountability

Deep Dive

Whose ἦθος?

From Greek ἦθος (ethos): habit, custom, character. Whose ethos is at stake in AI ethics?

The Software?

Behavioral patterns of the AI tools themselves

The Users?

Habits and decisions of those who operate AI

The Ecosystem?

Developers, companies, governments, institutions

Case Study

Academic Plagiarism
Enhanced by AI

An apparently straightforward ethical issue.

Upon analysis, far more complex than it seems.

Student Perspective

The Student's Ethics

AI-assisted assignment completion violates multiple principles:

  • Non-maleficence: Self-harm through lost learning opportunities
  • Intelligibility: Opacity of authorship in the assessment process
  • Justice: Unfair advantage over peers
  • Autonomy: Dependency on external tools for required competencies

Note on Accountability

Accountability remains untouched: the output is still attributed to the student, regardless of how it was produced.

Live Poll

Which Principle Is Most Violated?

When a student uses AI for an assignment, which ethical principle is most seriously violated?

Non-maleficence
Harm to oneself

Intelligibility
Opacity of process

Justice
Unfair advantage

Autonomy
Dependency

Institutional Perspective

Institutions & Governments

Educational Institutions

Inaction raises ethical concerns:

  • Beneficence: Unregulated use may compromise student development
  • Accountability: Institutions bear responsibility for educational outcomes
  • Justice: Assessment integrity is undermined

Governments

Should regulators mandate AI policies for education, or defer to institutional autonomy?

Open questions. The ethical framework shifts with the actor's perspective.

AI System Perspective

And What About the AI Itself?

We will later examine the position that AI is inherently unethical by nature.

Key Concept

Distributed Morality

AI is a node in a network of interconnected actors:

Humans: users, scientists, developers

Organizations: companies, governments, institutions

Machines: chatbots, agents, robots

Ethical assessment applies to the entire network, not to any single actor in isolation.

Distributed morality network diagram

Actor Decomposition

The Actors of AI Ethics

AI Systems
Algorithms, models, robots
Individual Users
Students, professionals, citizens
Developers & Companies
Tech companies, deployers
Governments
Regulators, agencies

Each issue has a primary actor, but responsibility is always distributed across the chain.

Interactive Game

Match the Issue to Its Main Actor

Drag each floating issue to the actor zone it is mainly related to. Some are wildcards!

AI-Specific Issue

Sycophancy

A manipulative interaction pattern: AI validates ideas uncritically, regardless of merit.

Seemingly harmless, but linked to tragic outcomes, including fatal interactions involving self-harm.

Distributed Morality Analysis

  • AI System: Violates beneficence and non-maleficence
  • Developers: Behavior inherited from datasets prioritizing politeness
  • Systemic nature: LLMs as "moral agents" directly accountable for emergent behavior

Technical & Ethical

Intelligibility: A Double Problem

As an Ethical Principle

Accountability requires interpretability.

If emergent behaviors are intrinsic to the architecture, can developers be held responsible for them?

As a Technical Problem

LLM decision paths remain largely opaque, even to their creators.

Mechanistic interpretability is an active area of engineering research.

Key Concept

Cognitive Sovereignty

A concept on which Helen Edwards (Artificiality Institute) places strong emphasis:

Cognitive Sovereignty: Authoring Your Mind in the AI Age
H. Edwards, Artificiality Institute, 2026

Live Poll

Where Does Responsibility Lie?

For issues like sycophancy and cognitive surrender, who bears the most responsibility?

The AI Systems
Developers & Companies
Individual Users
Governments & Regulators

Observation Exercise

Spot Something Unusual

Open the Albanian Council of Ministers page. What stands out?

kryeministria.al/en/menu-qeveria/

Real-World Case

Albania's AI Minister

Diella, Albania's AI Minister

The Human Factor

"What the Judge Ate for Breakfast"

From the Proceedings of the National Academy of Sciences

Favorable parole rulings drop from ~65% to nearly zero within each session, then return to ~65% after a food break.

  • Reliable ethical judgment is not a human prerogative
  • Intelligibility matters for human decision-making too
  • Extraneous variables can compromise even expert judgment

Key Concept

Fallibility

AI is not more or less fallible than humans. It is differently fallible.

Human Error

Comprehensible within the same cognitive framework: fatigue, distraction, oversight.

AI Error

A fundamentally different category: no evidence was evaluated. Outputs result from pattern completion, not reasoning.

AI-Specific Fallibility

Hallucinations: A New Kind of Error

Humans make typos; AI fabricates plausible but entirely nonexistent entities.

Responsibility distribution: unequal shares across actors, with different remedies at each level.

Nuance

Fallibility ≠ Unethical

The capacity for error alone does not render AI unethical.

Self-Driving Car

Ethically problematic not for potential harm (humans share that), but for its lack of accountability.

Drunk Driver

Inherently maleficent and fully accountable. Entirely different ethical profile.

Going Deeper

The Ethical AI Iceberg

Surface-level reasoning yields shallow conclusions. The substantive challenges lie beneath.

Ethical AI Iceberg by Alex Issakova

Image by Alex Issakova. At the tip: visible compliance. Below the waterline: systemic, structural challenges.

Provocative View

"There's No Such Thing as Ethical AI"

Monique Tschofen, Professor at Toronto Metropolitan University:

The designers of LLMs are monetizing their platforms by delivering content that goes expressly against the values that contemporary universities purport to hold.

Core argument: costs are systemic and infrastructural, not individual. The economy sustaining AI is itself unethical.

A deliberately provocative position, useful for revealing the breadth of the debate.

Live Poll

After Everything We Discussed...

Has your position changed since the beginning?

More of a Doomer now
More of an Accelerationist now
More confused (as intended!)
More confident navigating these issues

Conclusion

Actor-Based Decomposition

The key question is not "is this fair?" but: where did the failure enter, and where can it be corrected?

  • Data bias → representational corrections
  • Annotation bias → labor practices
  • Design bias → technical interventions
  • Deployment bias → institutional due diligence
  • Regulatory failure → political action
  • User credulity → education

6 principles. 4 actors. Different remedies at each level.

Whose Ethos ?

You will choose the tools, the vendors, the claims you accept.

The answer is, ultimately, yours.

Thank You

Ethics of AI. An Overview

Lorenzo L. D. Incardona, PhD in Semiotics
United International Business School
lldincardona.com
Powered by Claude Code (Opus 4.6) · Images: Midjourney
Floridi, The Ethics of AI (OUP) · Algorithm Watch Inventory · Artificiality Institute · Cognitive Sovereignty (Edwards) · Albanian Council of Ministers · PNAS: Judicial Rulings Study

UIBS · Extra Curricular Activity · 2026