Lorenzo L. D. Incardona, PhD
lldincardona.com
Principles, actors, dilemmas, and your role in shaping the future of AI governance.
Goal: not prescriptive answers, but a sharper ability to navigate complexity.
Expected outcome: more uncertainty about easy answers, more confidence in identifying and analyzing ethical issues.
A shared understanding of AI ethics provides a common, critical ground from which effective solutions can emerge.
Open discussion is encouraged throughout.
"Frameworks that seek to set out principles of how systems for automated decision-making (ADM) can be developed and implemented ethically"
1. Beneficence
2. Non-maleficence
3. Autonomy
4. Justice
5. Transparency (interpretability)
6. Accountability
Floridi groups principles 5 and 6 under the single heading "Explicability", but they are better kept separate.
From Greek ἦθος (ethos): habit, custom, character. Whose ethos is at stake in AI ethics?
Behavioral patterns of the AI tools themselves
Habits and decisions of those who operate AI
Practices of the developers, companies, governments, and institutions behind AI
An apparently straightforward ethical issue.
Upon analysis, far more complex than it seems.
AI-assisted assignment completion arguably violates multiple principles.
Accountability, however, remains intact: the output is still attributed to the student, regardless of how it was produced.
When a student uses AI for an assignment, which ethical principle is most seriously violated?
Inaction raises ethical concerns:
Should regulators mandate AI policies for education, or defer to institutional autonomy?
Open questions. The ethical framework shifts with the actor's perspective.
We will later examine the position that AI is inherently unethical by nature.
AI is a node in a network of interconnected actors:
Humans: users, scientists, developers
Organizations: companies, governments, institutions
Machines: chatbots, agents, robots
Ethical assessment applies to the entire network, not to any single actor in isolation.
Each issue has a primary actor, but responsibility is always distributed across the chain.
Drag each floating issue to the actor zone it is mainly related to. Some are wildcards!
Sycophancy: a manipulative interaction pattern in which the AI validates ideas uncritically, regardless of merit.
Seemingly harmless, but linked to tragic outcomes, including fatal interactions involving self-harm.
Accountability requires interpretability.
If emergent behaviors are intrinsic to the architecture, can developers be held responsible for them?
LLM decision paths remain largely opaque, even to their creators.
Mechanistic interpretability is an active area of engineering research.
Cognitive surrender: a concept on which Helen Edwards (Artificiality Institute) places strong emphasis.
For issues like sycophancy and cognitive surrender, who bears the most responsibility?
Open the Albanian Council of Ministers page. What stands out?
kryeministria.al/en/menu-qeveria/
The "hungry judge" effect (Danziger et al., 2011): favorable parole rulings drop from ~65% to nearly zero within each session, then return to ~65% after a food break.
AI is not more or less fallible than humans. It is differently fallible.
Human error is comprehensible within a shared cognitive framework: fatigue, distraction, oversight.
AI error is a fundamentally different category: no evidence is evaluated; outputs result from pattern completion, not reasoning.
Humans make typos; AI fabricates plausible but entirely nonexistent entities.
Responsibility distribution: unequal shares across actors, with different remedies at each level.
The capacity for error alone does not render AI unethical.
AI error is ethically problematic not for its potential to harm (humans share that), but for its lack of accountability.
An agent that is inherently maleficent and fully accountable presents an entirely different ethical profile.
Surface-level reasoning yields shallow conclusions. The substantive challenges lie beneath.
Image by Alex Issakova. At the tip: visible compliance. Below the surface: systemic, structural challenges.
Monique Tschofen, Professor at Toronto Metropolitan University:
"The designers of LLMs are monetizing their platforms by delivering content that goes expressly against the values that contemporary universities purport to hold."
Core argument: costs are systemic and infrastructural, not individual. The economy sustaining AI is itself unethical.
A deliberately provocative position, useful for revealing the breadth of the debate.
Has your position changed since the beginning?
The key question is not "is this fair?" but: where did the failure enter, and where can it be corrected?
Data bias → representational corrections
Annotation bias → labor practices
Design bias → technical interventions
Deployment bias → institutional due diligence
Regulatory failure → political action
User credulity → education
6 principles. 4 actors. Different remedies at each level.
You will choose the tools, the vendors, the claims you accept.
The answer is, ultimately, yours.
UIBS · Extra Curricular Activity · 2026