Note: This page's design, presentation and content have been created and enhanced using Claude (Anthropic's AI assistant) to improve visual quality and educational experience.
Week 4 • L5

🌐 The Broader Landscape of AI Ethics

Important dimensions we haven't fully covered — and where to explore them further

Why This Page Exists

The four sessions of this week give you a framework for ethical reasoning about AI in research — philosophical lenses, African-grounded approaches, transparency and integrity norms, and practical case studies. But AI ethics is a vast and rapidly evolving field, and we have necessarily been selective.

This page maps some of the important dimensions we have not been able to cover in depth. It is not intended as comprehensive reading — rather, it is an honest acknowledgment that the territory is much larger than any single week of a course can represent. Each section provides a brief orientation and pointers to freely accessible resources for those who want to go further.

If any of these topics connect with your own research, consider exploring them as part of your personal ethical framework assessment or your research enhancement project.

👷 Labour and Exploitation in AI Systems

AI systems do not build themselves. Behind the apparent magic of large language models and image classifiers is an enormous amount of human labour — much of it invisible, poorly paid, and psychologically harmful.

Ghost Work

The term "ghost work" — coined by Mary L. Gray and Siddharth Suri — describes the hidden human labour that makes AI systems appear intelligent. Data labellers, content taggers, and quality checkers perform the cognitive piecework that training and maintaining AI systems require. An estimated 8% of Americans have participated in this "ghost economy," and the figure is growing globally.

These workers typically lack employment protections, benefits, or job security. Their work is deliberately invisible — because the illusion of fully automated intelligence is more marketable than the reality of human-in-the-loop systems.

Content Moderation and Psychological Harm

To make AI systems like ChatGPT safe, companies outsource the task of labelling toxic content — descriptions of violence, abuse, and exploitation — to workers in countries like Kenya, the Philippines, and India. A 2023 investigation by TIME magazine revealed that workers making ChatGPT less toxic were paid less than $2 per hour and described being mentally scarred by the material they were required to read and classify.

This raises profound questions about who bears the psychological cost of AI safety, and about the global labour inequities embedded in the AI supply chain.

📄 Further Reading

Gray, M.L. & Suri, S. (2019): Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass — Book website with overview and resources.

Perrigo, B. (2023): "OpenAI Used Kenyan Workers on Less Than $2 Per Hour" — TIME. The investigation that brought global attention to the human cost of AI content moderation.

🔍 Surveillance, Policing, and Carceral AI

AI is increasingly deployed in surveillance, predictive policing, facial recognition, and criminal justice decision-making — domains where errors and biases have direct consequences for human freedom.

Predictive Policing and Bail Algorithms

AI systems trained on historical crime data tend to reproduce and amplify existing patterns of discriminatory policing. If a neighbourhood has been over-policed, the data will show more crime there — and the algorithm will recommend more policing, creating a feedback loop. Similar concerns apply to bail and sentencing algorithms, where studies have documented racial disparities in risk assessments.
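The dynamic is easy to demonstrate. Below is a minimal, purely illustrative simulation (the neighbourhoods, crime rates, and allocation rule are invented for this page, not taken from any deployed system): two neighbourhoods have identical underlying crime rates, but one starts out over-policed, and each round's patrols are allocated in proportion to the crime recorded in the previous round.

```python
import random

random.seed(42)

# Two neighbourhoods with the SAME underlying crime rate.
true_crime_rate = {"A": 0.10, "B": 0.10}

# Historical bias: neighbourhood A starts out over-policed.
patrols = {"A": 70, "B": 30}  # patrol units per round, 100 total

for round_num in range(1, 6):
    # Recorded crime depends on how much you look, not only on how
    # much crime exists: more patrols means more recorded incidents.
    recorded = {
        n: sum(random.random() < true_crime_rate[n] for _ in range(patrols[n]))
        for n in patrols
    }
    # The "predictive" step: allocate the next round's 100 patrols in
    # proportion to recorded crime. This is where the loop closes.
    total = sum(recorded.values()) or 1  # guard against a zero round
    patrols = {n: max(1, round(100 * recorded[n] / total)) for n in recorded}
    print(f"round {round_num}: recorded={recorded} next_patrols={patrols}")
```

Because recorded crime tracks where the patrols are rather than where crime actually is, the initial 70/30 disparity reproduces itself round after round: the system keeps "confirming" a bias it inherited from its starting data.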

Facial Recognition

Facial recognition technology has been shown to have significantly higher error rates for darker-skinned faces and for women — a finding documented in Joy Buolamwini and Timnit Gebru's landmark 2018 "Gender Shades" study. When these systems are deployed in policing and border control, their failures are not abstract: they lead to wrongful detentions and misidentifications of real people.
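The methodological move behind "Gender Shades" is worth seeing concretely: report error rates per demographic subgroup rather than as a single aggregate. Here is a minimal sketch with made-up evaluation records (the subgroup labels follow the study's gender and skin-type intersections, but the numbers are invented for illustration):

```python
from collections import defaultdict

# Hypothetical per-sample evaluation records: (subgroup, was the
# classifier correct?). The records below are invented, not the
# study's actual data.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

# The aggregate figure hides the disparity...
overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.0%}")  # 50%

# ...so compute accuracy for each subgroup separately.
by_group = defaultdict(list)
for group, ok in results:
    by_group[group].append(ok)

for group, oks in by_group.items():
    print(f"{group}: {sum(oks) / len(oks):.0%} ({len(oks)} samples)")
```

On these toy numbers the aggregate accuracy of 50% conceals a 75% versus 25% gap between subgroups. Disaggregated evaluation of exactly this kind is what made the commercial systems' disparities visible.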

📄 Further Reading

Algorithmic Justice League — Founded by Joy Buolamwini. Research and advocacy on AI bias, particularly in facial recognition.

Human Rights Watch: AI and Human Rights — Coverage of AI in policing, surveillance, and criminal justice from a human rights perspective.

⚔️ Military and Dual-Use Applications

AI developed for civilian purposes can often be repurposed for military applications — and autonomous weapons systems raise some of the most urgent ethical questions in the field.

Autonomous Weapons and the "Killer Robots" Debate

The Campaign to Stop Killer Robots — a coalition of over 250 organisations — has been advocating since 2013 for international law prohibiting lethal autonomous weapons systems: weapons that can select and engage targets without meaningful human control. In December 2023, 152 countries voted in favour of a UN General Assembly resolution addressing the dangers of such systems.

The dual-use problem is particularly relevant for researchers: AI tools developed for image recognition, natural language processing, or autonomous navigation can be adapted for targeting systems, surveillance, or cyberweapons. This is not a hypothetical concern — it is a live debate within AI research communities about where the boundaries of responsible research lie.

📄 Further Reading

Stop Killer Robots — Campaign website with resources on autonomous weapons, international law, and the case for human control over the use of force.

International Committee of the Red Cross: Position on Autonomous Weapons — The ICRC's analysis from an international humanitarian law perspective.

🏢 Concentration of Corporate Power

The AI landscape is dominated by a small number of companies with extraordinary concentrations of data, compute, talent, and capital. This has implications for democratic governance, research independence, and equitable access.

Surveillance Capitalism and Knowledge Monopolies

Shoshana Zuboff's concept of "surveillance capitalism" describes how companies extract behavioural data at scale and use it to predict and influence human behaviour. The largest AI companies control not only the technology but also the data, the infrastructure, the researchers, and — increasingly — the governance frameworks. This concentration of knowledge and power raises questions that go well beyond market competition to the foundations of democratic self-governance.

For researchers, the dependence on commercial AI tools raises questions about intellectual independence: when a handful of companies provide the tools through which research is conducted, what does that mean for the autonomy and diversity of knowledge production?

📄 Further Reading

Harvard Gazette (2019): "Harvard Professor Says Surveillance Capitalism Is Undermining Democracy" — Accessible introduction to Zuboff's framework.

AI Now Institute — Research institute focused on the social implications of artificial intelligence, including corporate power and accountability.

🎭 Deepfakes, Disinformation, and Democratic Erosion

AI-generated synthetic media can fabricate video, audio, and images that are increasingly difficult to distinguish from authentic content — with direct implications for trust, journalism, and democratic processes.

Elections and Public Trust

The 2024 election cycle saw AI-generated content deployed in elections worldwide. In the US, deepfake audio imitating President Biden was used in robocalls to discourage primary voters. In Romania, the first round of the presidential election was annulled after evidence of AI-powered interference. In India, Indonesia, and Mexico, AI-generated deepfakes were used to create defamatory images of female candidates, amplifying misogynistic stereotypes.

While researchers found the overall scale of AI-generated electoral disinformation was lower than feared, the broader concern is about cumulative erosion of trust. When any piece of media could be fabricated, the very notion of shared evidence — essential for both democracy and research — comes under threat.

📄 Further Reading

Brennan Center for Justice: "Gauging the AI Threat to Free and Fair Elections" — Analysis of AI's impact on electoral integrity.

Harvard Ash Center (2024): "The Apocalypse That Wasn't" — Nuanced assessment of AI's actual role in 2024 elections.

♀️ Gender and AI

From gendered virtual assistants to gender bias in training data and hiring algorithms, AI systems often encode and amplify existing gender inequalities.

Gendered Design and Representation

Most AI assistants — Siri, Alexa, Cortana — were designed with female-sounding names, voices, and personalities, reinforcing the association between femininity and servility. A 2019 UNESCO report titled "I'd Blush If I Could" documented how these systems responded submissively to harassment, normalising gender-based abuse at a massive scale.

More broadly, AI systems reflect the demographics of their creators: women make up a small minority of AI researchers and engineers, and this underrepresentation shapes what problems get attention, what data gets collected, and whose needs are served.

📄 Further Reading

UNESCO (2019): "I'd Blush If I Could: Closing Gender Divides in Digital Skills Through Education" — The report that catalysed global attention to gendered AI design. Free PDF.

UNESCO: AI and Gender Equality — Ongoing resources and policy recommendations.

♿ Disability, Neurodiversity, and Accessibility

AI can be a powerful enabler for people with disabilities — but it can also encode ableist assumptions and create new forms of exclusion.

AI as Both Enabler and Barrier

AI powers assistive technologies — speech recognition, image description, predictive text — that can transform accessibility. But the same AI systems can also discriminate: hiring algorithms that penalise candidates with speech differences or non-standard work histories; facial recognition that fails for people with facial differences; and data systems that treat disability-related patterns as outliers to be discarded.

Meredith Whittaker and colleagues at the AI Now Institute have argued that disability is "at the margins of all other justice-deserving groups," making disabled people particularly vulnerable to AI harms. Jutta Treviranus, a pioneer in inclusive design, observes that AI systems often fail for anyone who does not conform to narrow definitions of "normal" — and that designing for the margins improves systems for everyone.

📄 Further Reading

Whittaker, M. et al. (2019): "Disability, Bias, and AI" — AI Now Institute. Foundational report on how AI systems perpetuate ableism. Free PDF.

W3C AI and Accessibility Research Symposium (2023) — Resources from the World Wide Web Consortium's exploration of AI and accessibility.

🧠 Emotional and Psychological Dimensions

As AI systems become more conversational and emotionally responsive, they raise new questions about dependency, parasocial relationships, and the psychological effects of human-AI interaction.

AI Companionship and Dependency

Companion chatbots and emotionally responsive AI systems are now among the most popular uses of generative AI. Research suggests that users frequently form parasocial attachments to AI chatbots — one-sided emotional bonds that can lead to dependency. A longitudinal study by MIT Media Lab found that higher daily chatbot usage correlated with increased loneliness, emotional dependence, and reduced real-world social interaction.

This raises particular concerns for vulnerable populations — adolescents, people experiencing loneliness or mental health difficulties — and for the design incentives of companies that profit from engagement. When chatbots are optimised to be emotionally engaging, there is a risk that they may exploit users' social and emotional needs rather than genuinely meeting them.

📄 Further Reading

Nature Machine Intelligence (2025): "Emotional Risks of AI Companions Demand Attention" — Editorial on the psychological risks of AI companionship.

MIT Media Lab (2025): "How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use" — Longitudinal controlled study on the effects of AI chatbot interaction.

⚖️ Accountability and Legal Liability

When AI causes harm — in research, in healthcare, in public services — who is legally responsible? This question remains largely unsettled.

The Liability Gap

Existing legal frameworks were designed for a world where humans make decisions and are accountable for their consequences. AI introduces what legal scholars call an "accountability gap": the developer, the deployer, the user, and the AI system itself all play roles in producing outcomes, but responsibility is often unclear.

The EU AI Act (2024) — the first comprehensive AI legislation — attempts to address this through a risk-based regulatory framework, but even the EU withdrew its proposed AI Liability Directive in 2025 due to lack of consensus. For researchers, the implication is clear: legal frameworks are lagging behind technological capability. Ethical reasoning cannot wait for the law to catch up.

📄 Further Reading

EU AI Act: High-Level Summary — Accessible overview of the world's first comprehensive AI legislation.

European Commission: Regulatory Framework for AI — Official resource on the EU's approach to AI governance.

🌏 Indigenous Data Sovereignty Beyond Africa

The questions about data ownership and community governance that we explored through ubuntu and the Esethu Framework are part of a global movement for indigenous data sovereignty.

The CARE Principles

The CARE Principles for Indigenous Data Governance — Collective Benefit, Authority to Control, Responsibility, and Ethics — were developed by the Global Indigenous Data Alliance and the Research Data Alliance's International Indigenous Data Sovereignty Interest Group. First articulated at a workshop in Gaborone, Botswana, in 2018, they complement the FAIR data principles (Findable, Accessible, Interoperable, Reusable) by centring the rights and interests of Indigenous Peoples.

The principles address a fundamental tension: open data initiatives that promote broad sharing can benefit better-resourced institutions at the expense of the communities whose knowledge and data are being shared. The CARE Principles insist that Indigenous communities must have authority over their own data, that data use must provide collective benefit, and that researchers have ethical responsibilities that go beyond individual consent.

Related frameworks include OCAP (Ownership, Control, Access, Possession) developed by First Nations in Canada, and Māori Data Sovereignty principles from Aotearoa New Zealand, which assert that data about Māori people and resources is a taonga (treasure) that should be governed according to Māori values and tikanga (customary practices).

📄 Further Reading

Carroll, S.R. et al. (2020): "The CARE Principles for Indigenous Data Governance" — Data Science Journal. The foundational paper. Open access.

Global Indigenous Data Alliance: CARE Principles — Overview and resources from the alliance that developed the framework.

💜 Feminist Ethics of Care

The ethics of care — a tradition developed in feminist moral philosophy — shares significant ground with ubuntu but brings its own distinctive insights to AI ethics.

From Care to Technology

Care ethics, developed by scholars including Carol Gilligan, Nel Noddings, Virginia Held, and Joan Tronto, holds that moral action centres on interpersonal relationships and responsiveness to concrete situations rather than abstract principles. Where consequentialism asks "what produces the best outcomes?" and deontology asks "what is my duty?", care ethics asks "what does this relationship require of me?"

Applied to AI, care ethics raises distinctive questions: Does this technology support or undermine caring relationships? Who performs the care work that AI systems depend on (and who profits from it)? Does the automation of care — in healthcare, education, social services — enhance or diminish the quality of human connection? Joan Tronto's concept of "homines curans" (caring people) directly challenges the assumption of autonomous, rational individuals that underpins much of Western AI ethics.

Care ethics shares ubuntu's emphasis on relationality and interdependence, and its compatibility with non-Western ethical traditions makes it a powerful bridge between different ethical frameworks.

📄 Further Reading

Internet Encyclopedia of Philosophy: "Care Ethics" — Comprehensive, freely accessible overview of the tradition and its key thinkers.

Ethics of Care: Joan Tronto — Resources on Tronto's work connecting care ethics with political and institutional analysis.

💼 AI and the Labour Market

Beyond the hidden labour behind AI systems, there are broader questions about how AI is reshaping employment — including in research and academia.

Displacement, Transformation, and Inequality

The IMF estimates that approximately 40% of global employment is exposed to AI, with advanced economies facing the greatest disruption. The World Economic Forum projects 92 million jobs displaced by 2030 alongside 170 million new jobs created — a net gain of 78 million on paper, but one whose losses and gains are distributed deeply unequally. Older workers, those without higher education, and those in the Global South face the greatest risks of displacement without corresponding access to new opportunities.

For researchers and academics, AI raises specific concerns: Will AI-assisted research devalue traditional scholarly skills? Will institutions use AI to reduce research positions? How should universities prepare postgraduates for a labour market where AI competence is increasingly expected but equitable access to AI tools is not guaranteed?

📄 Further Reading

IMF (2024): "Gen-AI: Artificial Intelligence and the Future of Work" — Staff Discussion Note on global employment impacts. Free PDF.

🏛️ Institutions and Ongoing Resources

AI ethics is not only an academic field — it is an active domain of institutional research, advocacy, and policy-making. Here are some organisations doing important work that connects to the themes of this course.

UCT Ethics Lab

The Ethics Lab at UCT's Faculty of Health Sciences is an interdisciplinary research unit advancing ethical scholarship in health research and innovation. Its work spans global health ethics, decolonising health research in Africa, and building Africa's voice in local and global health ethics conversations. While not focused exclusively on AI, the Lab's emphasis on epistemic justice, African-grounded ethics, and the interconnection of human, animal, and planetary wellbeing provides foundational resources for thinking about AI ethics in African contexts.

Global Center on AI Governance

The Global Center on AI Governance (GCG) works to reduce global inequalities exacerbated by AI through research, policy advice, and training. Described as Africa's leading voice in AI policy and governance, the GCG engages with over 150 countries and offers courses on responsible AI and AI ethics and policy in Africa. Their work on national AI strategy analysis, public perceptions of AI in South Africa, and the Global Index on Responsible AI provides valuable resources for understanding AI governance from an African perspective.

Research ICT Africa

Research ICT Africa (RIA) — whose Just AI Framework of Inquiry we explored in Sub-Lesson 2 — conducts research on the digital economy, AI governance, and data justice across the African continent. Their work bridges local realities and global governance forums, generating evidence for equitable AI policy.

🗺️ The Map Is Not the Territory

This page has sketched a dozen dimensions of AI ethics that extend beyond what we could cover this week. There are others still — AI and environmental justice (which you explored in Week 3), AI and children, religious and spiritual perspectives on technology, posthumanism and questions about AI consciousness, and more.

The point is not that you need to master all of these domains. The point is that ethical reasoning about AI requires intellectual humility — an awareness that the questions are bigger than any single framework, and that the landscape is evolving faster than any curriculum can track. The philosophical lenses and practical tools from this week give you a foundation. What you build on that foundation, in your own research and practice, is up to you.

📚 Back to the Core

This supplementary page is intended as a resource for further exploration, not as required reading. The core content of Week 4 — ethical frameworks, ubuntu and relational ethics, transparency and integrity, and practical case studies — provides the foundation you need for the assessments and for ethical reasoning in your research.

Next week (Week 5): We move from ethical foundations to practical application — AI-assisted literature review. How can AI help you find, organise, and synthesise research literature? What are the risks, and how do you use these tools responsibly?