This project, by Wesam Hassan, has been funded by the Department's RIIF fund since 2025. It explores how artificial intelligence (AI) is being introduced, understood, and used in everyday healthcare settings. Rather than treating AI as a purely technical innovation, the project examines AI as a social and ethical intervention that reshapes how medical decisions are made, how responsibility is distributed, and how care is experienced by both clinicians and patients. Drawing on ethnographic fieldwork in Egypt and Turkey, the research follows doctors, nurses, medical trainees, and hospital administrators as they encounter AI tools such as diagnostic software, triage systems, and decision-support platforms. These tools are often presented as objective, efficient, and future-oriented solutions to healthcare pressures. In practice, however, they are embedded in existing systems marked by resource constraints, regulatory uncertainty, professional hierarchies, and uneven trust in institutions.
At this early stage of the project, the central question is how AI changes the experience of uncertainty in medicine. Clinical work has always involved judgement under conditions of incomplete information. AI promises to reduce uncertainty by offering predictions, risk scores, or recommended actions. Yet these systems also introduce new forms of uncertainty. Clinicians must decide when to trust an algorithm, when to override it, and who is responsible if something goes wrong. Patients, in turn, must navigate unfamiliar forms of expertise that are neither fully human nor fully transparent. By comparing healthcare settings in Egypt and Turkey, the project highlights how AI adoption is shaped by local political economies, medical training systems, and moral expectations of care. In both contexts, AI is not simply adopted or rejected. It is adapted, negotiated, sometimes resisted, and often repurposed in ways that reflect broader struggles over accountability, professional authority, and the future of public healthcare. The project is ongoing and aims to contribute to debates on digital health, global inequalities in medical innovation, and the ethics of automation. It is designed to unfold in successive stages, each addressing different questions and participants and building on the findings of the stage before it. It also aspires to offer grounded observations and feedback for policymakers, educators, and healthcare institutions seeking to introduce AI technologies in ways that support, rather than undermine, meaningful healthcare.