Amanda Maria Horzyk
Email: A.M.Horzyk@sms.ed.ac.uk
Website: https://linktr.ee/amandahorzyk
LinkedIn: https://www.linkedin.com/in/amanda-horzyk-73819016a/
Research keywords: Algorithmic Transparency, AI Law, Regulation, Policy, Explainable Artificial Intelligence (XAI)
Bio:
Amanda is a doctoral researcher with an LLM in Innovation, Technology and the Law and an LLB in Law with Business and Management, with a background in Internet Law, Artificial Intelligence Regulation and Ethics, and Virtual Reality. Her growing expertise lies at the nexus of law and data-driven technologies affecting copyright, data protection and privacy, and the right to non-discrimination. Her research interests include by-design laws; the legal and ethical context of adversarial data poisoning and data scraping; watermarking; predictive analytics; AI-driven targeted advertising; the legal implications of ADM for meaningful human decision-making; and, increasingly, international AI policy development. Her ongoing contributions include founding the AI, Law and Regulation (‘ALR’) Pioneers Group, coordinating the ALR Section of the International Neural Network Society, and facilitating annual ALR Special Sessions as well as supporting XAI Special Sessions.
Amanda is motivated by interdisciplinary research that brings together technical and socio-legal expertise. Highlights include participation in the Munich Convention White Paper on AI and Data and Human Rights Working Group, the Centre for Digital and AI Policy Research Group, the Globethics Forum and Geneva Peace Week, and Scotland’s Ethical Digital Nation Expert Workshops, as well as affiliation with the Centre for Technomoral Futures.
PhD research:
Amanda investigates how Explainable AI (XAI) can advance the policy objectives behind multi-dimensional transparency obligations across high-risk AI applications. Her research examines AI governance approaches that bridge policy aspirations and technical solutions to operationalize algorithmic transparency obligations in evolving legislation. Her project scrutinizes these solutions through participatory, human-centered approaches, considering how different actors interact with Algorithmic Decision-Making and Decision Support Systems. It examines the limitations of XAI and governance frameworks in addressing individual and collective needs for meaningful explanations given varying levels of technical literacy. Its dual output aspires to inform both the policy and technical communities navigating the AI regulatory landscape.
Supervisors: Lachlan Urquhart, Burkhard Schafer