Between Law and Technology: Explainable AI and AI Assurance
Venue
Seminar Room 2, Chrystal Macmillan Building
George Square
Description
Controversies in the Data Society 2026 series - Session 8
This session will include:
- A talk by Dr Vaishak Belle called The Future is Neuro-Symbolic: Where has it been, and where is it going?
- A talk by Professor Lilian Edwards called Faithful or Traitor? The right of explanation in a generative AI world
About this session
Professor Edwards describes her talk (with numbered references to the reading list below):
The right to an explanation is having another moment. Well after the heyday of 2016-2018, when scholars tussled over whether the GDPR (in either art 22 or arts 13-15) conferred a right to explanation, the CJEU case of Dun & Bradstreet[1] has finally confirmed its existence, and the Platform Work Directive has wholesale revamped art 22 in its Algorithmic Management chapter. Most recently, the EU AI Act added its own Frankenstein-like right to an explanation (art 86) of AI systems[2].
None of these provisions, however, pins down what the essence of the explanation should be, given that many notions can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; a ‘plausible’ description that may be appealing in a behaviouralist sense but might actually be misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al[3] argue that the tendency of UI designers, regulators and judges alike to lean towards the plausibility end may be unsuited to large language models, which represent far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by the CJEU in Dun & Bradstreet, above. This paper argues that ‘chain of thought’ is not a good way to generate explanations of large model outputs in high-stakes areas, and that the law may, counter-intuitively, have to rethink the longstanding presumption of European law in favour of ‘easy to understand’ explanations in ‘clear and plain language’[4].
About the speakers
Dr Vaishak Belle is Reader in Logic and Learning at the University of Edinburgh’s School of Informatics, an Alan Turing Fellow, and a Royal Society University Research Fellow. His career has centred on research into the science and technology of AI. He directs a research lab on artificial intelligence, specialising in the unification of logic and machine learning, with a recent emphasis on explainability and ethics. He is also Director of Research and Innovation at the Bayes Centre. He has published close to 120 peer-reviewed articles, won best paper awards, and consulted with banks on explainability.
Professor Lilian Edwards is a Visiting Fellow at the University of Edinburgh School of Law. She is a Scottish UK-based academic and frequent speaker on issues of internet law, intellectual property and artificial intelligence. She is on the Advisory Board of the Open Rights Group and the Foundation for Internet Privacy Research, and is Emeritus Professor of Law, Innovation and Society at Newcastle Law School at Newcastle University. She has co-edited (both with Charlotte Waelde and alone) four editions of a textbook, Law and the Internet (later Law, Policy and the Internet); the fifth edition is due out later this year. She is Associate Director, and was co-founder, of the Arts and Humanities Research Council (AHRC) Centre for IP and Technology Law (now SCRIPT). Professor Edwards has consulted for the EU Commission, the Organisation for Economic Co-operation and Development (OECD), and the World Intellectual Property Organization (WIPO). She co-chairs GikII, an annual series of international workshops on the intersections between law, technology and popular culture. Edwards is Deputy Director of CREATe, the Centre for Creativity, Regulation, Enterprise and Technology, a Research Councils UK research centre about copyright and business models.
Reading
Vaishak Belle and Gary Marcus, “The future is neuro-symbolic: Where has it been, and where is it going?”, Proceedings of the 40th AAAI Conference on Artificial Intelligence, 2026, at https://www.research.ed.ac.uk/en/publications/the-future-is-neuro-symbo…
[1] C-203/22 - Dun & Bradstreet Austria
[2] See discussion in S. Demkova, “The AI Act’s Right to Explanation: A Plea for an Integrated Remedy”, Oct 31 2024, at https://www.medialaws.eu/the-ai-acts-right-to-explanation-a-plea-for-an-integrated-remedy/ ; G. Malgieri and M. Kaminski, “The Right to Explanation in the AI Act”, U of Colorado Law Legal Studies Research Paper No. 25-9, 2025.
[3] C. Agarwal et al, “Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models”, v3, 2024, at https://arxiv.org/abs/2402.04614
[4] See GDPR, recital 58
Key speakers
- Vaishak Belle
- Lilian Edwards