Ensuring Responsible Governance of AI and Autonomous Systems

Professor Shannon Vallor is working with partners from across the disciplinary spectrum to deliver research programmes that will help build a responsible and humane AI ecosystem.

Artificial Intelligence (AI) is transforming the ways we all live and work. From home robots and virtual personal assistants like Alexa, to driverless vehicles and software decision-making systems in medical diagnostics and finance, to GPT-4's integration in Microsoft Office tools, AI and autonomous systems are revolutionising domestic, workplace, healthcare and industrial settings. Huge strides in technological capability are being made at pace, but the potential benefits of these technologies can only be fully realised if all parts of society, including the public, can trust that they have been developed responsibly from the outset. This requires a co-ordinated and considered approach.

Image: Shannon Vallor face to face with a robot

A collaborative approach

Professor Shannon Vallor, Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute, has been appointed to direct BRAID (Bridging Responsible AI Divides), a £3.5 million research programme to enable a responsible AI ecosystem, delivered in collaboration with the Ada Lovelace Institute. Professor Vallor is leading the programme with Professor Ewa Luger, who holds a personal chair in Human-Data Interaction within the School of Design and is Co-Director of the Institute of Design Informatics and Director of Research Innovation for Edinburgh College of Art.

The Arts and Humanities Research Council (AHRC), part of UK Research and Innovation (UKRI), is supporting the three-year project. The first large-scale research programme on AI ethics and regulation in the UK, it focuses on translating knowledge across different communities in the Responsible AI ecosystem, fostering wider embedding and adoption of Responsible AI research and practices, and enhancing accountability in the AI ecosystem. It also seeks to bring fresh perspectives and voices from the creative arts, humanities and social sciences into the AI ecosystem to enrich its capacity to deliver more humane, inspired, equitable and resilient innovation.

By combining the expertise of researchers and innovators working across philosophy, human-computer interaction, law, art, health, and informatics, alongside researchers from the BBC, the programme addresses the need for a responsible AI ecosystem that is more responsive to the needs and challenges faced by policymakers, regulators, technologists, and the public. Key to this is bridging knowledge and communication gaps between these stakeholders to help build confidence in AI, enable innovation, stimulate economic growth and deliver wider public benefit.

Trustworthy Autonomous Systems

Professor Vallor also leads the £690,000 project “Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems”. The multidisciplinary project is part of the £33 million UKRI Trustworthy Autonomous Systems (TAS) Programme, a collaborative UK-based platform that brings together research communities and key stakeholders to drive forward cross-disciplinary, fundamental research to ensure that autonomous systems are safe, reliable, resilient, ethical and trusted. Professor Vallor is also a Co-Investigator on a separate £2.3 million TAS node on governance and regulation.

Autonomous systems are increasingly part of our lives, but as these systems move into high-stakes domains such as healthcare and transportation, it is vital that social trust is maintained. Professor Vallor and her team are working to ensure that systems maintain that trust by delivering answerability: people who rely on these systems should be able to get the answers they need and would expect from trustworthy, responsible partners in society. The project draws on cognitive science, sociolegal studies and philosophy to understand the many ways that human agents can answer for their actions, and uses AI expertise to translate this knowledge to autonomous systems used in health, finance and government.

Working with partners at the NHS AI Lab, enterprise software company SAS, and Scotland’s Digital Directorate, the project will deliver tools and guides for enhancing system answerability in these sectors through dialogical design; scholarly publications that explore the philosophical, legal and technical dimensions of system answerability; and industry, regulatory and public sector events to help disseminate novel design techniques and frameworks for making autonomous systems more answerable to people.

Holding one another responsible for our actions is a pillar of social trust. A vital challenge in today’s world is ensuring that AI systems strengthen rather than weaken that trust. Our innovative multidisciplinary collaborations are interweaving diverse bodies of knowledge and practice, sectors and publics to enable the emergence of a healthy AI ecosystem in the UK, one that is responsible, ethical, and accountable by default.

Professor Shannon Vallor
Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence

Related links:

Centre for Technomoral Futures

The UKRI Trustworthy Autonomous Systems Hub and Research Programme

Making Systems Answer Project Homepage

Ada Lovelace Institute