For Karen Yeung, a Professor of Law at King’s, artificial intelligence is a double-edged sword. These systems, which use vast amounts of data to learn from examples and make automated decisions, can provide enormous benefits in areas ranging from genomics and particle physics to health and cancer research.
I think that one thing lots of people don’t realise is just how widespread and commonplace machine learning systems have become in our society
Professor Karen Yeung, Director of the Centre for TELOS (Technology, Ethics, Law and Society), King’s College London
What many don’t realise, Professor Yeung adds, and what some of her research focuses on, is the sheer scale at which such powerful and complex systems infiltrate our day-to-day lives. From access to employment, educational opportunities, insurance and financial loans to housing, critical decisions in our society are driven by AI more than we may realise. Companies gather profiles of our lives, Professor Yeung explains, allowing systems to make automated decisions about us. ‘One of the problems is we might not even be aware that these kinds of decisions are being made about us as we go about our daily life,’ she says.
In learning from examples, the machines may pick up on biases in the data sample, which, Professor Yeung explains, could have a profound effect:
‘There’s a real problem concerning the potential for these systems to discriminate. They learn by example from existing data – historic examples that may reinforce social divisions, place vulnerable groups in positions of further vulnerability and perpetuate stereotypes. These kinds of biases can creep into our environment that will inevitably reinforce systematic discrimination that is built into our social structure,’ she says.
To counter this, Professor Yeung is working to understand the consequences of the rise of algorithmic power, which she hopes will ultimately lead to greater accountability and stronger regulation.
Recent policy initiatives include a report published by The Royal Society which draws attention to the need for skills development around data governance; The Royal Society and British Academy identifying a series of challenges of Data Governance in the 21st Century; the Nuffield Foundation’s commitment to setting up a Convention on Data Ethics; and the government’s recent announcement in the Autumn Budget that it will set up a Centre for Data Ethics and Innovation.
As Director of TELOS, a group concerned with exploring the legal, regulatory and ethical dimensions of emerging technology, Professor Yeung recently published work on Algorithmic Regulation and, together with colleagues Roger Brownsword and Eloise Scotford, published The Oxford Handbook of Law, Regulation and Technology. Last month she presented evidence to the House of Lords Select Committee on Artificial Intelligence, and she was one of the authors of the Royal Society-British Academy data governance report.
Devising solutions, however, is still in its early stages, according to Professor Yeung. The challenge is to come up with effective and legitimate mechanisms that will properly govern these systems while allowing us to reap the benefits of our ever-advancing technology. This, she says, must include mechanisms that extend to the practices and policies of the global corporations that collect and currently govern our data.
‘I think there is going to be a really important role for civil society to come up with applications that could be potentially democracy-enhancing. There is real scope for changing how power operates in our globally networked environment, but there are serious challenges in developing strategies that enable more democratic participation in the design and operation of our digital infrastructure and the complex data ecosystem it has spawned,’ she says.
Find out more about the work done by TELOS here.