ReEnTrust: Rebuilding and Enhancing Trust in Algorithms
As interaction on Web-based platforms becomes an essential part of people's everyday lives and data-driven AI
algorithms begin to exert a massive influence on society, we are experiencing significant tensions in user perspectives
on how these algorithms are used on the Web. These tensions result in a breakdown of trust: users do not know when
to trust the outcomes of algorithmic processes and, consequently, the platforms that use them. Because trust is a key component
of the Digital Economy, where algorithmic decisions affect citizens' everyday lives, this is a significant issue that needs
to be addressed.
ReEnTrust explores new technological opportunities for platforms to regain user trust and aims
to identify how this may be achieved in ways that are user-driven and responsible. Focusing on AI algorithms and large-scale
platforms used by the general public, our research questions include: What are user expectations and requirements regarding
the rebuilding of trust in algorithmic systems, once that trust has been lost? Is it possible to create technological solutions
that rebuild trust by embedding values in recommendation, prediction, and information filtering algorithms and allowing for
a productive debate on algorithm design between all stakeholders? To what extent can user trust be regained through technological
solutions and what further trust rebuilding mechanisms might be necessary and appropriate, including policy, regulation, and
education?
ReEnTrust builds on the work of the successful UnBias
project. It will develop an experimental online tool that allows users to evaluate and critique algorithms used by online
platforms, and to engage in dialogue and collective reflection with all relevant stakeholders in order to jointly recover
from algorithmic behaviour that has caused loss of trust. For this purpose, we will develop novel AI-driven mediation
support techniques that allow all parties to explain their views and that suggest possible compromise solutions. Extensive engagement
with users, stakeholders, and platform service providers in the process of developing this online tool will result in an improved
understanding of what makes AI algorithms trustable. We will also develop policy recommendations, requirements for
technological solutions, and assessment criteria for the inclusion of trust relationships in the development of
algorithmically mediated systems, as well as a methodology for deriving a "trust index" that allows users to assess the
trustability of online platforms easily.
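To make the idea of a "trust index" concrete, here is a minimal illustrative sketch, not the project's methodology (which is a deliverable still to be developed). It assumes the index is a weighted average of user-assessed scores over trust dimensions, scaled to 0-100; the dimension names, weights, and scale are all assumptions introduced for illustration.

from dataclasses import dataclass

# Illustrative sketch only: ReEnTrust's actual trust-index methodology
# is not defined here. We assume user assessments have already been
# aggregated per dimension into scores in [0, 1].

@dataclass
class Dimension:
    name: str      # hypothetical dimension, e.g. "transparency"
    score: float   # aggregated user rating in [0, 1]
    weight: float  # assumed community-assigned importance, >= 0

def trust_index(dimensions: list[Dimension]) -> float:
    """Weighted average of dimension scores, scaled to [0, 100]."""
    total_weight = sum(d.weight for d in dimensions)
    if total_weight == 0:
        raise ValueError("at least one dimension must carry weight")
    return 100 * sum(d.score * d.weight for d in dimensions) / total_weight

# Example with three made-up dimensions:
print(trust_index([
    Dimension("transparency", score=0.6, weight=2.0),
    Dimension("fairness", score=0.8, weight=3.0),
    Dimension("redress", score=0.4, weight=1.0),
]))  # ~66.7

A weighted average is only one plausible design; the project's engagement with users and stakeholders would determine which dimensions matter and how they should be combined.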
The project is led by the University of Oxford in collaboration with the Universities of
Edinburgh and Nottingham. Edinburgh will develop novel computational techniques to evaluate and critique the values embedded in
algorithms, as well as a prototype AI-supported platform that enables users to exchange opinions about algorithm failures
and to jointly agree on how to "fix" the algorithms in question so as to rebuild trust. The Oxford and Nottingham teams will develop
methodologies that support the user-centred and responsible development of these tools. This involves studying the processes
of trust breakdown and rebuilding on online platforms, and developing a Responsible Research and Innovation approach to understanding
trustability and trust rebuilding in practice. A carefully selected set of industrial and other non-academic partners ensures
that ReEnTrust's work is grounded in real-world examples and experiences, and that it embeds a balanced, fair representation of all
stakeholder groups.
ReEnTrust will advance the state of the art in trust rebuilding technologies for algorithm-driven
online platforms by developing the first AI-supported mediation and conflict resolution techniques, together with a comprehensive
user-centred design and Responsible Research and Innovation framework that promotes a shared-responsibility approach to the use of
algorithms in society, thereby contributing to a flourishing Digital Economy.
Selected Publications
-
"Money makes the world go around": identifying barriers to better privacy in children’s apps from developers’ perspectives
Anirudh Ekambaranathan, Jun Zhao and Max Van Kleek
In Proceedings of the ACM Conference on Human Factors in Computing Systems. 2021.
-
" It's your private information. it's your life." young people's views of personal data use by online technologies
Liz Dowthwaite, Helen Creswick, Virginia Portillo, Jun Zhao, Menisha Patel, Elvira Pérez Vallejos, Ansgar Koene and Marina Jirotka
In Proceedings of the Interaction Design and Children Conference. Pages 121–134. 2020.
DOI: doi.org/10.1145/3392063.3394410
-
Developing A Measure of Online Wellbeing and User Trust
Liz Dowthwaite, Elvira Pérez Vallejos, Helen Creswick, Virginia Portillo, Menisha Patel and Jun Zhao
In Paradigm Shifts in ICT Ethics: Proceedings of the ETHICOMP 2020. 2020.