International Digital Rights Days: A Conversation With Dr. Richard Whittle

Dr. Richard Whittle is an economist at the University of Salford. His research at Salford Business School explores how individuals and society interact and make decisions in online spaces.

Richard’s work focuses on how people understand the decisions they make and how Artificial Intelligence can impact these decisions. More broadly, Richard studies the digital economy, investigating the economic impacts of Artificial Intelligence and institutional preparedness for AI.

To celebrate International Digital Rights Days, we asked Richard to share his insights on the impact of Artificial Intelligence on decision-making, privacy and individual autonomy, as well as the role cities can play in safeguarding digital rights and online freedoms.

Manchester’s Digital Strategy: This year, your secondment from the University of Salford has brought your expertise in Artificial Intelligence to Manchester’s Digital Strategy. Could you tell us a little more about your work and academic interests, and what initially inspired you to pursue your research around AI?

Dr. Richard Whittle: My work focuses on human behaviour and Artificial Intelligence. I investigate how we make decisions online - decisions about what to buy, whether we understand documentation or the terms and conditions we all agree to, and how websites and apps can trick users into spending money or giving away data and privacy. I research the economics of Artificial Intelligence and study institutional AI preparedness, focusing on how universities, financial institutions, and local governments should prepare for AI.

My inspiration to pursue research around AI started from a bottom-up perspective. I have always been interested in how people make decisions. For me, AI is a huge influence on decision-making - whether through social media algorithms, the personalisation of information, or tools which directly make decisions for us, such as those used to assess CVs during hiring processes.

MDS: We’re proud to take part in the International Digital Rights Days campaign for the first time this year, joining cities across three continents to advocate for the protection of human rights in the digital world. Would you be able to share some insights on what the term ‘digital rights’ should mean in practice?

RW: ‘Digital rights’ refers to the extension of fundamental human rights into the digital world. In practice, this encompasses privacy, freedom of expression, equitable access to technology, and protection against discrimination and unlawful activity in digital spaces. Ultimately, everyone should have fair and equitable access to a safe internet.

As AI develops, I suspect these rights will also develop. For instance, whilst discussions around privacy are not new, AI may broaden these discussions beyond data about what we do (such as what we like or repost) to include data about how we actually make these decisions.

My research at the University of Salford investigates how we make decisions online and how we interact with Artificial Intelligence. I’ve seen how technological advancements can empower individuals, but also magnify risks such as surveillance, bias, and the manipulation of choice through deceptive design patterns. My research into dark patterns, for instance, highlights how poorly designed systems can erode user autonomy, emphasising the need for regulatory frameworks to protect individuals in digital ecosystems.

MDS: At a city level, what actions can we take to help more people access the Internet in a safe and fulfilling way, and safeguard users from online exploitation and manipulation? How would you define a ‘safe’ Internet?

RW: Access to the Internet is a cornerstone of modern life, enabling education, economic opportunity, and social inclusion. However, ensuring this access is safe and fulfilling requires thoughtful city-level action, particularly as the risks of exploitation and manipulation grow in the digital space.

Cities could invest in accessible and affordable digital infrastructure, particularly in underserved areas. UK 5G coverage is quite poor, and cities should ensure connectivity for their residents. Partnering with community organisations to provide digital skills training is vital. Teaching basic literacy alongside advanced skills, such as recognising online threats, equips users to navigate the web safely and confidently. Cities could also develop programmes that educate users on identifying online harms such as dark patterns, phishing attempts, and misinformation.

Cities have a significant voice in the national discussion, as they can advocate for and enforce ethical standards in digital services. For example, local governments can ensure vendors adhere to data protection regulations and ethical AI guidelines.

For me, though, the primary role of a city should be to ensure that all of its citizens benefit from a digitally enabled world. A safe Internet is one where users can engage freely and securely, without fear of harm, exploitation, or undue influence. This may mean different things to different people, but fundamentally, if navigating the online world is as essential and commonplace as navigating the physical one, it shouldn’t be any more risky.

MDS: While AI has the potential to be a force for good, it can also be misused. As AI technologies have evolved, so too have the threats they pose to digital rights. In your opinion, what are the biggest risks that people should be aware of, and how can we mitigate them effectively?

RW: AI undoubtedly has huge potential to positively impact our lives, but as it evolves, it also presents complex threats to digital rights that demand immediate attention.

In my research on behavioural science and AI, I've explored how these technologies can shape, and sometimes undermine, individual autonomy and societal fairness. AI-powered surveillance tools can track individuals, aggregate sensitive data, and predict behaviour, often without informed consent. These tools risk normalising invasive monitoring and disproportionately targeting vulnerable groups.

AI systems often replicate and amplify biases embedded in training data, leading to discriminatory outcomes in areas like hiring, policing, or credit scoring. These biases not only harm individuals but can erode trust in technology. As AI develops, new and unexpected biases could emerge.

AI-driven algorithms can exploit behavioural vulnerabilities, steering users toward harmful decisions. This manipulation can lead to financial harm, addiction, loss of privacy, or the spread of misinformation. On a simple level, techniques like making it unnecessarily hard to cancel a subscription - such as an endless series of ‘are you sure’ checks - can manipulate people into keeping and paying for services they no longer want. AI also has the potential to accelerate the development of deepfakes and misinformation, increasing the risk of exploitation.

AI capabilities are often concentrated in a few corporations, leading to a lack of accountability over critical digital infrastructures. Who actually controls the services we now all rely on?

In my view, AI can be two sides of the same coin. While I’ve talked about its potential for harm, it can also help. For instance, I’m currently researching whether Generative AI can identify manipulations in websites and apps, supporting businesses and regulators in protecting customers. In a recent project, I also explored the potential of AI to help customers understand tricky details about their finances.

MDS: Much of the discussion around digital rights focuses on the balance between freedom and regulation. Looking to the future, do you think it’s possible to achieve meaningful protection of digital rights while maintaining freedom of information and expression online – or are these goals inherently incompatible?

RW: I don’t think these goals are incompatible, though there are clear tensions. Freedom of information and expression are not freedoms to harm. Likewise, digital rights should not simply be the right to an overprotective, over-censored, and bland digital experience.

These discussions and nuances constantly play out in the real world. In the online world, however, anonymity and the ability to collate similar opinions into a rapid and collective voice can polarise the debates. I do feel that as many of our services move quickly online – a trend that has accelerated since the pandemic – more can, and should, be done to protect the now vast range of internet users.

We are unfortunately not always safe in the real world either, despite our right to be so. In the real world, services look after us if something bad happens. We need to extend these services into the online sphere too.

