Welcome

Welcome to the official publication of the St Andrews Foreign Affairs Society. Feel free to reach out to the editors at fareview@st-andrews.ac.uk

The Digitalisation of Development

The European Union’s decision to invest 9.2 billion euros in digitalising Europe through supercomputing, artificial intelligence, cybersecurity, and advanced digital skills raises the question of how this technological growth affects the development sector. Will big data and smart technology finally close the poverty gap, feed the hungry, clean the oceans, boost developing economies, and resolve long-standing inequalities?

For the past few decades, the humanitarian sector has been encouraging the use of remote and mobile technology for managing disasters, aiding conflict zones, and empowering local communities. For example, geospatial technology was used in Afghanistan to conduct battle assessments and deliver aid. It is also used in mapping displaced populations, analysing refugee camps’ structure, and exploring resources as part of refugee management tools as exemplified in the Darfur crisis. These processes involve data-mining, GPS tracking, and biometric information like fingerprinting and iris scans integrated through geospatial, telecom, and server-user databases. Such technology helps control corruption, distribute aid effectively, count populations and create an infrastructure for democratic rights like voting and ID registrations and access to health and education.

However, the adoption of such practices raises several concerns about the lack of safeguards in their design and implementation. This is especially important because the tools used by the development sector are also key instruments with which national and international security bodies conduct intelligence. Such practices have been widely criticised on the grounds that the invasive nature of the technology violates fundamental human rights such as privacy and autonomy. While development is generally perceived as ‘everyman’s road to utopia’ (Arndt 1987, p.1), this does not necessarily justify the means used to achieve its objectives. When the development sector begins to rely intensively on security intelligence practices, there is a need for stronger regulation and assessment of how this affects human rights. Three key problems derive from the digitalisation of development. Together, they convey the need to keep development outside the national security sector and to engage further in public debate about the role of technology and how it could be regulated.

When Biases Get in the Way

Technology presents innovative ways of dealing with economic problems. However, it is built on existing social inequalities. While used to close the economic gap, it reinforces social divisions because technology and artificial intelligence trace their foundations to human intelligence. The algorithms are programmed by humans, who transfer their conscious and unconscious biases to the models used in artificial intelligence. For example, ten of the biggest tech companies in Silicon Valley did not hire a single black woman in 2016. The industry creating the tools of the future is therefore dominated by white, Western men. The lack of diversity in the sector ‘teaching’ the machines reinforces existing structural limitations of race, colour, gender, class, and age.

Furthermore, the benefits of the digital era are not equally distributed, as some groups cannot connect to the global digital network. The unavailability of basic living conditions, social hierarchies, and unstable state governance are serious barriers to accessing new technology. Women, older populations, ethnic minorities, indigenous communities, and people from deprived economic areas are the most digitally marginalised. Illustratively, in developing countries women use the internet 33 per cent less than men. Therefore, when the United Nations uses biometric data to map food distribution or count populations, there is a high risk, first, that not everyone was included in the database and, second, that the algorithms assessing the set variables rely on discriminatory models. If technology becomes the new way of managing global development, we must ensure that it does not reproduce dominant colonial binaries, social hierarchies, or discriminatory biases.

A Question of Ownership: Private-Public Agreements

The West witnessed technological development through state investment in the field, from telephone infrastructure to digital surveillance methods. Only then did the private sector undertake new initiatives and expand on existing infrastructure. Developing countries are now joining a global network where smart technology, wireless devices, and 5G, among others, are created by the private sector for commercial purposes. Unlike other, non-cyber security areas in which the state plays the principal role, cybersecurity has put private actors in a position of ownership, as states rely on them to regulate security. Thus, digital technology is manufactured and distributed before regulatory state mechanisms are in place.

Surveillance technologies used by the private sector to analyse customer behaviour, create targeted advertising, and sell access to data have been strongly criticised in the West. However, similar technology is used in the Global South, where no safeguards are present. Due to its limited resources, the development sector has fostered public-private partnerships to utilise commercial technology and services for humanitarian work (e.g., UNOSAT, RESPOND).

While providing tools to manage humanitarian needs, private actors also benefit from the unregulated social space, which enables them to collect data about users and their behaviour. As a result, they have enough information to test and hone their technology. Put differently, the Global South has become a laboratory for tech companies to improve the intelligence potential of their products. Moreover, the industry promotes the argument that poor people are not concerned with matters of privacy, and that intrusive surveillance practices are therefore acceptable (Duffield 2016). This argument demonstrates that the agenda behind partnerships with development agencies and NGOs is not to help people but to make a profit. Public-private partnerships between tech companies and humanitarian bodies must be further regulated. In their current state, the private sector sets the rules at the expense of people’s rights.

The Security-Development Nexus

The surveillance technologies used by the private sector to support humanitarian work are highly similar to the security intelligence practices developed by national and international security agencies. For example, the United Nations’ Global Pulse project uses satellite information, social media mining, and existing databases to deal with crises, decrease poverty, and bring peace. Similarly, the TEMPORA programme used by the British Government Communications Headquarters (GCHQ) stored and analysed data extracted from users’ activity on platforms like Google, Facebook, Twitter, and YouTube, as well as email and phone correspondence, regardless of nationality or location. The aim is to run the gathered data through algorithms which can determine whether a user’s online behaviour implies a potential threat to national security. Cyber intelligence technology is therefore characterised by a dual use which can satisfy both the security and the development sector.

This is problematic. Once this technology is implemented on the ground through public-private partnerships, the line between security goals and development objectives is blurred. Solving humanitarian issues with tools originally intended to gain advantage in conflict or war raises serious human rights concerns. This is especially troubling when such technology is used in spaces where democratic safeguards do not exist. Development should not be portrayed as a tool of security, because doing so changes the perception of what the end goals of development projects are. Using security tools to pursue development only reinforces the discourse that the purpose of development is to achieve security and prevent security threats. It portrays development as a Western project created to guarantee the security of the West by managing the dangerous and volatile South. Thus, security can be used as a legitimate reason to establish development projects that neglect ‘good governance, protection of human rights, and sustainability’ (Wilkinson 2016, p.32).

Even though using big data to enhance humanitarian work is an innovative idea which can bring positive change to people’s experience, it should be done with appropriate and adequate safeguards. Considering that the West is still questioning the effectiveness and appropriateness of cyber technology, its unregulated use in the Global South raises the question of how standards are being set in development projects and what the agenda of development truly is.

Image courtesy of rawpixel.com via Freepik, ©2018, some rights reserved.

Additional Attributions

Arndt, Heinz Wolfgang, Economic Development: The History of an Idea (Chicago; London: University of Chicago Press, 1987).

Duffield, Mark, ‘The digital development-security nexus: linking cyber-humanitarianism and drone warfare’, in Paul Jackson (ed.), Handbook of International Security and Development (Cheltenham: Edward Elgar Publishing Limited, 2016).

Wilkinson, Cai, ‘The securitisation of development’, in Paul Jackson (ed.), Handbook of International Security and Development (Cheltenham: Edward Elgar Publishing Limited, 2016).

Hostile Strangers and Criminal Government: Greece’s Migrant Policy in 2020

Security Interfaces as Perpetuations of Coloniality
