
Situating the Global South in AI and Digital Border Governance

By Kavya Narayanan


Border infrastructure worldwide has become increasingly reliant on machine learning, big data, automated decision-making, predictive analytics and digital technologies. Advanced AI tools are now deployed across the entire migration cycle: from profiling and identifying people on the move, to predicting migration flows and surveilling border crossing points. While these technologies are ostensibly deployed to solve border management challenges, they raise significant concerns around transparency, data protection, and risks to fundamental rights.


Then there is the question of inequality: AI development exacerbates digital colonialism through an international division of labour that extracts value from the majority world in favour of Big Tech. While supply chains for AI development continue to extract resources from the Global South, surveillance infrastructure built on AI is then used directly to prevent the inflow of vulnerable populations from these regions into high-resource destination countries. Major companies already outsource data labelling and content moderation to India, Kenya, and the Philippines, where labourers face unstable, low-paid employment while remaining invisible in development narratives. Resource extraction and AI data centres damage ecosystems already impacted by climate change.


Inclusion efforts remain procedural and are governed by Western, Eurocentric frameworks that fail to address power imbalances and other critical issues like digital sovereignty, infrastructural monopolies, supply chain harms, labour rights, and environmental costs. Meanwhile, although legal protections and human rights safeguards for people on the move exist within high-resource destination countries, Global South countries continue to lag behind in creating such safeguards, even as they disproportionately bear the harms of AI and border surveillance infrastructure.


The Global South is therefore increasingly vulnerable to the impacts of these developments in AI and border infrastructure: not only does it remain marginalised in the development of these systems and in the shaping of governance norms around them, it also faces information and regulatory vacuums around how AI is deployed at its own borders, all while bearing disproportionate risks from AI-enabled border control systems.


The following sections show how these vacuums, in both the design and the governance of these systems, are shaped through multiple pathways, leaving the Global South especially vulnerable to their impacts.


Unchecked surveillance and the 'black box' of national security


National security operates as an exception, enabling unchecked surveillance beyond democratic oversight. Post-9/11 legislation like the USA PATRIOT Act disproportionately affected Muslim-Americans and Arab-Americans while undermining privacy rights, and enabled torture ('enhanced interrogation') and the indefinite detention of 'enemy combatants' without due process under the guise of security. These measures have primarily affected people of colour, and the introduction of AI/ML and automated decision-making (ADM) risks embedding the same human biases within the digital realm.


Within the Global South, surveillance concerns further complicate digital media use among refugees facing prolonged asylum and resettlement processes. For Sri Lankan Tamil refugees in Indian camps, fears of government surveillance, coupled with skepticism about the peace process, impede return even though official hostilities have ceased. In India, Operation Sindoor has enabled the detention and covert deportation of over 2,000 individuals suspected of being undocumented Bangladeshi immigrants, reportedly carried out without any judicial oversight or deportation orders, raising grave questions about human rights in a global context of unchecked surveillance within the black box of national security.


Critically, the Global South faces a dual challenge: surveillance technologies are predominantly supplied by Global North nations, creating new forms of exploitation and dependency; and comprehensive regulatory frameworks and governance structures for these systems are largely absent within Global South countries, leaving populations exposed without adequate legal protections.


Algorithmic Control Beyond Legal and Regulatory Reach 


Algorithmic risk assessments for visa processing and immigration detention worldwide have become increasingly punitive, creating harmful feedback loops while evading court oversight. Since 2013, ICE has used its Risk Classification Assessment tool to determine whether immigrants are detained without bond, detained with bond eligibility, or released under community supervision. A recent lawsuit claims ICE rigged the software, creating a 'secret no-release policy' for suspected immigration violators: the system could only recommend detention or refer cases to ICE supervisors, who allegedly almost never ordered releases.
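
To make the stakes concrete, here is a minimal, hypothetical sketch in Python of how a 'no-release' policy can be hard-wired into ADM logic. The function name, labels and threshold are all invented for illustration; this is not the actual system's code.

```python
# Hypothetical sketch of a "no-release" decision function.
# Names and thresholds are invented; this is NOT the actual system.

def classify_detainee(risk_score: float, review_threshold: float = 0.3) -> str:
    """Map a risk score to a custody recommendation.

    Note the action space: 'release' is not a reachable outcome,
    so every case ends in detention or a human referral.
    """
    if risk_score >= review_threshold:
        return "detain"
    # Even the lowest-risk cases are referred, never released outright.
    return "refer_to_supervisor"

# Every input, however low the risk, yields detention or referral:
for score in (0.05, 0.4, 0.9):
    print(score, "->", classify_detainee(score))
```

Whatever the risk score, the only reachable outcomes are detention or a referral to officials. This is how a policy choice can masquerade as an algorithmic one.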


Deepening algorithmic profiling trends particularly affect the Global South


India is currently witnessing a push for tech-driven border security with the introduction of the Comprehensive Integrated Border Management System (CIBMS), an especially concerning development given the opacity of national security decision-making and the absence of legal and civil protections to challenge misuse. Similarly, the Delhi Police's Crime Mapping, Analytics and Predictive System (CMAPS) flags 'criminal hotspots', while sentiment analysis software used by the Mumbai and Uttar Pradesh Police scans social media for disturbance alerts. These systems expand policing beyond traditional jurisdictions, moving from targeted, suspicion-driven approaches to programmatic, ubiquitous surveillance triggered by algorithmic thresholds.
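
The phrase 'triggered by algorithmic thresholds' is worth unpacking. Below is a hedged sketch of what such a pipeline might look like; the keywords, scoring function and threshold are placeholder assumptions, not any police force's actual system.

```python
# Placeholder sketch of threshold-triggered "sentiment alert" scanning.
# The keywords, scorer and threshold are assumptions, not a real system.

from typing import Iterable

ALERT_THRESHOLD = 0.5  # a tunable policy knob, not a published value
FLAGGED_WORDS = {"protest", "riot", "strike"}

def toy_negative_sentiment(post: str) -> float:
    """Stand-in scorer: fraction of flagged keywords in a post."""
    words = post.lower().split()
    return sum(w in FLAGGED_WORDS for w in words) / max(len(words), 1)

def scan_stream(posts: Iterable[str]) -> list[str]:
    """Flag every post crossing the threshold: no individual suspicion,
    just a programmatic sweep of the entire stream."""
    return [p for p in posts if toy_negative_sentiment(p) >= ALERT_THRESHOLD]

posts = ["weekend market photos", "strike and protest downtown"]
print(scan_stream(posts))  # ['strike and protest downtown']
```

The point is structural: no individual is ever suspected. Every post in the stream is scored, and a tunable number decides who gets flagged.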


Biometric requirements pose further significant risks to vulnerable populations: by enabling persistent, accurate identification, they can make individuals easier to target for abuse. This particularly affects Rohingya populations in India, Bangladesh and Myanmar. Rohingya have often relied on documentation obtained through smuggler networks to board planes and seek safety, an ability foreclosed by global biometric ID systems that require people to prove identity with their bodies.


In India, while policy documents recognise AI risks as technological failures addressable through technical standards, there is little information about how AI is currently deployed, and scant structural accountability or legal frameworks for its governance. ADM development occurs in regulatory vacuums where transparency and democratic control receive inadequate attention. In the absence of strong data protection frameworks, people on the move remain vulnerable to systematic profiling and unchecked use of their data.


Addressing these risks and considerations requires institutional capacity and regulatory frameworks absent in many Global South contexts. In countries like India, data protections are still nascent: the recently introduced Digital Personal Data Protection (DPDP) Act neither addresses immigration directly nor clearly defines limitations or use cases for data in migration management.


The central question under these circumstances is one of transparency, especially pertinent in the Global South given the absence of formal legislation and safeguards: how do we safeguard transparency against the dual challenges of unscrupulous actors and undefined legal or human rights protections?


The path to better digital border governance in the Global South


Experts suggest that while AI is being designed in support of State migration management objectives, the interests and voices of people on the move have generally not been included in the design, decision-making, and implementation stages. Addressing these challenges requires decolonising AI governance and strengthening institutional frameworks. Global South actors can challenge exclusionary mechanisms, provide contextualised interpretations of risk, and foster alternative governance models based on solidarity and resistance. To achieve this, civil society and multilateral institutions must work together to build and amplify alternative models of governance and consultation that are effective in these contexts.


In parallel, AI exacerbates existing national security profiling based on race, ethnicity, religion, and national origin. The lack of legal protection for refugees and asylum-seekers in these contexts indicates a growing need for multilateral intervention and support from humanitarian groups to counter unchecked state power over vulnerable populations. We need strong measures that hold states accountable and increase the transparency of the border security measures in use. States must be assessed on their capacity to implement data-related safeguards for AI governance.


AI systems reproduce and scale human bias through flaws in training data, model design, and decision-making logic. It has become apparent that legal and policy safeguards are insufficient to prevent or redress these harms. A commitment to non-discrimination must therefore consider not only data inputs, but also the social, economic, political and historical contexts in which data are collected and digital technologies are designed, produced, and put to work. Focus must shift to tracking and addressing the outcomes of these technologies over the long term.
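
Tracking outcomes, rather than only auditing inputs, can be as simple as comparing error rates across groups. Here is a minimal sketch using a synthetic dataset and a standard disparate impact metric (false-positive rate per group); none of it reflects any real deployment.

```python
# Synthetic outcome audit: compare false-positive rates of a screening
# classifier across groups. Records are invented for illustration.

from collections import defaultdict

# (group, model_flagged, actually_a_threat)
records = [
    ("A", True, False), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", False, False), ("B", True, False),
]

false_pos = defaultdict(int)  # wrongly flagged, per group
negatives = defaultdict(int)  # all genuinely non-threat cases, per group

for group, flagged, threat in records:
    if not threat:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false-positive rate {rate:.2f}")
# A wide gap between groups signals disparate impact that input-level
# checks alone would not reveal.
```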


Building human rights capacity: data stewardship as a means to involve communities in decision-making


The rapid expansion of the data economy, driven by advances in AI, raises serious questions for governments, businesses and the public: who has access to data, who decides what that data is used for, and who ultimately realises its value? Increasing attention is being paid to the idea that those who contribute to generating data should have some rights over how it is used. The answers lie in the systems we put in place to limit the misuse of data, preserve people's privacy and hold those causing harm to account.


Data stewardship, particularly models like data trusts and data cooperatives, has recently gained significant traction among researchers, policymakers and practitioners alike. Participatory data stewardship describes a spectrum of participation through which communities can be involved in decision-making throughout the data lifecycle: from collection through processing, storage and sharing, to eventual deletion. However, multilateral and humanitarian support is critical to align political will, the policy environment, and international agreements towards an enabling environment for data stewardship for refugees and migrants at scale.
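
What might lifecycle-wide participation look like in practice? A hedged illustration follows: all names and structures are assumptions, not a standard or deployed schema. A steward records consent per lifecycle stage and purpose, then checks every requested use against it.

```python
# Illustrative consent ledger for a data steward. All names and fields
# are assumptions, not a standard or deployed schema.

from dataclasses import dataclass, field

LIFECYCLE_STAGES = ("collection", "processing", "storage", "sharing", "deletion")

@dataclass
class ConsentRecord:
    subject_id: str
    permitted: dict[str, set[str]] = field(default_factory=dict)  # stage -> purposes

    def grant(self, stage: str, purpose: str) -> None:
        """Record that the data subject agreed to this use at this stage."""
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.permitted.setdefault(stage, set()).add(purpose)

    def is_allowed(self, stage: str, purpose: str) -> bool:
        """The steward checks every requested use against recorded consent."""
        return purpose in self.permitted.get(stage, set())

record = ConsentRecord("subject-001")
record.grant("processing", "asylum-claim-support")
print(record.is_allowed("processing", "asylum-claim-support"))  # True
print(record.is_allowed("sharing", "border-enforcement"))       # False
```

The design choice worth noting is purpose limitation: consent is never a single yes/no but is scoped to a stage and a purpose, so a use that was never agreed to fails the check by default.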


We need to direct support to civil society organisations that work with migrants and refugees so they can evolve models of data stewardship as an extension of their other rights-based work. A network of organisations taking on additional functions as data stewards could ensure people on the move are aware of their data rights and of how data about them is being used by states deploying AI border surveillance. Recognising data rights as a component of human rights is critical to ensuring people on the move are adequately safeguarded from biases in AI training data and have reliable support, in the form of data stewards, to navigate surveillance infrastructure.


In the absence of robust legal and regulatory frameworks to hold states and private actors accountable for harms in Global South contexts, it is vital to enhance community participation in data collection and sharing through data stewardship models, implemented as self-regulatory mechanisms to improve accountability and transparency. For vulnerable populations dependent on connectivity and digital media for their safety, raising awareness of how data about them can be misused is critical. As independent, third-party actors, data stewards can serve as intermediaries that protect people on the move from State malpractice in the absence of legal redress.





 
 
 
