Notice: This paper was human reviewed and human edited. Research and development was aided by AI tools for transcription, summarization, and drafting support. Feel free to counter any claims or findings. If something is incorrect, please provide feedback. Reach us at contact@biasware.com.

Introduction

Artificial Intelligence is often hailed as a global revolution, but in reality its benefits and influence are unevenly distributed[1]. A growing chorus of critics warns that AI is reproducing old patterns of colonial inequality in a digital form. Algorithmic colonialism refers to the way AI systems and digital platforms reinforce the dominance of wealthy, predominantly Western nations by marginalizing the voices, data, and knowledge of less powerful regions. From the datasets that teach AI how to “think,” to the platforms controlling information flow, the Global South finds itself largely on the sidelines. This report delves into how AI, intentionally or not, can mirror historical colonial dynamics – and what can be done to create a more inclusive AI future.

Defining Algorithmic Colonialism

At its core, algorithmic colonialism is the digital-age continuation of historical power imbalances. It describes a process where dominant tech systems from wealthy (often formerly colonial) nations are imposed on less powerful communities, leading to new forms of dependency and control[2]. Just as empires once extracted resources and imposed their rule overseas, today’s tech giants extract data and deploy algorithms worldwide under the guise of innovation. The term builds on concepts like “digital colonialism” and “data colonialism,” which highlight how corporations treat personal data as a resource to be mined much like colonial powers seized land and gold[3][4].

This phenomenon was brought into focus by researchers in the late 2010s. For example, Abeba Birhane, an Ethiopian scholar, describes “colonialism in the age of AI” as manifesting through “state-of-the-art algorithms” and “AI-driven solutions” that are unsuited to local problems, ultimately leaving regions like Africa dependent on Western software and infrastructure[5]. In other words, Silicon Valley’s technologies are not neutral exports; they often come with embedded Western values and assumptions. The origins of the concept trace to the realization that the AI boom was largely centered in North America, Europe, and East Asia, and that the rest of the world was becoming a passive consumer of algorithms built elsewhere. Scholars began drawing parallels between data extraction and colonial extraction – Shoshana Zuboff famously observed that tech companies treat our personal behaviors and culture as “raw material free for the taking,” much as colonial powers claimed lands and peoples[3]. Algorithmic colonialism thus frames modern AI as part of a continuum of domination, repackaged in digital form.

AI Training Data Bias: Geographic and Cultural Imbalances

AI systems are only as good – and as fair – as the data they are trained on. Here lies a fundamental imbalance: the vast majority of AI training data comes from wealthy, English-speaking regions, while voices from the Global South are grossly underrepresented[6]. The internet – which provides the bulk of data for training large language models and other AI – is dominated by content from the Global North. English is by far the most common language online, accounting for roughly 60% of content on the web, even though only about 16% of people speak it[7]. By contrast, languages spoken by huge populations have a tiny online footprint (for example, Chinese speakers are 14% of the world, but Chinese appears in only ~1.4% of top websites[7]). This stark disparity between the languages people speak and the content available online feeds directly into AI datasets.

English dominates the web, far out of proportion to its share of world population. This means AI trained on internet data “learns” a worldview heavily skewed toward Western and Anglophone content. A recent analysis noted that less than 1% of the data used to train large AI models comes from African or Southeast Asian languages, even though billions speak them[8]. As a result, these models struggle with languages like Swahili, Hindi, Tamil or Lao, and often fail to grasp non-Western contexts[9][10]. Meanwhile, whole cultures and knowledge systems that aren’t digitized in Western-centric datasets are effectively invisible to AI. Non-Western ways of thinking, storytelling, and problem-solving are underrepresented, leading to what one writer calls “cultural erasure” – if something isn’t in English or on Wikipedia, it might as well not exist to an AI model[11].
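
To make the scale of this imbalance concrete, the short sketch below computes a simple "representation ratio" for a language: its share of web content divided by its share of world speakers. The figures are the approximate percentages quoted above, used purely for illustration rather than as a new measurement.

```python
# Illustrative arithmetic only: approximate shares quoted in this section.
# web_share = fraction of (top-website) web content in the language
# speakers  = fraction of the world's population that speaks the language
web_share = {"English": 0.60, "Chinese": 0.014}
speakers = {"English": 0.16, "Chinese": 0.14}

for lang in web_share:
    ratio = web_share[lang] / speakers[lang]
    status = "over-represented online" if ratio > 1 else "under-represented online"
    print(f"{lang}: representation ratio = {ratio:.2f} ({status})")

# English comes out near 3.75 (roughly 4x over-represented);
# Chinese comes out near 0.10 (roughly 10x under-represented).
```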

Several structural factors drive this bias. First, under-digitization in parts of the Global South means less local content is available online. Indigenous knowledge or local literature often remains offline or in languages not parsed by web crawlers. Second, data gathering efforts by big tech tend to prioritize rich data sources (like English-language social media, Western news archives, etc.) for convenience and profit. The irony, as critics note, is that data about the Global South is often extracted by Western companies (through smartphones, social media, etc.), yet the benefits of that data – from advertising revenue to AI advancements – flow back to the Global North[12]. This one-way data flow echoes colonial patterns of resource extraction: an AI form of “draining away” value. In sum, the knowledge imbalance in training data ensures that AI systems speak with a predominantly Western accent, literally and metaphorically.

How Bias Manifests in AI Systems

These training data disparities aren’t just abstract statistics – they have real-world effects on AI behavior. When AI systems trained on one part of the world are applied globally, they often misfire or discriminate. Below are a few prominent examples of how algorithmic bias tied to data imbalance shows up in AI applications:

  • Biased Language Models: Large language models (like chatbots and translation tools) perform impressively in English, but flounder in many non-Western languages. They may produce awkward or inaccurate translations, struggle with local idioms, or default to Western examples. For instance, an editorial in Nature Machine Intelligence notes that GPT-style models “trained mainly on English and Western culture-centric data” perform poorly in languages like Hindi or Xhosa and miss cultural nuances entirely. This means users in those language communities get inferior AI services – an English query might yield a detailed answer, while a Swahili query gets confusion or silence. Such models can also inadvertently amplify Western perspectives as “universal,” overlooking context from the Global South in everything from medical advice to historical information.

  • Facial Recognition & Imagery: AI vision systems have shown striking biases when confronted with faces or scenes outside the datasets they were taught on. The landmark “Gender Shades” study by Joy Buolamwini and Timnit Gebru found that commercial facial analysis systems, trained mostly on images of light-skinned Westerners, misclassified the gender of darker-skinned women with error rates of up to roughly 34%, versus under 1% for light-skinned men. In other words, the systems were dramatically less reliable for an entire group of people. This bias isn’t just about accuracy – it’s about safety and rights: related face recognition failures have led to false arrests when police tech misidentifies people of color. Likewise, image recognition AI might label non-Western cultural attire or environments incorrectly (for example, seeing an Indigenous dress as a “costume” or a rural farming scene as “wilderness”) because the model was never taught about those contexts. The problem stems from Eurocentric training images, resulting in AI that literally doesn’t see people from certain backgrounds clearly. (A minimal sketch of the kind of per-group evaluation used to expose such gaps appears below.)

  • Content Moderation Failures: Social media and content platforms use AI filters to detect hate speech, misinformation, or violence – but these filters work unevenly across languages. Algorithms attuned to English may completely miss dangerous content written in, say, Amharic or Burmese. A tragic case occurred in Myanmar, where Facebook’s lack of Burmese-language moderation enabled hate speech calling for violence against the Rohingya minority to spread unchecked. Researchers and even the U.N. found that Facebook’s AI (and human) moderation was essentially blind to local-language incitement, contributing to a genocide in 2017. This exemplifies how a one-size-fits-all algorithm, designed in California, failed a community due to linguistic and cultural ignorance. Beyond extreme cases like Myanmar, everyday content moderation tends to reflect the cultural norms of Silicon Valley – often mistakenly flagging posts by users in the Global South as “offensive” or “misinformation” because the AI lacks the local context to interpret slang, satire, or political discourse. The result is either harmful content slipping through in under-served regions, or harmless content being censored, both of which silence marginalized voices.

These examples underscore a pattern: when AI’s knowledge is lopsided, its mistakes are lopsided too. The technology ends up serving some groups better than others, which in turn deepens digital inequality.
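
A common way to surface lopsided mistakes like these, whether across skin-type and gender groups in facial analysis or across languages in content moderation, is disaggregated evaluation: measuring error rates separately for each subgroup instead of reporting one aggregate accuracy. The sketch below illustrates the idea; the records, group labels, and numbers are invented placeholders, not results from any study cited here.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, true_label, predicted_label).
# In a real audit these would come from a labeled benchmark, e.g. a
# demographically balanced face dataset or a multilingual moderation test set.
records = [
    ("darker-skinned women", 1, 0),
    ("darker-skinned women", 1, 1),
    ("lighter-skinned men", 1, 1),
    ("lighter-skinned men", 1, 1),
    ("Burmese-language posts", 1, 0),
    ("English-language posts", 1, 1),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

# Report the error rate per subgroup rather than one overall number.
for group, n in totals.items():
    print(f"{group}: error rate = {errors[group] / n:.0%} ({errors[group]}/{n})")
```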

Voices from the Global South

Critiques of algorithmic colonialism are not just coming from Western academia; scholars and activists from marginalized regions are leading the call for change. Their perspectives highlight lived experiences of AI’s imbalance and propose visions for a more inclusive tech future:

  • African Perspectives: African researchers have been outspoken about how AI solutions from Silicon Valley often ignore local realities. Abeba Birhane describes how Western tech firms roll out “AI-driven solutions” in Africa – like population mapping or fintech apps – that don’t actually solve African problems and instead foster dependency on foreign systems. As she notes, much of Africa’s digital infrastructure (from social media to ride-sharing) is controlled by Western monopolies, which frame their services as “connecting” or “saving” Africa while extracting data and profits [19][20]. This dynamic echoes the colonial mentality of “we know what’s best for you.” Other African voices, such as members of the Deep Learning Indaba (a pan-African AI community) and Masakhane (an open-source African language AI project), stress the importance of AI by Africans, for Africans. They argue that local context and languages must be central: for example, Masakhane’s researchers are creating translation models for dozens of African languages and benchmarks to test AI performance on African-context tasks[21][22].

  • Indigenous and Under-represented Communities: Indigenous scholars warn that AI can perpetrate “epistemological violence” – a stripping away or misrepresentation of their ways of knowing [23]. Indigenous data (traditional knowledge, cultural expressions, even genetic or geographic data) is at risk of being appropriated by algorithms without consent or understanding. A study in Cameroon introduced the concept of algorithmic colonialism in the context of Indigenous data sovereignty, showing how tech projects were mining local knowledge (for instance, using images of sacred artifacts or recordings of tribal songs in training data) and commodifying it without regard for its cultural significance [24][25]. The authors documented how Western-designed systems, from health apps to mapping tools, often fail to accommodate Indigenous worldviews – effectively forcing communities to fit into Western data categories or be ignored [26]. In response, Indigenous activists and researchers are advocating for “decolonial AI” frameworks that put their communities in control of their data and AI use. This includes demanding ethical data collection (with informed consent and respect for cultural context) and developing AI that serves community-defined goals rather than corporate profit[27][28]. The message from marginalized communities is clear: “Nothing about us without us” – AI should not be another way outsiders speak for them or about them, but a tool they have a say in shaping.

Risks and Consequences of the Imbalance

Why does it matter if AI systems are skewed toward one part of the world’s data and values? The stakes are high: unchecked algorithmic colonialism threatens to widen global inequalities and erode cultural diversity. Some key risks and consequences include:

  • Policy Blind Spots: As governments and organizations increasingly rely on AI for decision-making (from welfare distribution to policing and healthcare), algorithms trained on foreign or biased data can lead to harmful blind spots. For example, a predictive policing AI developed in a Western city might misidentify crime patterns in an African or Latin American city because it doesn’t understand local social dynamics – possibly leading to wrongful targeting of certain neighborhoods. In public health, AI models might overlook diseases or risk factors prevalent in tropical regions if the training data was mostly European. These blind spots mean that policies guided by AI could ignore or misjudge the needs of the Global South, reinforcing misallocation of resources or ineffective interventions. It’s a new form of “one-size-fits-all” policy approach, reminiscent of colonial-era policies that failed to account for local contexts.

  • Digital Dependency and Monopolies: When only a handful of countries (and companies) design the world’s AI, everyone else becomes dependent on foreign technology. This dependency can stifle local innovation – why develop a local solution if a Silicon Valley product is imposed as the standard? As Birhane noted, Africa’s tech ecosystem can become reliant on Western infrastructure, leaving little room for homegrown alternatives [29][20]. Moreover, the concentration of AI power in a few big tech firms creates a monopoly-like scenario on a global scale. Countries without their own AI capabilities may have to accept the terms and worldview embedded in imported systems. This digital dependence is risky: it can lead to economic and security vulnerabilities (imagine if critical sectors like finance or energy rely on black-box AI from abroad), and it perpetuates a colonial dynamic of “producer vs consumer” states in technology.

  • Cultural Erasure and Language Extinction: As AI and digital content shape what knowledge is accessible, there’s a real fear of cultural erosion. When younger generations grow up with AI assistants, search engines, and media feeds that recognize only dominant languages and narratives, marginalized cultures may be further pushed aside. UNESCO estimates that many minority languages are at risk of extinction this century – and the digital realm is both a battleground and a lifeline for them. If AI only “speaks” English (or a few major tongues), speakers of Indigenous or lesser-known languages might shift to those languages online, accelerating the loss of their mother tongues. Additionally, AI-generated content could flood the internet with Western-centric stories and imagery, drowning out local content. This results in a subtle form of cultural imperialism: the homogenization of knowledge. As one analysis put it, “in a digital-first world limited to English content, there is a real risk of cultural erasure, with younger generations growing up less aware of local history and culture.” [30] In short, algorithmic bias doesn’t just harm accuracy – it can chip away at the rich tapestry of human languages and traditions.

  • Reinforcing Global Inequality: Perhaps the broadest consequence is that AI could entrench the divide between the tech-rich and tech-poor regions. If current trends continue, AI will augment productivity and growth in the countries that lead in AI, while others lag further behind. Wealthy nations will develop AI policies and standards to their advantage, potentially exporting them to less powerful ones. This could mirror the colonial economy of old: a flow of data and AI services replaces the flow of raw materials, but the power imbalance remains. The Global South could become merely a data source and market for AI, rather than an equal innovator. The voices and problems that AI addresses would skew toward those of the Global North. Such inequitable development risks deepening socioeconomic gaps – from who reaps AI’s economic rewards to who bears its surveillance and labor downsides. As a sobering warning, experts note we risk a future where “global power [is] further concentrated in a handful of companies and countries” and AI systems “can’t understand or serve most of the world’s population.”[31]

In summary, algorithmic colonialism is not just about representation in datasets – it translates into real consequences for knowledge, governance, and self-determination in the digital age. The imbalance, if unaddressed, could solidify a two-tier world: one where AI works primarily for the historically privileged, while others are left with its flaws and without a voice in its direction.

Towards Inclusive AI: Pathways and Solutions

While the challenges are immense, they are not insurmountable. A growing movement of researchers, policymakers, and communities around the world is seeking to “decolonize” AI and make it more inclusive. Here we outline several strategies and frameworks that can help rebalance global AI development and ensure that diverse voices and knowledge systems are respected:

  • Diversify and Decolonize Data: The phrase “garbage in, garbage out” applies to bias – if our training data is one-sided, outcomes will be too. To change this, we need ethical, inclusive data collection efforts that actively involve underrepresented communities. This means creating datasets that include voices, languages, and perspectives from the Global South and beyond. For instance, grassroots projects are digitizing folklore, oral histories, and literature from marginalized cultures (with community consent) so that these can feed into AI models. Researchers are calling for “open datasets in local languages, histories, and cultural contexts” as a way to decolonize AI knowledge[32]. Crucially, data practices must shift from extraction to partnership: rather than Western companies scraping online content indiscriminately, there should be collaboration and compensation when using local data. Ideas like Indigenous data sovereignty give a blueprint – letting communities set terms on how their data is used, ensuring they benefit from any AI built on it[27][28]. By expanding what data is valued as “AI-worthy,” we can teach algorithms about the whole world, not just a slice of it.

  • Multilingual and Cultural AI: Building AI that speaks many languages and respects cultural nuances is both a technical and social imperative. On the technical side, this involves investing in multilingual AI models and language tools for low-resource languages. Promising examples include BLOOM, a large multilingual language model trained on 46 languages (including African and Southeast Asian ones), released openly to encourage wider use[32]. Likewise, translation communities like Masakhane are creating state-of-the-art NLP (Natural Language Processing) for Swahili, Amharic, Yoruba, and more. Meanwhile, benchmarks like AfroBench have been developed to evaluate how well AI performs on tasks in 64 African languages, revealing significant gaps that researchers are now addressing[21]. Beyond language, culture-aware AI means involving local experts in the design of algorithms. This could be as simple as content recommendation systems that understand regional preferences, or as critical as healthcare AI trained on local medical data (to recognize, say, genetic traits or disease patterns in South Asian or African populations that a U.S.-trained model might miss). Multilingual, culturally adapted AI not only serves users better – it also acts as a preservation tool for languages and knowledge, by necessitating the documentation and computational modeling of those rich traditions. (A brief sketch of probing one such open multilingual model appears after this list.)

  • Empower Local AI Ecosystems: To truly break the pattern of algorithmic colonialism, countries and communities need the capacity to develop their own AI solutions. This calls for investing in local AI ecosystems – the people, infrastructure, and institutions that enable AI innovation outside the usual hubs. One aspect is closing the “compute divide”: currently, the vast majority of cutting-edge AI hardware (data centers, supercomputers) is in the US, Europe, and China[1][33]. Initiatives like Africa’s AI Research Cloud – a regional supercomputing network in Rwanda and Kenya – are a step toward giving researchers in Africa the tools to train and run large models at home[34]. Similarly, governments and industry in the Global South can establish regional AI hubs or centers of excellence to provide shared resources. Another aspect is talent and education. We need a pipeline of AI developers and researchers from all regions, which means supporting STEM education, funding scholarships and research labs, and mentorship programs that pair experts with local students[35]. Efforts like the annual Deep Learning Indaba gathering in Africa help nurture local expertise and stem the brain drain that often pulls talent to Silicon Valley. The open-source movement also plays a role: using and contributing to open AI tools (like TensorFlow, PyTorch, Hugging Face) lowers barriers to entry so that a student in Phnom Penh or Nairobi can experiment at the cutting edge with minimal cost[36]. Ultimately, an empowered local ecosystem means the community can identify its own problems and build AI to solve them – whether it’s a crop disease detector for local farmers or a dialect-specific chatbot for mental health support.

  • Ethical Frameworks and Policy: Just as past eras required new laws to protect colonies or promote independence, the AI era needs policy frameworks to guard against digital exploitation. Countries in the Global South are starting to draft data protection and AI ethics laws that assert their digital sovereignty. For example, some are considering laws to require that data collected within the country be stored on local servers, and that companies be transparent about how they use it. Such data sovereignty laws could prevent unregulated data mining by foreign firms[37]. Another policy idea is to mandate benefit sharing: if a tech company trains a lucrative AI model on, say, Pakistani social media data, could there be a mechanism to share some revenue or technology back with the community? International organizations and coalitions of smaller nations can also push for global norms – much like climate agreements – to ensure AI doesn’t become a new colonial frontier. Ethically, incorporating principles of fairness, accountability, and transparency in AI deployment is crucial. This might mean requiring impact assessments before deploying AI in sensitive social domains, and including local stakeholders in those assessments (e.g. consulting Māori communities before rolling out an AI tool in New Zealand that could affect them). Inclusive governance – giving emerging economies and marginalized groups a seat at the table in international AI standards discussions – is likewise key to preventing the dominance of a few voices.

  • Supporting Local Content and Innovation: Lastly, an often overlooked but powerful step is fostering local content creation and startup innovation. If more content from diverse cultures is produced and put online (in native languages), it organically enriches the data that AI trains on. This can be encouraged through grants for digitizing archives, supporting Wikipedia editing in underrepresented languages, or cultural programs for digital storytelling. On the innovation side, investing in local AI startups ensures solutions are developed with on-the-ground insight. Governments and investors can create sovereign AI funds or incubators to back entrepreneurs tackling local challenges with AI[38]. For example, there are startups in India focusing on AI for agriculture in small farms, or in Uganda using AI for diagnosing diseases prevalent in East Africa – these need funding and an enabling regulatory environment. By nurturing these initiatives, countries reduce reliance on foreign “one-size” products and create tech that aligns with their societal values. Cross-border collaborations within the Global South (say, Latin American or African tech alliances) can also accelerate this, sharing best practices and data in a south-south exchange.
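
To illustrate the multilingual-AI point above, the sketch below shows how an openly released multilingual model can be probed in a low-resource language with commodity tools. It is a minimal sketch, assuming the Hugging Face transformers library (with PyTorch) is installed and using the small bigscience/bloom-560m checkpoint as a stand-in for the much larger BLOOM model; the Swahili prompt and generation settings are illustrative, not a benchmark.

```python
# Minimal sketch, assuming `pip install transformers torch` and the small open
# checkpoint "bigscience/bloom-560m" (a stand-in here for the full BLOOM model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Swahili prompt ("Artificial intelligence is ..."); the point is simply that an
# open multilingual model can be inspected in a low-resource language at low cost.
prompt = "Akili bandia ni"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whether the continuation comes out fluent or garbled is itself informative: systematic gaps of this kind are exactly what benchmarks such as AfroBench set out to measure.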

Taken together, these solutions form a multi-faceted approach to make AI development more globally inclusive and equitable. Importantly, none of these changes happen overnight – they require sustained effort, resources, and political will. But the momentum is building. A poignant concept emerging from many of these discussions is “AI by all, for all.” The idea is that AI’s benefits, governance, and knowledge creation should be a shared endeavor, not monopolized by a few. As one coalition put it: “The future of AI should not be written in just a few languages or powered by just a few data centers. It must be co-authored by all of us.”[39] In practical terms, that means an AI future where a researcher in Nairobi can train a cutting-edge model on African data using local cloud infrastructure, or a student in Bolivia can access an AI education in Spanish and build solutions for her community[40].

Such a future is ambitious but not impossible. Ensuring AI serves the many and not just the few is arguably one of the grand challenges of our time. It demands reimagining everything from how we collect data to how we collaborate across borders. By recognizing and tackling algorithmic colonialism, we take a step toward an AI revolution that genuinely includes the diverse communities of our world – preserving the rich mosaic of human knowledge and making technology a tool of empowerment rather than domination. The stakes are nothing less than the fairness of our digital tomorrow, and the cultural heritage that tomorrow will carry forward.

In summary, algorithmic colonialism shines a light on the hidden inequalities in today’s AI landscape. AI systems, trained predominantly on Western data and values, risk reinforcing global power imbalances – much like colonial empires did in the past. The underrepresentation of Global South languages, cultures, and perspectives in training datasets has tangible consequences: biased applications, cultural erasure, and a widening digital divide. However, awareness of this problem is the first step towards change. Around the world, voices from marginalized regions are demanding a more inclusive AI – one that respects local knowledge and gives equal worth to every language and culture. By diversifying data, fostering local AI development, and enacting fair policies, we can begin to decolonize AI. The goal is a future where AI truly serves all of humanity, not just a privileged few. In the making of that future, inclusivity is a form of safety and strength: an AI trained on the breadth of human experience is more robust, more just, and ultimately more transformative for good[41]. As we move forward, the guiding principle is clear – AI’s story should be written by all of us, together, so that technology becomes a force that bridges cultures and empowers communities, rather than a new empire of algorithms.

Sources:

  1. Abeba Birhane (2020). “Algorithmic Colonisation of Africa.” The Elephant – Analysis piece on how Western AI solutions are imposed on Africa[42][19].

  2. Alliance for AI & Humanity (2025). “AI — West vs the Rest: Bridging the Global Divide in Artificial Intelligence.” Medium – Discusses global AI inequality and data colonialism, noting that less than 1% of training data comes from African/SE Asian languages[6][9], and outlines solutions such as open data and local AI hubs[32][34].

  3. Nature Machine Intelligence Editorial (May 2025). “Localizing AI in the global south.” – Notes that major LLMs trained on Western data perform poorly in non-Western languages[13][10] and emphasizes risk of cultural erasure if AI remains English-centric[30].

  4. Nouridin Melo (2025). “Algorithmic Colonialism and the Appropriation of Indigenous Data: Safeguarding Cultural Epistemologies in the Digital Age.” Preprint – Introduces the concept in a Cameroonian context, defining algorithmic colonialism as extraction of Indigenous data that reinforces Western dominance[28] and calling for Indigenous data sovereignty frameworks[23].

  5. Joy Buolamwini et al. (2018). “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” – Research finding facial recognition systems much less accurate for dark-skinned women, as reported by MIT News[14], highlighting bias from unbalanced training images.

  6. Daniel Zaleznik (2022). “Facebook and Genocide: How Facebook contributed to genocide in Myanmar…” Systemic Justice Journal (Harvard) – Details Facebook’s role in Myanmar’s violence due to lack of Burmese content moderation[15], exemplifying algorithmic blind spots with non-English content.

  7. Voronoi/Visual Capitalist (2023). “Most Commonly Used Languages on the Internet.” – Data visualization showing English at ~60% of web content vs. tiny shares for languages of the Global South[7], illustrating the imbalance in digital data that feeds AI.


[1] [6] [8] [9] [11] [12] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] [41] AI — West vs the Rest: Bridging the Global Divide in Artificial Intelligence, by Alliance for AI & Humanity – Medium

https://medium.com/@aaih/ai-west-vs-the-rest-bridging-the-global-divide-in-artificial-intelligence-00e6e04450ea

[2] “Algorithmic Colonialism” – term entry, Sustainability Directory

https://climate.sustainability-directory.com/term/algorithmic-colonialism/

[3] [5] [19] [20] [29] [42] Algorithmic Colonisation of Africa - The Elephant

https://www.theelephant.info/analysis/2020/08/21/algorithmic-colonisation-of-africa/

[4] Institute of Network Cultures – How to Resist Data Colonialism? Interview with Nick Couldry & Ulises Mejias

https://networkcultures.org/blog/2022/12/13/how-to-resist-against-data-colonialism-interview-with-nick-couldry-ulises-mejias/

[7] Visualizing the Most Used Languages on the Internet - Voronoi

https://www.voronoiapp.com/other/Visualizing-the-Most-Used-Languages-on-the-Internet-102

[10] [13] [21] [22] [30] Localizing AI in the global south – Nature Machine Intelligence

https://www.nature.com/articles/s42256-025-01057-z

[14] Study finds gender and skin-type bias in commercial artificial-intelligence systems – MIT News, Massachusetts Institute of Technology

https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

[15] [16] [17] Facebook and Genocide: How Facebook contributed to genocide in Myanmar and why it will not be held accountable - Harvard Law School Systemic Justice Project

https://systemicjustice.org/article/facebook-and-genocide-how-facebook-contributed-to-genocide-in-myanmar-and-why-it-will-not-be-held-accountable/

[18] One-size-fits-all content moderation fails the Global South

https://cis.cornell.edu/one-size-fits-all-content-moderation-fails-global-south

[23] [24] [25] [26] [27] [28] Algorithmic Colonialism and the Appropriation of Indigenous Data: Safeguarding Cultural Epistemologies in the Digital Age (v1) – Preprints.org

https://www.preprints.org/manuscript/202505.1403/v1