Invisible Injustice: How AI Exploits the Marginalized and Why Regulation Must Act First

Muzeng Huang, Benenden School

· Winning Essays

Introduction

AI systems shape decisions for people worldwide in many aspects of life, from legal rulings and job opportunities to entertainment content. Unlike traditional technologies, AI creates algorithmic harm through an almost invisible process called “Discrimination 3.0,” in which bias operates subtly yet remains deeply embedded within digital platforms (Barzilay & Ben-David, 2017). This opacity makes it difficult for those harmed to understand how and why they were affected. AI can inflict significant harm on individuals, families, and society before that harm is ever detected—a gap that current reactive frameworks, such as the General Data Protection Regulation (GDPR), fail to address. Without proactive regulation, AI risks becoming a tool that amplifies existing power imbalances, deepening exploitation and societal inequality.

This essay examines how AI amplifies societal and economic biases from an individual level to a broader scale, focusing primarily on the gig economy’s gender inequalities enabled by AI algorithm integration. It argues that proactive regulation is essential to mitigate the disproportionate impact of AI on marginalized groups.

Deepening the Gig Economy’s Gender Divide

AI algorithms in the gig economy reinforce gender inequities, widening the gender wage gap and deepening occupational segregation. The gig economy is a labor market system characterized by short-term, flexible, task-based work, facilitated by platforms that connect contractors with customers (Tan et al., 2020). AI algorithms manage gig workers by matching them to customers, setting prices, and tracking performance, relying on internally collected data to optimize platform operations (World Bank, 2020; Wazani, 2024). However, the AI systems responsible for this algorithmic decision-making are often embedded with gender biases that originate in historical datasets, reducing job opportunities for women gig workers (Edwards & Veale, 2017). AI’s lack of transparency exacerbates the problem, making systemic discrimination an almost invisible process.

“Surge pricing” is one specific mechanism of discriminatory algorithmic management, rewarding “ideal workers” and penalizing “non-ideal workers” based on performance metrics (Rosenblat & Stark, 2016). The “ideal worker” is typically someone without caregiving responsibilities who can work effectively unlimited hours to meet customer demand (Williams & Segal, 2003). This creates a stark divide between male and female gig workers, as women are more likely to work fewer hours because of household, childcare, and reproductive responsibilities.

Women gig workers tend to complete fewer tasks, decline rides far from home, respond less frequently, prioritize flexibility, and avoid busier weekend and night shifts because of family responsibilities (Datta et al., 2023). In response, male-centric algorithms penalize women for having gendered responsibilities and not fitting the “ideal worker” model, reducing their algorithmic visibility and scores (Micha, Poggi, & Pereyra, 2022). This can be devastating, as the algorithmic design prioritizes operational efficiency and profit maximization when determining future management decisions. Because performance scores feed directly into earnings, the stakes are high; as one gig worker explained, “You can't risk a negative review, they are so damaging. So if you're then dropped from that algorithm, you don't show [up to customers] and you don't get invited to send proposals” (James, 2023).
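
To make this mechanism concrete, the following is a minimal, hypothetical sketch of the kind of “visibility score” described here: a ranking built entirely from facially gender-neutral metrics that nonetheless encodes the “ideal worker” assumption. The weights, field names, and numbers are invented for illustration and do not reflect any platform’s actual algorithm.

```python
# Hypothetical illustration only: a toy "visibility score" of the kind
# described above, NOT any real platform's algorithm. All weights and
# field names are invented for demonstration.
from dataclasses import dataclass


@dataclass
class WorkerStats:
    acceptance_rate: float       # share of offered jobs accepted (0-1)
    weekly_hours: float          # hours online per week
    night_weekend_hours: float   # hours worked in night/weekend peak slots
    avg_rating: float            # customer rating (1-5)


def visibility_score(w: WorkerStats) -> float:
    """Combine facially gender-neutral metrics into one ranking score.

    No input mentions gender, yet each correlates with caregiving
    constraints, so the score systematically ranks many women lower.
    """
    return (
        0.35 * w.acceptance_rate
        + 0.25 * min(w.weekly_hours / 60, 1.0)         # rewards near-unlimited hours
        + 0.25 * min(w.night_weekend_hours / 20, 1.0)  # rewards peak-slot availability
        + 0.15 * (w.avg_rating / 5)
    )


# Two workers with identical ratings but different availability profiles.
always_available = WorkerStats(0.95, 60, 20, 4.8)
caregiver_hours = WorkerStats(0.70, 30, 4, 4.8)
print(round(visibility_score(always_available), 2))  # 0.98
print(round(visibility_score(caregiver_hours), 2))   # 0.56
```

Because job offers and earnings flow from rankings like this, a worker who declines night shifts or works fewer hours for caregiving reasons is scored lower even though gender never appears as an input, which is precisely the kind of invisible disparate impact described above.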

On a macroeconomic level, these automated biases accumulate to exacerbate the gender pay gap in the gig economy across numerous countries (Read, 2022). Figure 1 highlights the resulting pay disparities among Chinese food delivery workers, whose per-order pay rate rises with the number of completed deliveries, a structure that disadvantages women, who typically complete fewer orders.

Figure 1

Food Delivery Riders’ Pay Per Order in China


Note. A bar graph showing that around 20% of workers of each gender earn more than ¥5 per order, but a higher share of women fall into the ¥5–8 range, while more men are in the ¥8–10 and above-¥10 brackets. From China Labour Bulletin (2023).

The Hidden Toll of Algorithmic Exploitation

Not only does algorithmic management widen the gender pay gap, it also creates a pattern of exploitation through a design intent of maximizing efficiency. The pressure of algorithmic management has produced a new form of unregulated work. The “just-in-time scheduling” imposed by algorithms endangers gig workers’ physical and mental health, as workers are notified of shifts only hours, rather than days, in advance. While this approach reduces labor costs for companies, it creates significant uncertainty, stress, and burnout for gig workers (Thelen, 2019).

This exploitation disproportionately disadvantages vulnerable and marginalized groups. For women gig workers, earning a living wage often requires excessive overwork, constant task management, navigating vague gig requirements, and juggling irregular schedules—all while balancing childcare duties (James, 2023). Exploitation also manifests through incentivizing risky behavior and undermining worker safety, such as working long hours or in isolated locations. Yet women gig workers who decline assignments because of caregiving responsibilities or safety concerns face algorithmic penalties that lower their monthly earnings (Schisler, 2022). This practice closely mimics unethical sweatshop labor; the alleged flexibility of gig work is exploitation by another name.

Expanding beyond gender pay inequities, this exploitation disproportionately impacts multiply marginalized gig workers, for whom overlapping factors such as gender, race, and socioeconomic status interact to create distinct experiences of oppression. For example, research on numerous ride-hailing platforms found that working-class, migrant, and single-mother drivers are more likely to be subject to harmful algorithmic control, often compromising their health and safety to make ends meet (Kwan, 2022). Combating gender bias in algorithmic management is therefore a critical gateway to alleviating the financial burdens faced by other marginalized communities.

How AI’s Hidden Prejudices Shape Society

AI-driven marginalization extends to finance, healthcare, and the judicial system. Onuoha (2018) introduced the concept of “algorithmic violence” to describe how automated decision-making systems inflict harm on individuals by shaping or blocking access to essential aspects of their lives.

In 2014, Amazon began developing an AI-based recruitment tool that favored men over women because it was trained on biased historical data. Although the tool was eventually scrapped, the example illustrates how AI algorithms not only inherit human prejudices but also magnify them at scale, limiting women’s career prospects and financial independence (Dastin, 2018). Similarly, findings indicate that algorithmic lenders often disadvantage women borrowers—despite their lower default rates compared to men—by assigning them higher interest rates and denying their loans more frequently (Cristina et al., 2023).

As these incidents of gender bias accumulate at the individual level, they form a broader pattern of systemic inequality and exploitation. “Algorithmic violence” thus represents a subtle, passive form of violence that is difficult to detect and regulate on a case-by-case basis, yet whose harms are pervasive, long-term, and ripple quietly through society.

The long-term effects of AI-driven inequality deepen as marginalized groups continue to be disempowered, diminished, and exploited under late-stage capitalism and the skewed datasets it produces. Extreme inequality carries significant costs for society across economic, social, and health-related domains. Wilkinson and Pickett (2017) link it to lower life expectancy and higher rates of infant mortality, mental illness, and HIV. They identify social evaluative threat as a major stressor in unequal societies, affecting all social classes: individuals of low socioeconomic status (SES) often suffer from low self-esteem, self-doubt, depression, and anxiety, while high-SES individuals may develop narcissistic and self-centered tendencies. High inequality also destabilizes economies, weakening innovation, institutions, and market accountability (Boushey, 2019). Ultimately, unchallenged exploitation erodes economic growth and societal well-being in the long term.

AI’s Opacity Shields Accountability

Another key reason AI requires proactive regulation is algorithmic opacity: the technical lack of transparency in how algorithms operate as a “black box,” which makes it challenging even for developers to detect and address biases (Chen, 2023). Research suggests that complete algorithmic transparency may be unattainable because of the inherent complexity of machine learning (Burrell, 2016). Companies often frame this opacity as a mere technical limitation of their products, but this rhetorical strategy is misleading and is used to absolve them of legal accountability (Tomassetti, 2016).

AI’s opaque nature is frequently used as a scapegoat when tangible harm occurs. In Mobley v. Workday, Inc., an African American job applicant accused the defendant of using discriminatory hiring software. In its defense, Workday leveraged algorithmic opacity to evade public scrutiny. The court ruled that AI opacity cannot shield companies from accountability and that they must disclose how their algorithms operate.

The case highlights that unfair AI systems should not be tolerated and that companies cannot hide behind AI’s complexity. While it was a significant breakthrough for setting future precedent, the decision was still reactive: the applicant had already experienced anti-Black discrimination. Proactive regulation is therefore essential to establish clear accountability and ensure that fairer systems are designed and implemented from the ground up.

Proactive Initiatives in Fairer AI

Despite the challenge of algorithmic opacity, there are various ways to mitigate bias as part of proactive regulation. Options include prioritizing algorithmic fairness by increasing diversity within development teams, improving input dataset quality to ensure better representation, and developing new fairness metrics that factor in intersectionality (Johnson, 2019; Katyal, 2020). Moreover, many bias mitigation strategies and tests can be adopted by companies before deployment (O'Connor & Liu, 2023).
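
As one illustration of what such pre-deployment testing could look like, the sketch below computes selection rates for intersectional subgroups and flags large gaps (a simple demographic-parity check). The data, column names, and threshold are hypothetical; a real audit would combine several fairness metrics rather than relying on one.

```python
# Minimal pre-deployment fairness check: compare positive-outcome rates
# across intersectional subgroups (e.g., gender x ethnicity).
# Hypothetical records and threshold, for illustration only.
from collections import defaultdict


def selection_rates(records, group_keys=("gender", "ethnicity"), outcome="selected"):
    """Return the positive-outcome rate for each intersectional subgroup."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        key = tuple(r[k] for k in group_keys)
        totals[key] += 1
        positives[key] += int(r[outcome])
    return {k: positives[k] / totals[k] for k in totals}


def parity_gap(rates):
    """Largest difference in selection rate between any two subgroups."""
    return max(rates.values()) - min(rates.values())


# Hypothetical screening outcomes.
records = [
    {"gender": "F", "ethnicity": "A", "selected": 1},
    {"gender": "F", "ethnicity": "A", "selected": 0},
    {"gender": "F", "ethnicity": "B", "selected": 0},
    {"gender": "F", "ethnicity": "B", "selected": 0},
    {"gender": "M", "ethnicity": "A", "selected": 1},
    {"gender": "M", "ethnicity": "A", "selected": 1},
    {"gender": "M", "ethnicity": "B", "selected": 1},
    {"gender": "M", "ethnicity": "B", "selected": 0},
]

rates = selection_rates(records)
print(rates)
if parity_gap(rates) > 0.2:  # the threshold is a policy choice, not a technical one
    print("Disparity flagged: review model before deployment")
```

A check at this level of granularity catches the multiply marginalized case discussed earlier: the gap between the worst- and best-treated subgroups can be far larger than the gap between women and men considered as whole groups.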

A successful example of bias mitigation is Microsoft’s facial recognition overhaul. Initially, the system had a 20.8% error rate when identifying darker-skinned women. After a deliberate effort to expand and diversify its training datasets and refine its algorithms, Microsoft achieved a twentyfold reduction in error rates for darker-skinned individuals and a ninefold reduction for all women (Smith, 2018). This shows that algorithmic bias is not inevitable but can be significantly reduced through intentional dataset curation and a strong organizational commitment to fairness—all made more achievable through proactive regulation.
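
For a rough sense of scale, if the twentyfold reduction applied to the group behind the 20.8% baseline, the implied post-overhaul error rate would be about 20.8% ÷ 20 ≈ 1%.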

In the gig economy, too, algorithmic management could be designed to include and empower. The Uber Ellas initiative in Argentina shifted away from a male-centric approach by giving women drivers greater control over the gender of their passengers. This proactive design choice reduced cancellations, increased trips, and improved safety, contributing to a 30% rise in women drivers in Mendoza and Buenos Aires within a year. Unlike exploitative surge-pricing controls, Uber Ellas shows that proactive, inclusive design tailored to the needs of vulnerable groups can yield positive results, and that algorithmic opacity need not be an obstacle to ethical AI design.

Proactive, Not Reactive, AI Regulation

These case studies show that current legal frameworks must become more proactive in regulating the fair design, development, and release of AI algorithms. For example, Article 22 of the GDPR grants people "the right not to be subject to solely automated decisions, including profiling" (Information Commissioner's Office, 2023). Its effectiveness, however, is limited by its reactive approach: placing the burden on affected individuals to report harm does not guarantee justice.

Moreover, the GDPR fails to adequately address Discrimination 3.0, in which algorithmic harm does not always produce an immediate “legal or similarly significant” effect that would trigger the law. Current laws also tend to target individual acts of discrimination rather than the broader systemic culture of patriarchy. Title VII of the U.S. Civil Rights Act, for example, prohibits gender-based discrimination in the workplace, but it requires proof of discriminatory intent, which is difficult to establish when discrimination is widespread and “invisible.” Katyal (2020) describes many algorithmic decisions as subconscious “nudges”: minute differences that may not immediately change behavior but accumulate over time, as discussed above.

Conclusion

Overall, proactive regulation is urgently needed to address the tangible harms that algorithmic bias inflicts on marginalized groups, especially those with intersectional identities, and the systemic inequities it perpetuates. Rather than merely responding after the damage has been done to individual livelihoods and society at large, regulators must act early to prevent those inequities from becoming entrenched. Proactive regulation can embed fairness in AI design, ensure diverse representation, and support economic growth and prosperity that is ethical and sustainable.

References

Barzilay, A. R., & Ben-David, A. (2017). Platform Inequality: Gender in the Gig-Economy. SSRN Electronic Journal, 47(393). https://doi.org/10.2139/ssrn.2995906

Booth, K. (2020, August 24). Algorithms workers can’t see are increasingly pulling the management strings. The University of Sydney. https://www.sydney.edu.au/news-opinion/news/2020/08/24/algorithms-workers-cant-see-are-increasingly-pulling-the-strings.html

Boushey, H. (2019). Unbound: How inequality constricts our economy and what we can do about it. Harvard University Press.

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1–12. https://doi.org/10.1057/s41599-023-02079-x

China Labour Bulletin. (2023). Women workers in China’s gig economy face discrimination, lower pay, unsafe conditions. https://clb.org.hk/en/content/women-workers-china%E2%80%99s-gig-economy-face-discrimination-lower-pay-unsafe-conditions

Cristina, A., Gomes, M., & Rigobon, R. (2023). Algorithmic discrimination in the credit domain: What do we know about it? AI & Society, 39, 2059–2098. https://doi.org/10.1007/s00146-023-01676-3

Dastin, J. (2018). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

Datta, N., Rong, C., Singh, S., Stinshoff, C., Iacob, N., Nigatu, N. S., Nxumalo, M., & Klimaviciute, L. (2023). Working without borders: The promise and peril of online gig work. World Bank. https://doi.org/10.1596/40066

Dokuka, S., Kapuza, A., Sverdlov, M., & Yalov, T. (2022). Women in gig economy work less in the evenings. Scientific Reports, 12(1). https://doi.org/10.1038/s41598-022-12558-x

Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a right to explanation is probably not the remedy you are looking for. SSRN Electronic Journal, 16(18). https://doi.org/10.2139/ssrn.2972855

Information Commissioner’s Office. (2023). What is the impact of article 22 of the UK GDPR on fairness? https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-is-the-impact-of-article-22-of-the-uk-gdpr-on-fairness/

James, A. (2023). Platform work‐lives in the gig economy: Recentering work–family research. Gender, Work & Organization, 31(2). https://doi.org/10.1111/gwao.13087

Johnson, K. N. (2019). Automating the risk of bias. George Washington Law Review, 87(6). Tulane Public Law Research Paper No. 19-12. https://www.gwlr.org/wp-content/uploads/2020/01/87-Geo.-Wash.-L.-Rev.-1214.pdf.

Katyal, S. K. (2020). Private accountability in an age of artificial intelligence. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 47–106). Cambridge: Cambridge University Press.

Kwan, H. (2022). Gendered precarious employment in China’s gig economy: Exploring women gig drivers’ intersectional vulnerabilities and resistances. Gender & Development, 30(3), 551–573. https://doi.org/10.1080/13552074.2022.2118464

Micha, A., Poggi, C., & Pereyra, F. (2022). When women enter male-dominated territories in the platform economy: Gender inequalities among drivers and riders in Argentina. Gender & Development, 30(3), 575–600. https://doi.org/10.1080/13552074.2022.2117931

O’Connor, S., & Liu, H. (2023). Gender bias perpetuation and mitigation in AI technologies: Challenges and opportunities. AI & Society, 39(4), 2045–2057. https://doi.org/10.1007/s00146-023-01675-4

Onuoha, M. (2018). Notes on algorithmic violence. GitHub. https://github.com/MimiOnuoha/On-Algorithmic-Violence

Read, S. (2022, August 15). Gender pay gaps far worse for women in the gig economy, says study. World Economic Forum. https://www.weforum.org/stories/2022/08/gender-pay-gap-gig-economy/

Rosenblat, A., & Stark, L. (2016). Algorithmic labor and information asymmetries: A case study of Uber’s drivers. International Journal of Communication, 10, 27. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2686227

Schisler, C. (2022, May 5). Women working in gig-economy face harassment, lack of safety support. Houston Today. https://www.houston-today.com/business/women-working-in-gig-economy-face-harassment-lack-of-safety-support-6433361

Smith, B. (2018, December 6). Facial recognition: It’s time for action. Microsoft on the Issues. https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/

Tan, Z. M., Aggarwal, N., Cowls, J., Morley, J., Taddeo, M., & Floridi, L. (2021). The ethical debate about the gig economy: A review and critical analysis. Technology in Society, 65. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3669216

Thelen, K. (2019). The American precariat: U.S. capitalism in comparative perspective. Perspectives on Politics, 17(1), 5–27. https://doi.org/10.1017/s1537592718003419

Tomassetti, J. (2016). It’s none of our business: The postindustrial corporation and the guy with a car as entrepreneur. SASE Annual Meeting. https://sase.confex.com/sase/2016am/webprogram/Paper5157.html

Wazani, K. (2024). AI, gig economy and unemployment foresight: A glance on the global economy. American Academic & Scholarly Research Journal, 16(2). https://www.aasrc.org/aasrj/index.php/aasrj/article/view/2276/1398

Wilkinson, R. G., & Pickett, K. E. (2017). The enemy between us: The psychological and social costs of inequality. European Journal of Social Psychology, 47(1), 11–24. https://doi.org/10.1002/ejsp.2275

Williams, J., & Segal, N. (2003). Beyond the maternal wall: Relief for family caregivers who are discriminated against on the job. Harvard Women's Law Journal, 26(77). https://repository.uclawsf.edu/faculty_scholarship/805/

Wood, A. J., Lehdonvirta, V., & Graham, M. (2018). Workers of the internet unite? Online freelancer organisation among remote gig economy workers in six Asian and African countries. New Technology, Work and Employment, 33(2), 95–112. https://doi.org/10.1111/ntwe.12112
