This study investigates the discriminatory impact of artificial intelligence (AI)-driven content moderation on social media platforms (SMPs), particularly in the Global South, where cultural and linguistic diversity often clashes with Western-centric AI frameworks. Platforms such as Meta increasingly rely on AI algorithms to moderate vast volumes of content, but research shows that these algorithms disproportionately restrict free expression in the Global South (European Union Agency for Fundamental Rights, 2023; De Gregorio & Stremlau, 2023). The result is both "over-removal" (the censorship of lawful content) and "slow removal" (the failure to address harmful material), which together perpetuate inequality and hinder free speech. Through a case study of Meta, this research examines how AI-based content moderation misreads local contexts and systematically marginalizes users. Contributing factors include limited financial investment, inadequate training in local languages, and political and corporate biases. This imbalance reflects broader power asymmetries: governments in the Global South have little influence over platform policies. Adopting a human rights perspective, the study explores solutions through multistakeholder engagement, advocating collaboration among technology companies, governments, and civil society to reform AI governance. Ultimately, it aims to inform regulatory frameworks that ensure fairer, more inclusive content moderation and protect free expression in a globally equitable digital landscape.