Generative A.I. deepfake images, videos, and other synthetic content, commonly referred to as A.I. slop, are overtaking social media apps and other digital platforms, creating a media ecosystem defined by content chaos. Big tech companies' decisions to program algorithms that favor, and monetarily reward, emotionally triggering material continue to amplify this information pollution. As a climate media scholar, the author argues that popularizing the terminology of information pollution, which frames generative A.I. slop as contaminating social media apps and platforms, would focus attention on its social and political harms: such content fuels disinformation campaigns and diminishes public trust. Labeling A.I. slop with a pollution metaphor also brings into view the environmental harms of generative artificial intelligence. Examples discussed include the cratered monetary value of A.I.-generated art, the A.I. agent-only social network Moltbook, misinformation and disinformation following extreme weather disasters, and a creative direct action by anti-data center activists in Quilicura, Chile.