Foundation models are many things and encompass several modalities: they work with text, images, sound and, more recently, action or inference units. But all of these forms share one thing: (massive) scale. The “large” in large language models has been well studied by scholars in critical data, AI and archive studies, several of whom point out that these models are environmentally harmful, technically opaque and corporately monopolistic primarily because of their scale. This piece discusses questions of technical and cultural scale – in the material, archival and procedural senses – within the contemporary technical and discursive landscape. At stake here is the role of critical and design studies within academic, artistic and para-academic worlds. The piece suggests that instead of corporate chatbots that aspire to pass the Turing test through multipurpose, encyclopedic service, we may be better served by playing with local models and pursuing small-scale AI development. This epistemological shift may also open up creative and critical potential that more effectively gets at the strangeness of machine learning systems, while consciously and carefully handling the scalar environmental and social impacts of big AI.