The challenge of composing both sound and moving image within a coherent
computer-mediated framework is addressed, and some of the associated
aesthetic issues are highlighted. A conceptual model for an audiovisual delivery system is
proposed; this model then guides a detailed discussion of
illustrative examples of audiovisual composition. Options for types of score
generated as graphical output of the system are outlined. The need for
extensive algorithmic control of compositional decisions within an
interactive framework is argued. The combination of Tabula Vigilans Audio
Interactive (TVAI), an algorithmic composition language for electroacoustic
music and realtime image generation, with MIDAS, a multiprocessor
audiovisual system platform, is shown to provide the features required by
the conceptual model outlined earlier, and examples of work achieved with
these resources are given. It is shown that new work may ultimately be
distributed efficiently via the World Wide Web, with composers' interactive
scripts delivered remotely but rendered locally by means of a user's
‘rendering black box’.