The threat of predation and behavioral tactics associated with predator avoidance are reported to play a critical role in primate socioecology (Hill and Dunbar 1998, Terborgh and Janson 1986, van Schaik 1983). In particular, it has been suggested that predation risk can have a significant effect on group size and composition (Cheney and Wrangham 1987, Isbell 1994, Stanford 1998, Treves 1999), vigilance behavior (Burger and Gochfeld 1994), patterns of habitat utilization (Cowlishaw 1999), within-group spacing, and individual foraging success (Cowlishaw 1998). Hill and Dunbar (1998: 412) define predation risk as 'the animals' own perception of the likelihood of being subject to an attack by a predator … it reflects the animals' collective past historical experience of actual attacks by predators and is the basis on which the animals implement their antipredator strategies.' In social animals, individuals may assess predation risk based on their own personal information, as well as by relying on alarm calls and vigilance behavior provided by other group members. Several authors have suggested that when relying on group-based information, including shared vigilance, individuals living in larger social groups experience lower predation risk than conspecifics living in smaller social groups (Janson and Goldsmith 1995, Terborgh 1990, Terborgh and Janson 1986, van Schaik 1983). The mechanisms promoting shared or cooperative vigilance, however, are poorly understood (Lima and Bednekoff 1999).
In this chapter, we examine evidence of predator sensitive foraging in wild tamarins in an attempt to link social interactions, foraging patterns, and predator avoidance behaviors. Specifically, we examine evidence of antipredator behavior in single and mixed-species troops of saddleback (Saguinus fuscicollis weddelli) and emperor (S. imperator imperator) tamarins when foraging at experimental feeding platforms.
More than 25 years ago, Ashbaugh and colleagues described a series of patients whose strikingly uniform clinical, physiologic, roentgenographic, and pathologic abnormalities distinguished them from other patients who developed respiratory failure. This syndrome has since become known as the adult or acute respiratory distress syndrome (ARDS). Despite extensive research and literature devoted to ARDS, overall mortality rates have remained in excess of 40%. The inability to find new therapeutic modalities that decrease mortality rates from this syndrome has been a source of disappointment in this field. Consequently, prevention or early intervention appears to be an important and necessary approach in the management of ARDS. The high mortality rate and lack of success of new interventions have also led to a reevaluation of our basic understanding of ARDS. Thus, revisiting the epidemiology of this syndrome is of paramount importance. Determining the incidence of ARDS and establishing its risk factors may yield the information needed to develop preventive strategies or targeted early therapy, offering hope for improved outcomes in patients afflicted with this syndrome.
In this chapter, we begin by describing possible study designs used in determining epidemiologic features of any disease, including ARDS, in order to better appreciate the strengths and weaknesses of the existing literature. We then describe the major studies estimating the incidence, risk factors, and case-fatality rate of ARDS. We also briefly discuss the long-term outcomes of this illness. In order to provide the reader with the most reliable epidemiologic inferences from the literature, we have primarily based our comments on a systematic search, selection, and appraisal of the published literature.