This chapter introduces the Conway–Maxwell–Poisson regression model, along with adaptations of the model to account for zero-inflation, censoring, and data clustering. Section 5.1 motivates the consideration and development of the various COM–Poisson regressions. Section 5.2 introduces the regression model and discusses related issues, including parameter estimation, hypothesis testing, and statistical computing in R. Section 5.3 advances that work to address excess zeroes, while Section 5.4 describes COM–Poisson models that incorporate repeated measures and longitudinal studies. Section 5.5 focuses attention on the R statistical packages and functionality associated with regression analysis that accommodates excess zeros and/or clustered data as described in the two previous sections. Section 5.6 considers a general additive model based on the COM–Poisson. Finally, Section 5.7 informs readers of other statistical computing software that is also available for conducting COM–Poisson regression, discussing the associated functionality. The chapter concludes with discussion.
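To make the estimation problem concrete, the sketch below writes out a negative log-likelihood for a COM–Poisson regression under one common formulation: a log link on the rate parameter, log(lam_i) = x_i · beta, with a shared dispersion parameter nu. This is an illustrative assumption, not necessarily the exact parametrization used in the chapter, and the infinite normalizing sum is truncated.

```python
import math

def com_poisson_nll(beta, nu, X, y, terms=200):
    """Negative log-likelihood for a COM-Poisson regression with a log link
    on the rate parameter (an assumed, common formulation). The normalizing
    constant Z(lam, nu) is approximated by truncating its series at `terms`."""
    nll = 0.0
    for xi, yi in zip(X, y):
        log_lam = sum(b * v for b, v in zip(beta, xi))
        # log Z(lam, nu) = log sum_j lam^j / (j!)^nu, computed in log scale
        logs = [j * log_lam - nu * math.lgamma(j + 1) for j in range(terms)]
        m = max(logs)
        log_z = m + math.log(sum(math.exp(t - m) for t in logs))
        # log-likelihood contribution: y*log(lam) - nu*log(y!) - log Z
        nll -= yi * log_lam - nu * math.lgamma(yi + 1) - log_z
    return nll
```

A useful sanity check is that with nu = 1 this reduces to the ordinary Poisson log-linear model's likelihood; in practice one would pass this objective to a numerical optimizer to estimate beta and nu.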
The sharp and rigid dichotomy between public and private corporations is a hallmark of securities regulation, but it has become outdated. The recent emergence, and dominance, of large private companies—once called “unicorns” for their rarity but now numbering in the hundreds—undermines essential assumptions behind securities regulations. This Chapter explores how the rise of these gargantuan private companies was brought on by the public’s eroding confidence in public companies and exacerbated by an ill-equipped government producing ineffective, reactionary legislation. Specifically, the Penny Stock Reform Act of 1990 and the Sarbanes-Oxley Act of 2002 resulted in a muddied pool of legitimate and fraudulent investment opportunities. Ordinary investors were left clamoring for new investment opportunities and have shifted their gaze to the wilderness of the unregulated cryptocurrency market.
Chapter 9 examines the bubble in internet and other technology stocks that occurred at the end of the 1990s. This bubble witnessed the coming to market of many young firms which had never generated a profit. The excitement resulted in the NASDAQ index trebling in value in the 18 months prior to its peak in March 2000. By the end of 2000, however, it had lost more than half of its value. This bubble in tech stocks was not confined to the United States – it was a global phenomenon. The chapter then uses the bubble triangle to explain the causes of the dot-com bubble. The spark was provided by the new internet technology. Marketability increased as a result of new technology and many more companies floating on stock exchanges. Monetary conditions were loose in the runup of the bubble and there was a sharp rise in margin lending. Speculation was rampant in the runup, thanks to the rise of the day trader. The chapter concludes by arguing that the modest levels of economic damage associated with the bursting of the dot-com bubble suggest it could have been useful. However, its minor economic impact might also have made the authorities and investors complacent about the housing bubble which followed on its heels.
While the Poisson model motivated much of the classical control chart theory for count data, several works note the constraining equi-dispersion assumption. Dispersion must be addressed because over-dispersed data can produce false out-of-control detections when using Poisson limits, while under-dispersed data will produce Poisson limits that are too broad, resulting in potential false negatives and out-of-control states requiring a longer study period for detection. Section 6.1 introduces the Shewhart COM–Poisson control chart, demonstrating its flexibility in assessing in- or out-of-control status, along with advancements made to this chart. These initial works lead to a wellspring of flexible control chart development motivated by the COM–Poisson distribution. Section 6.2 describes a generalized exponentially weighted moving average control chart, and Section 6.3 describes the cumulative sum charts for monitoring COM–Poisson processes. Meanwhile, Section 6.4 introduces generally weighted moving average charts based on the COM-Poisson, and Section 6.5 presents the Conway–Maxwell–Poisson chart via the progressive mean statistic. Finally, the chapter concludes with discussion.
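To illustrate why dispersion matters for chart limits, the sketch below computes probability-based Shewhart control limits from a COM–Poisson distribution, with the normalizing constant approximated by a truncated sum. The parametrization (rate lam, dispersion nu) and the choice of a symmetric false-alarm rate alpha are illustrative assumptions, not the chapter's specific design.

```python
import math

def com_poisson_cdf(lam, nu, terms=300):
    """Cumulative probabilities F(0), F(1), ... for a COM-Poisson(lam, nu),
    truncating the normalizing series at `terms`."""
    logs = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(logs)
    z = sum(math.exp(t - m) for t in logs)
    cdf, acc = [], 0.0
    for t in logs:
        acc += math.exp(t - m) / z
        cdf.append(acc)
    return cdf

def shewhart_limits(lam, nu, alpha=0.0027):
    """Probability limits: the smallest counts whose cumulative probability
    reaches alpha/2 and 1 - alpha/2, respectively."""
    cdf = com_poisson_cdf(lam, nu)
    lcl = next(x for x, F in enumerate(cdf) if F >= alpha / 2)
    ucl = next(x for x, F in enumerate(cdf) if F >= 1 - alpha / 2)
    return lcl, ucl
```

With nu = 1 this reproduces Poisson-based limits; nu > 1 (under-dispersion) tightens the limits and nu < 1 (over-dispersion) widens them, which is precisely the adjustment that Poisson charts cannot make.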
In 1995, the OMG published a COM/CORBA Interworking Request for Proposals (RFP). The RFP was composed of two parts. Part A dealt with interworking between CORBA and the commercially available implementation of COM. Part B dealt with interworking between CORBA and DCOM, which was still in development at that time. The OMG ratified Part A of the COM/CORBA Interworking Specification in 1996 and Part B in 1998. There are currently commercial implementations of Part A.
Our goal in this appendix is to take a look at the concepts and considerations put forth in the specification. To begin, we will consider motivations for COM/CORBA integration. Then, we will give a very brief overview of COM. Moving into the meat of our topic, we will discuss a conceptual model for bridging, examine features common to COM and CORBA, and investigate mapping issues. We will look at locating and managing distributed objects from the perspectives of both COM and CORBA. We will conclude by examining COM/CORBA distribution issues.
From Whence We COM
COM evolved from OLE (Object Linking and Embedding), a technology which was developed for the single-user, single-machine environment of Windows 3.1. OLE enabled users to create and manage compound documents, thereby maximizing code reuse within and across applications on the Windows platform. OLE2 was designed to extend the paradigm to the component level. OLE2 interfaces and protocols mediate dynamic component interaction on a desktop.
Survival analysis studies the time-to-event for various subjects. In the biological and medical sciences, interest can focus on patient time to death due to various (competing) causes. In engineering reliability, one may study the time to component failure due to analogous factors or stimuli. Cure rate models serve a particular interest because, with advancements in associated disciplines, subjects can be viewed as “cured”, meaning that they do not show any recurrence of a disease (in biomedical studies) or subsequent manufacturing error (in engineering) following a treatment. This chapter generalizes two classical cure-rate models via the development of a COM–Poisson cure rate model. The chapter first describes the COM–Poisson cure rate model framework and general notation, and then details the model framework assuming right and interval censoring, respectively. The chapter then describes the broader destructive COM–Poisson cure rate model, which allows for the number of competing risks to diminish via damage or eradication. Finally, the chapter details the various lifetime distributions considered in the literature to date for COM–Poisson-based cure rate modeling.
A multivariate Poisson distribution is a natural choice for modeling count data stemming from correlated random variables; however, it is limited by the underlying univariate model assumption that the data are equi-dispersed. Alternative models include a multivariate negative binomial and a multivariate generalized Poisson distribution, which themselves suffer from analogous limitations as described in Chapter 1. While the aforementioned distributions motivate the need to instead consider a multivariate analog of the univariate COM–Poisson, such model development varies in order to take into account (or results in) certain distributional qualities. This chapter summarizes such efforts where, for each approach, readers will first learn about any bivariate COM–Poisson distribution formulations, followed by any multivariate analogs. Accordingly, because these models are multidimensional generalizations of the univariate COM–Poisson, they each contain their analogous forms of the Poisson, Bernoulli, and geometric distributions as special cases. The methods discussed in this chapter are the trivariate reduction, compounding, Sarmanov family of distributions, and copulas.
Of the Reconstruction and Development Programme (RDP)'s stated aspiration to ‘fundamental transformation’ (African National Congress (ANC), 1994), an essay by the late Harold Wolpe (1995) noted that the ways in which the document ‘eradicated sources of contradiction and conflict by asserting a consensual model of society’ (Hart, 2007) meant that the very notion of fundamental transformation threatened to become a source of contestation. Seventeen years on, not only has this prediction turned out to be remarkably accurate, but politics in South Africa today seems ever more sharply polarised over the content of ‘the promise of liberation’ (Veriava, 2011).
As the ANC government has attempted to perfect and link its growth and development strategies, increasingly entrenching its neoliberal approach (from the RDP to the Growth, Employment and Redistribution Strategy (Gear) to the Accelerated and Shared Growth Initiative (Asgisa) and now the New Growth Path (NGP)), it has attempted to define the promise and the possibilities for its realisation according to the rationalities and limitations of this model. In so doing, it has come up against community and social movements that have put forward different understandings of the promise and asserted that many of the ANC's claims to realising the promise have been compromises, aimed at ensuring the reproduction of the fragile coalition between business, labour and government that has determined the nature of the transition (Ballard et al., 2006; Gibson, 2006; McKinley and Naidoo, 2004).
While in the pre-2006 period the leadership of the ANC Alliance was at pains to silence any hint of criticism of its policies from within its ranks, by 2007 and the showdown at Polokwane, differences and conflicts between members and factions of the alliance were being played out in the media. This allowed the alliance to re-present itself as a contested space in which debate and critique are cultivated, permitting change to happen from within, a riposte to those critics who, on the basis of the experiences of the late 1990s and early 2000s, had declared the ANC Alliance to be a space in which dissent and disagreement are silenced and contained.
This chapter defines the COM–Poisson distribution in greater detail, discussing its associated attributes and the computing tools available for analysis. The chapter first details how the COM–Poisson distribution was derived, then describes the probability distribution and introduces computing functions available in R that can be used to determine various probabilistic quantities of interest, including the normalizing constant, probability and cumulative distribution functions, random number generation, mean, and variance. The chapter then outlines the distributional and statistical properties associated with this model, and discusses parameter estimation and statistical inference for the COM–Poisson model. Various processes for generating random data are then discussed, along with the associated R computing tools. Continued discussion provides reparametrizations of the density function that serve as alternative forms for statistical analyses and model development, considers the COM–Poisson as a weighted Poisson distribution, and describes the various ways to approximate the COM–Poisson normalizing constant.
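As a minimal illustration of the quantities just listed, the sketch below evaluates the COM–Poisson pmf P(X = x) = lam^x / (x!)^nu / Z(lam, nu), approximating the infinite normalizing sum Z by truncation and working in log scale for numerical stability. The truncation point and function name are the author's illustrative choices, not part of any particular R package.

```python
import math

def com_poisson_pmf(x, lam, nu, terms=200):
    """COM-Poisson pmf with the normalizing constant
    Z(lam, nu) = sum_j lam^j / (j!)^nu truncated at `terms` terms."""
    # log-scale series terms j*log(lam) - nu*log(j!) to avoid overflow
    logs = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(logs)
    log_z = m + math.log(sum(math.exp(t - m) for t in logs))
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)
```

Setting nu = 1 recovers the ordinary Poisson pmf, while nu > 1 and nu < 1 give under- and over-dispersed special cases respectively; summing the pmf over a sufficiently wide range of x should return (approximately) one.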
It seems almost a truism to say that the Internet has changed the world. One can hardly find an area of economic life not strongly influenced by today's flow of information. Important aspects of people's lives have become dependent on a binary language manifested in bits and bytes. Furthermore, this new order is certainly not limited to business or the economy. In fact, it applies to the whole issue of globalisation, such as the emergence of new political movements based on networks.