Date of Award

Spring 1990

Document Type

Dissertation


Degree Name

Doctor of Philosophy (PhD)


Department/Program

Mathematics and Statistics


Program/Concentration

Computational and Applied Mathematics

Committee Director

Ram C. Dahiya

Committee Member

Bruce Rubin

Committee Member

N. Rao Chaganty


Abstract

The problem addressed in this dissertation is the existence and estimation of the parameters of a truncated Cauchy distribution. It is known that when a number of distributions with infinite support are truncated to a finite interval, the maximum likelihood (ML) estimator of the scale parameter fails to exist with positive probability. In particular, necessary and sufficient conditions giving rise to instances of non-existence have been found for the exponential (Deemer and Votaw (1955)), gamma (Broeder (1955), Hegde and Dahiya (1989)), Weibull (Mittal and Dahiya (1989)), and normal (Barndorff-Nielsen (1978), Mittal and Dahiya (1987), Hegde and Dahiya (1989)) distributions. Alternative estimators have been proposed to deal with the problems of non-existence and "blowing up" of the estimates. Mittal and Dahiya (1987, 1989) employ the Bayes model estimator of Blumenthal and Marcus (1975) for the normal and Weibull cases, and Hegde and Dahiya (1989) apply it to the gamma. Hegde (1986) also studies the harmonic mean estimator of Joe and Reid (1984). Here we prove a condition for the existence of the ML estimator of the scale parameter of the truncated Cauchy distribution that is sufficient and asymptotically necessary. A modified ML estimator and an estimator based on equating population and sample quantiles are presented as alternatives; these estimators exist with probability one. Their performance is examined via simulation, and asymptotic variances of the ML estimators are also given.
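The ML estimation problem described above can be illustrated numerically. The following is a minimal sketch, not the dissertation's actual procedure: it assumes a known location of 0, a hypothetical truncation interval [-5, 5], a true scale of 1, and uses SciPy's bounded scalar minimizer on the negative log-likelihood of the scale parameter.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical truncation interval; the dissertation's setup may differ.
A, B = -5.0, 5.0

def trunc_cauchy_negloglik(sigma, x):
    """Negative log-likelihood of Cauchy(0, sigma) truncated to [A, B]."""
    mass = stats.cauchy.cdf(B, scale=sigma) - stats.cauchy.cdf(A, scale=sigma)
    return -(stats.cauchy.logpdf(x, scale=sigma) - np.log(mass)).sum()

# Draw a truncated-Cauchy sample (true scale 1) by rejection sampling.
rng = np.random.default_rng(0)
raw = stats.cauchy.rvs(scale=1.0, size=20000, random_state=rng)
x = raw[(raw >= A) & (raw <= B)][:2000]

# Maximize the likelihood over the scale parameter alone.
res = optimize.minimize_scalar(trunc_cauchy_negloglik, args=(x,),
                               bounds=(1e-3, 50.0), method="bounded")
sigma_hat = res.x  # should land near the true scale of 1
```

On samples where the likelihood has no interior maximum (the non-existence cases studied in the dissertation), a bounded optimizer like this one simply returns a boundary value, which is why alternative estimators that exist with probability one are of interest.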

Finally, an application of truncated distributions is presented. The fit of returns on common stocks to the normal, Cauchy, truncated normal, and truncated Cauchy distributions is compared via the Kolmogorov-Smirnov statistic. The results show that a truncated distribution provides a better-fitting model in virtually all cases.
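The kind of comparison described above can be sketched with SciPy's Kolmogorov-Smirnov test. This uses synthetic data in place of the stock returns, and hypothetical, known parameters rather than fitted ones; it only shows the mechanics of comparing a full and a truncated model by K-S distance.

```python
import numpy as np
from scipy import stats

def trunc_cauchy_cdf(x, loc, scale, a, b):
    """CDF of Cauchy(loc, scale) truncated to [a, b]."""
    fa = stats.cauchy.cdf(a, loc, scale)
    fb = stats.cauchy.cdf(b, loc, scale)
    return np.clip((stats.cauchy.cdf(x, loc, scale) - fa) / (fb - fa), 0.0, 1.0)

# Synthetic stand-in for bounded return data: a Cauchy sample restricted
# (by rejection) to [-3, 3]; real stock returns would replace this.
rng = np.random.default_rng(1)
raw = stats.cauchy.rvs(size=50000, random_state=rng)
data = raw[np.abs(raw) <= 3.0][:1000]

# K-S distance of the data to the full and to the truncated model.
ks_full = stats.kstest(data, stats.cauchy.cdf).statistic
ks_trunc = stats.kstest(
    data, lambda t: trunc_cauchy_cdf(t, 0.0, 1.0, -3.0, 3.0)).statistic
```

Because the data are confined to a finite interval, the truncated model tracks the empirical CDF much more closely, so `ks_trunc` comes out smaller than `ks_full`, mirroring the pattern the dissertation reports for stock returns.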