Date of Award

Summer 2003

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)


Department

Engineering Management & Systems Engineering

Committee Director

Resit Unal

Committee Member

Charles B. Keating

Committee Member

Andres Sousa-Poza

Committee Member

Mary Kae Lockwood


Abstract

This dissertation describes the development of an expert judgment calibration methodology, applied during the elicitation of expert judgments, to assist in quantifying parameter uncertainty for proposed new aerospace vehicles. Previous work has shown that experts in aerospace systems design and development can provide valuable input into the sizing and conceptual design of future space launch vehicles employing advanced technology. In particular (and of specific interest in this case), experts are frequently asked to assess the operations and support cost implications of adopting proposed new technology. Often this input, consisting of estimates and opinions, is imprecise and may be offered with less than a high degree of confidence in its efficacy. Since the sizing and design of advanced space or launch vehicles must ultimately have costs attached to them (for subsequent program advocacy and tradeoff studies), the lack of precision in parameter estimates is detrimental to the development of viable cost models to support that advocacy and those tradeoffs. It is postulated that a system that could accurately apply a measure of calibration to the imprecise and/or low-confidence estimates of the surveyed experts would greatly enhance the derived parametric data. The development of such a calibration aid has been the thrust of this effort. Bayesian network methodology, augmented by uncertainty modeling and aggregation techniques, among others, was employed in constructing the tool. Appropriate survey questionnaire instruments were compiled for acquiring the experts' input; the responses served as input to a test case for validating the resulting calibration model. The derived techniques were applied as part of a larger expert assessment elicitation and aggregation study. Results of this research show that calibration of expert judgments, particularly for far-term events, appears to be possible. Suggestions for refinement and extension of the development are presented.
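To illustrate the kind of calibration-weighted aggregation the abstract alludes to, the sketch below shows a linear opinion pool in which each expert's estimate is weighted by a calibration score. This is a generic, standard technique presented for illustration only; it is not the dissertation's Bayesian network model, and all names, estimates, and scores here are hypothetical.

```python
# Illustrative sketch only: a calibration-weighted linear opinion pool,
# one common way to aggregate expert parameter estimates. The values
# below are hypothetical, not from the dissertation.

def aggregate_estimates(estimates, calibration_scores):
    """Combine expert point estimates using weights proportional to
    each expert's calibration score (e.g., performance on seed
    questions with known answers)."""
    total = sum(calibration_scores)
    weights = [s / total for s in calibration_scores]
    return sum(w * e for w, e in zip(weights, estimates))

# Three experts estimate an operations-cost parameter (hypothetical
# units); calibration scores already sum to 1 here.
estimates = [120.0, 150.0, 90.0]
scores = [0.6, 0.3, 0.1]
combined = aggregate_estimates(estimates, scores)
# combined = 0.6*120 + 0.3*150 + 0.1*90 = 126.0
```

In this scheme a poorly calibrated expert's imprecise estimate is down-weighted rather than discarded, which matches the abstract's goal of enhancing, not replacing, the elicited parametric data.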