Date of Award

Spring 2002

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Engineering Management & Systems Engineering

Committee Director

Derya A. Jacobs

Committee Member

Abel A. Fernandez

Committee Member

Charles B. Keating

Committee Member

Resit Unal

Committee Member

Mark Scerbo

Abstract

Multiple linear regression techniques have traditionally been used to construct predictive statistical models, relating one or more independent variables (inputs) to a dependent variable (output). Artificial neural networks can also be constructed and trained to learn these complex relationships, and have been shown to perform at least as well as linear regression on the same data sets. Research on the use of neural network models as alternatives to multivariate linear regression has focused predominantly on the effects of sample size, noise, and input vector size on the comparative performance of these two modeling techniques. However, research has also shown that a mis-specified regression model or an incorrect neural network architecture contributes significantly to poor model performance. This dissertation compares the effects on model performance of various formulations of regression and neural network models, measuring performance in terms of mean squared error and variance. A factorial experiment is conducted in which model parameters are varied. Simulated data from three different functions are used to generate training and testing data sets. Statistical tests are used to determine differences in performance as well as the degree of model robustness, or the degree to which model performance is insensitive to changes in model formulation. Based on the experimental results and conclusions, a predictive modeling methodology is proposed that capitalizes on the advantages of both neural network and regression approaches and assists practitioners in constructing accurate and robust predictive models.
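The comparison the abstract describes can be illustrated with a minimal sketch: fit multiple linear regression and a small one-hidden-layer neural network to simulated data from a nonlinear function, then compare test-set mean squared error. The generating function, noise level, network size, and training settings below are illustrative assumptions only, not the dissertation's actual test functions or experimental design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a hypothetical nonlinear function with additive noise
# (a stand-in for the dissertation's simulated test functions, not taken from it).
n = 200
X = rng.uniform(-2, 2, size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

# Split into training and testing sets.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# Multiple linear regression via least squares on an intercept-augmented design matrix.
A = np.column_stack([np.ones(len(X_tr)), X_tr])
beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
pred_lr = np.column_stack([np.ones(len(X_te)), X_te]) @ beta
mse_lr = np.mean((pred_lr - y_te) ** 2)

# One-hidden-layer tanh network trained with batch gradient descent on squared error.
h = 8  # hidden units (an arbitrary illustrative choice)
W1 = rng.normal(0, 0.5, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h);      b2 = 0.0
lr = 0.05
for _ in range(5000):
    Z = np.tanh(X_tr @ W1 + b1)      # hidden-layer activations
    out = Z @ W2 + b2                # network output
    err = out - y_tr
    gW2 = Z.T @ err / len(y_tr)      # gradients of mean squared error
    gb2 = err.mean()
    dZ = np.outer(err, W2) * (1 - Z ** 2)
    gW1 = X_tr.T @ dZ / len(y_tr)
    gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred_nn = np.tanh(X_te @ W1 + b1) @ W2 + b2
mse_nn = np.mean((pred_nn - y_te) ** 2)
print(f"MSE (linear regression): {mse_lr:.4f}")
print(f"MSE (neural network):    {mse_nn:.4f}")
```

In the full study this comparison would be repeated across the factorial design (varying model formulation, sample size, and noise) and the resulting MSEs compared with statistical tests; the single run above only shows the basic measurement.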

DOI

10.25777/5ba0-5475

ISBN

9780599754584