Date of Award

Summer 8-2022

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Science

Committee Director

Yaohang Li

Committee Member

Ravi Mukkamala

Committee Member

Fengjiao Wang

Abstract

Large sets of sensors deployed in nearly every practical environment are prone to drifting out of calibration. This drift can be sensor-based, with one or several sensors falling out of calibration, or system-wide, with changes to the physical system causing sensor-reading issues. Recalibrating sensors in either case can be both time and cost prohibitive. Ideally, some technique could be employed between the sensors and the final reading that recovers the drift-free sensor readings. This thesis applies two ensemble learning techniques, stacking and bootstrap aggregation (bagging), to recover drift-free sensor readings from a suite of sensors. The ensembles are composed of two deep learning network types: Long Short-Term Memory (LSTM) networks and Gated Recurrent Unit (GRU) networks. Standalone LSTM and GRU networks were also constructed, trained, and optimized to create a baseline against which the ensemble methods could be compared. The models were compared on Mean Squared Error (MSE), the time and computing resources required for training, and how closely the shape of the output graph matched the drift-free sensor readings.
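
To illustrate the stacking approach described above, the sketch below combines an LSTM and a GRU base learner with a simple linear meta-learner. This is a minimal, hypothetical example: the layer sizes, window length, sensor count, synthetic data, and the choice of linear regression as the meta-learner are assumptions for illustration, not the architecture or dataset used in the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from tensorflow.keras.layers import Input, LSTM, GRU, Dense
from tensorflow.keras.models import Sequential

TIMESTEPS, N_SENSORS = 20, 8  # assumed window length and sensor count

def make_base(cell):
    """Small recurrent regressor mapping a window of drifting readings to drift-free readings."""
    model = Sequential([Input(shape=(TIMESTEPS, N_SENSORS)), cell(32), Dense(N_SENSORS)])
    model.compile(optimizer="adam", loss="mse")  # MSE matches the comparison metric
    return model

# Synthetic stand-in data: drifting inputs x, drift-free targets y.
x_train = np.random.rand(256, TIMESTEPS, N_SENSORS)
y_train = np.random.rand(256, N_SENSORS)
x_hold = np.random.rand(64, TIMESTEPS, N_SENSORS)
y_hold = np.random.rand(64, N_SENSORS)

# Level 0: train the LSTM and GRU base learners independently.
lstm_model, gru_model = make_base(LSTM), make_base(GRU)
lstm_model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
gru_model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)

# Level 1: fit a simple meta-learner on the base learners' held-out predictions.
meta_features = np.hstack([lstm_model.predict(x_hold, verbose=0),
                           gru_model.predict(x_hold, verbose=0)])
meta_learner = LinearRegression().fit(meta_features, y_hold)
```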

Both the stacking and bagging ensembles outperformed the standalone LSTM and GRU models. The stacked ensemble achieved a lower MSE than both standalone models and a similar overall fit, while requiring less time to train than either of them. The bagging ensemble achieved an MSE lower than both standalone models by a factor of nearly 100 and a much tighter fit, though it required nearly 30 times as many CPU seconds to train. In both cases, the ensemble learning methods outperformed the standalone models.
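
For comparison, the sketch below outlines bootstrap aggregation over recurrent base learners. Again, this is an illustrative assumption rather than the thesis's configuration: the number of bags, the use of GRU base learners, the layer sizes, and the synthetic data are all hypothetical. It also makes the cost trade-off visible, since training time grows roughly linearly with the number of bags.

```python
import numpy as np
from tensorflow.keras.layers import Input, GRU, Dense
from tensorflow.keras.models import Sequential

TIMESTEPS, N_SENSORS, N_BAGS = 20, 8, 5  # assumed window length, sensor count, bag count

# Synthetic stand-in data: drifting inputs x, drift-free targets y.
x_train = np.random.rand(256, TIMESTEPS, N_SENSORS)
y_train = np.random.rand(256, N_SENSORS)
x_test = np.random.rand(64, TIMESTEPS, N_SENSORS)

bagged_models = []
for _ in range(N_BAGS):
    # Each bag trains on a bootstrap resample (sampling with replacement) of the windows.
    idx = np.random.randint(0, len(x_train), size=len(x_train))
    m = Sequential([Input(shape=(TIMESTEPS, N_SENSORS)), GRU(32), Dense(N_SENSORS)])
    m.compile(optimizer="adam", loss="mse")
    m.fit(x_train[idx], y_train[idx], epochs=2, batch_size=32, verbose=0)
    bagged_models.append(m)

# Aggregate by averaging the per-model predictions; the extra training cost scales
# with the number of bags, consistent with the higher CPU time noted above.
bagged_pred = np.mean([m.predict(x_test, verbose=0) for m in bagged_models], axis=0)
```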

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DOI

10.25777/hv23-5c81

ISBN

9798352694206

ORCID

0000-0003-4708-681X
