Date of Award

Spring 2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical & Computer Engineering

Committee Director

Khan M. Iftekharuddin

Committee Member

Jiang Li

Committee Member

Norou Diawara

Committee Member

John W. Harrington

Abstract

Facial expression production and perception in autism spectrum disorder (ASD) suggest the potential presence of behavioral biomarkers that may stratify individuals on the spectrum into prognostic or treatment subgroups. High-speed internet and widely available consumer technology have enabled remote, scalable, affordable, and timely access to medical care, such as measurement of ASD-related behaviors in familiar environments to complement clinical observation. Machine and deep learning (DL)-based analysis of video tracking (VT) of expression production and eye tracking (ET) of expression perception may aid stratification biomarker discovery for children and young adults with ASD. However, open challenges remain in 1) facial expression analysis (FEA) across age groups to overcome domain shift between child and adult expressions, 2) Facial Action Coding System (FACS)-labeled 3D avatar-based stimuli to improve user engagement when eliciting expressions, and 3) assessment of construct validity and group discriminability criteria to discover candidate biomarkers for ASD.

Consequently, this dissertation proposes three goals, completed in collaboration and consultation with a team of Old Dominion University and Eastern Virginia Medical School investigators. The first aim is a novel deep domain adaptation method that fuses DL-based texture features with geometric landmark features for generalized child/adult FEA. Novel facial feature selection for DL is performed using a new statistical method based on a mixture of beta distributions. Our model performs competitively against transfer learning and existing domain adaptation methods on multiple benchmark data sets. Second, we propose FACS-labeled customizable avatars for improved user engagement and DL models for multi-label FACS action unit (AU) detection. The DL models incorporate feature fusion, multi-task learning of AUs and expressions, and a novel beta-guided correlation loss to achieve state-of-the-art AU detection performance on our primary benchmark data set. We report the construct validity of the proposed stimuli and measurements based on a feasibility study of twenty healthy adults. Finally, we conduct an online pilot study of 11 autistic children and young adults and 11 age-/gender-matched neurotypical individuals. Webcam-based ET and VT data are collected while participants recognize and mimic avatar expressions. Extensive statistical analyses, including evaluation of construct validity and group discriminability, identify one candidate ET biomarker and 14 additional ET and VT measurements that may be candidates for more comprehensive future studies with increased sample sizes for validation and clinical translation.

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DOI

10.25777/w99f-ts51

ISBN

9798382770697

ORCID

0000-0002-6578-4657
