Date of Award

Summer 2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mathematics & Statistics

Program/Concentration

Computational and Applied Mathematics

Committee Director

Ke Shi

Committee Member

Yan Peng

Committee Member

Ruhai Zhou

Committee Member

Yongjin Lu

Abstract

Many scientific and engineering problems are multiscale in nature. This multiscale character complicates simulation whenever large disparities in spatial scales are present; notable examples include composite materials, fluid flow through porous media, and turbulent transport in high Reynolds number flows. Despite the advances of modern supercomputers, obtaining direct numerical solutions of multiscale problems remains laborious because of the tremendous amount of computer memory and CPU time required. Parallel computing is one obvious remedy, but it does not reduce the complexity or size of the discrete problem. The goal of this dissertation is to design a multiscale model reduction framework within the hybridizable discontinuous Galerkin (HDG) finite element method. We use local snapshots that incorporate local features of the solution space to construct a lower dimensional trace space, which allows us to avoid a high dimensional representation of the trace space. Furthermore, instead of a standard polynomial basis, we leverage localized multiscale basis functions to capture the multiscale structure of the solution. These basis functions carry essential multiscale information embedded in the solution and yield better approximations through coarse space enrichment, while preserving orthogonality and a sparse representation. With these tools, we can construct coarse scale solutions accurately and efficiently without solving a global fine scale system. We further improve the efficiency of our method with a neural network trained to learn the solution map of our model; the training is done on a single coarse block rather than the entire domain.
A significant advantage of this approach is that, once trained, the network can be used for any geometry and parameter distribution without retraining, and the local solvers allow us to avoid a time-consuming global assembly.
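The snapshot-based reduction idea in the abstract can be sketched in generic terms: collect local snapshot solutions on one coarse block, extract the dominant modes, and use those few modes as an enriched, orthonormal coarse basis. The sketch below is only an illustration of that idea with synthetic data (the snapshot matrix, decay rate, and mode count are assumptions); it is not the dissertation's HDG trace-space construction.

```python
import numpy as np

# Hedged sketch: snapshot-based dimension reduction on a single coarse block.
# Synthetic stand-in for local snapshots; all sizes below are assumptions.

rng = np.random.default_rng(0)

n_fine = 200        # fine-scale degrees of freedom on one coarse block
n_snapshots = 30    # number of local snapshot solutions

# Synthetic snapshots with rapidly decaying content, mimicking a solution
# space that is well approximated by a few multiscale modes.
modes = rng.standard_normal((n_fine, n_snapshots))
decay = np.diag(2.0 ** -np.arange(n_snapshots))
snapshots = modes @ decay

# SVD of the snapshot matrix; the left singular vectors give an
# orthonormal basis ordered by importance.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Enrich the coarse space with the dominant modes only.
n_keep = 5
basis = U[:, :n_keep]  # reduced, orthonormal multiscale basis

# Orthogonality is preserved by construction.
print(np.allclose(basis.T @ basis, np.eye(n_keep)))  # True
```

Because the construction is local to one block, the same reduction can be repeated independently on every coarse block, which is what makes it possible to avoid assembling and solving a global fine-scale system.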

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DOI

10.25777/e475-5362

ISBN

9798384455776
