Date of Award

Winter 1989

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Mathematics & Statistics

Program/Concentration

Computational and Applied Mathematics

Committee Director

W. D. Lakin

Committee Member

John Tweed

Committee Member

Stan Weinstein

Committee Member

John Swetits

Abstract

This dissertation is devoted to the acceleration of convergence of vector sequences: given a convergent vector sequence, the goal is to produce a replacement sequence with a higher rate of convergence.

It is assumed that the sequence is generated by a linear matrix iteration x_{i+1} = Gx_i + k, where G is an n x n matrix and x_{i+1}, x_i, and k are n x 1 vectors. Acceleration of convergence is obtained when we are able to resolve approximations to low-dimensional invariant subspaces of G which contain large components of the error. When this occurs, simple weighted averages of the iterates x_{i+1}, i = 1, 2, ..., k, where k < n, are used to produce iterates which contain approximately no error in those low-dimensional invariant subspaces. We begin with simple techniques based upon the resolution of a simple dominant eigenvalue/eigenvector pair and extend the notion to higher-dimensional invariant subspaces; a sketch of this simplest case appears below. Various subspace iteration methods and their convergence are discussed. These ideas are further generalized by solving the eigenproblem for a projection of G onto an appropriate subspace. The use of Lanczos-type methods for establishing these projections is discussed.
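
The following is a minimal sketch of the dominant eigenvalue/eigenvector idea, not the dissertation's full method: it estimates a single dominant eigenvalue of G from successive differences of the iterates and forms the corresponding weighted combination of the last iterates. The helper name, the numpy dependency, and the small test iteration are illustrative assumptions.

```python
import numpy as np

def accelerate_dominant(x_prev, x_curr, x_next):
    """One-step acceleration assuming the error is dominated by a single
    eigenvector of G.

    Differences d_i = x_{i+1} - x_i satisfy d_{i+1} = G d_i, so the ratio of
    successive differences estimates the dominant eigenvalue lam; the error in
    the latest iterate is then approximately lam/(lam - 1) times the latest
    difference, and subtracting it removes the dominant error component."""
    d0 = x_curr - x_prev
    d1 = x_next - x_curr
    lam = np.dot(d1, d0) / np.dot(d0, d0)   # dominant eigenvalue estimate
    return x_next - lam / (lam - 1.0) * d1

# Hypothetical test iteration x_{i+1} = G x_i + k with spectral radius < 1.
rng = np.random.default_rng(0)
G = 0.9 * rng.random((5, 5)) / 5
k = rng.random(5)
x_star = np.linalg.solve(np.eye(5) - G, k)   # fixed point, for checking only

xs = [np.zeros(5)]
for _ in range(6):
    xs.append(G @ xs[-1] + k)

x_acc = accelerate_dominant(xs[-3], xs[-2], xs[-1])
print(np.linalg.norm(xs[-1] - x_star), np.linalg.norm(x_acc - x_star))
```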

We produce acceleration techniques based on the process of generalized inversion. The relationship between the minimal polynomial extrapolation technique (MPE) for acceleration of convergence and conjugate-gradient-type methods is explored; a minimal sketch of MPE is given below. Further acceleration techniques are formed from conjugate-gradient-type techniques and a generalized-inverse Newton's method.
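
For concreteness, here is a minimal sketch of MPE in its usual least-squares formulation (coefficients determined from the first differences of the iterates). The function name and numpy-based setup are illustrative assumptions, not the dissertation's code.

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation from iterates xs = [x_0, ..., x_{k+1}].

    Builds the first differences u_j = x_{j+1} - x_j, solves the least-squares
    problem sum_{j<k} c_j u_j ~ -u_k with c_k = 1, and returns the weighted
    average sum_j gamma_j x_j with gamma_j = c_j / sum(c)."""
    X = np.column_stack(xs)
    U = np.diff(X, axis=1)                  # columns u_0, ..., u_k
    k = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(c, 1.0)                   # c_k = 1
    gamma = c / c.sum()                     # assumes sum(c) != 0
    return X[:, : k + 1] @ gamma
```

Applied to iterates of a linear iteration x_{i+1} = Gx_i + k, this weighted average is exact once k reaches the degree of the minimal polynomial of G with respect to the initial error, which is the connection to Krylov/conjugate-gradient-type methods explored in the dissertation.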

An exposition is given of acceleration techniques based upon generalizations of rational interpolation and Padé approximation. Further acceleration techniques using Sherman-Morrison-Woodbury-type formulas are formulated and suggested as a replacement for the E-transform; the underlying rank-one update is sketched below.
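
As a reminder of the building block involved, the following sketch verifies the rank-one Sherman-Morrison update that underlies the Sherman-Morrison-Woodbury-type formulas mentioned above; it is not the proposed E-transform replacement itself, and the names and numpy-based check are illustrative assumptions.

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Rank-one update: returns (A + u v^T)^{-1} given A^{-1},
    valid whenever 1 + v^T A^{-1} u != 0."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

# Quick numerical check against a direct inverse.
rng = np.random.default_rng(1)
A = np.eye(4) + 0.1 * rng.random((4, 4))
u, v = rng.random(4), rng.random(4)
lhs = sherman_morrison(np.linalg.inv(A), u, v)
rhs = np.linalg.inv(A + np.outer(u, v))
print(np.allclose(lhs, rhs))   # expected: True
```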

We contrast the effect of several extrapolation techniques drawn from the dissertation on a nonsymmetric linear iteration. We pick the Minimal Polynomial Extrapolation (MPE) as a representative of techniques based on orthogonal residuals, the Vector $\epsilon$-Algorithm (VEA, sketched below) as a representative vector interpolation technique, and a technique formulated in this dissertation based on solving a projected eigenproblem. The results show the projected eigenproblem technique to be superior for certain iterations.
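
Of the three techniques compared, the vector $\epsilon$-algorithm is the easiest to state compactly. The following is a minimal sketch using the Samelson vector inverse, with the even columns of the $\epsilon$-table taken as the accelerated estimates; the function names and numpy dependency are assumptions made for illustration.

```python
import numpy as np

def samelson_inverse(v):
    # Samelson (generalized) inverse of a real vector: v^{-1} = v / (v . v)
    return v / np.dot(v, v)

def vector_epsilon(xs, k):
    """Vector epsilon-algorithm: returns eps_{2k}^{(0)} from the iterates xs.

    Recursion: eps_{c+1}^{(n)} = eps_{c-1}^{(n+1)}
               + [eps_c^{(n+1)} - eps_c^{(n)}]^{-1}  (Samelson inverse).
    Requires at least 2k + 1 iterates; breaks down if two table entries
    in a column coincide."""
    N = len(xs)
    prev = [np.zeros_like(xs[0]) for _ in range(N + 1)]   # column -1
    curr = [np.asarray(x, dtype=float) for x in xs]       # column 0
    for _ in range(2 * k):
        nxt = [prev[n + 1] + samelson_inverse(curr[n + 1] - curr[n])
               for n in range(len(curr) - 1)]
        prev, curr = curr, nxt
    return curr[0]   # even columns carry the accelerated estimates
```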

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DOI

10.25777/fnyf-vb61
