Optimal approximation of a large matrix by a sum of projected linear mappings on prescribed subspaces


Phil Howlett
Anatoli Torokhti

Abstract

We propose and justify a matrix reduction method for calculating the optimal approximation of an observed matrix $A \in \mathbb{C}^{m \times n}$ by a sum $\sum_{i=1}^p \sum_{j=1}^q B_iX_{ij}C_j$ of matrix products, where the matrices $B_i \in \mathbb{C}^{m \times g_i}$ and $C_j \in \mathbb{C}^{h_j \times n}$ are known and the unknown matrix kernels $X_{ij} \in \mathbb{C}^{g_i \times h_j}$ are determined by minimizing the Frobenius norm of the approximation error. The sum can be represented as a bounded linear mapping $BXC$ with unknown kernel $X$ from a prescribed subspace ${\mathcal T} \subseteq \mathbb{C}^n$ onto a prescribed subspace ${\mathcal S} \subseteq \mathbb{C}^m$, where ${\mathcal T}$ and ${\mathcal S}$ are defined, respectively, by the collective domains of the given matrices $C_1,\ldots,C_q$ and the collective ranges of $B_1,\ldots,B_p$. We show that the optimal kernel is $X = B^{\dagger}AC^{\dagger}$ and that the optimal approximation $BB^{\dagger}AC^{\dagger}C$ is the projection of the observed mapping $A$ onto a mapping from ${\mathcal T}$ to ${\mathcal S}$. If $A$ is large, then $B$ and $C$ may also be large, and direct calculation of $B^{\dagger}$ and $C^{\dagger}$ becomes unwieldy and inefficient. The proposed method avoids this difficulty by reducing the solution process to finding the pseudo-inverses of a collection of much smaller matrices, which significantly reduces the computational burden.
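
The closed-form solution stated in the abstract admits a short numerical check. The sketch below is an illustration only, not the paper's reduction method: it computes the full pseudo-inverses $B^{\dagger}$ and $C^{\dagger}$ directly, which is exactly the cost the paper's method is designed to avoid. It assumes, as the block structure of the sum suggests, that $B$ is the column-concatenation of $B_1,\ldots,B_p$ and $C$ the row-stack of $C_1,\ldots,C_q$; all dimensions and data are illustrative.

```python
# Minimal sketch verifying X = B^† A C^† and the projection form B B^† A C^† C.
# Assumes B = [B_1 ... B_p] (columns concatenated) and C = [C_1; ...; C_q]
# (rows stacked); dimensions g_i, h_j below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 50
g = [4, 6, 5]          # column counts g_i of the B_i (illustrative)
h = [3, 7, 4]          # row counts h_j of the C_j (illustrative)

A = rng.standard_normal((m, n))
B_blocks = [rng.standard_normal((m, gi)) for gi in g]
C_blocks = [rng.standard_normal((hj, n)) for hj in h]

B = np.hstack(B_blocks)            # m x (sum g_i)
C = np.vstack(C_blocks)            # (sum h_j) x n

# Optimal kernel and optimal approximation as stated in the abstract
X = np.linalg.pinv(B) @ A @ np.linalg.pinv(C)
A_hat = B @ X @ C

# A_hat is the projection of A: P_S A P_T with P_S = B B^†, P_T = C^† C
P_S = B @ np.linalg.pinv(B)
P_T = np.linalg.pinv(C) @ C
assert np.allclose(A_hat, P_S @ A @ P_T)

# Sanity check of optimality: perturbing X never decreases the Frobenius error
err = np.linalg.norm(A - A_hat, "fro")
for _ in range(5):
    X_pert = X + 1e-3 * rng.standard_normal(X.shape)
    assert np.linalg.norm(A - B @ X_pert @ C, "fro") >= err - 1e-12
```

The pseudo-inverses here are of the full concatenated matrices, of sizes $m \times \sum_i g_i$ and $\sum_j h_j \times n$; the paper's contribution is to replace these with pseudo-inverses of much smaller matrices when $A$, $B$, and $C$ are large.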
