Eigenfaces
Principal Component Analysis (PCA) is a statistical technique for analyzing biometric data, and it is particularly useful in facial recognition and deep machine learning.
With Dr. Tim Tribone and our research group members, Storey Peacock and Sathya Tadinada, I investigated and summarized an application in which a Singular Value Decomposition (SVD) of a complex-valued matrix was more useful than one over the real numbers (1).
A key point of this particular SVD is that the matrix being decomposed is quite large: it contains the grayscale pixel values of every image, and often there are many images. The SVD can be viewed as a dimension-reduction algorithm because it compresses all of that information into a much more manageable form. The most important pieces of information - in this case, the significant features common across the images of human faces - appear in the leading singular vectors. It's like magic! It is easy to see why this algorithm is useful for tasks like facial recognition.
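As a rough illustration of that pipeline, here is a minimal sketch in Python with NumPy; the function name eigenfaces, the use of NumPy's svd, and the mean-centering step are my own choices for the sketch, not details from our write-up.

```python
# Minimal sketch, assuming the images are equal-sized grayscale NumPy arrays.
import numpy as np

def eigenfaces(images, k):
    """Return the top-k eigenfaces for a list of same-shape grayscale images."""
    shape = images[0].shape
    # Each image becomes one (very long) column of the data matrix.
    A = np.column_stack([img.ravel().astype(float) for img in images])
    A -= A.mean(axis=1, keepdims=True)  # center the faces around the mean face
    # The left singular vectors with the largest singular values carry the
    # features shared across the faces; keep only the first k of them.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return [U[:, i].reshape(shape) for i in range(k)]
```

With k much smaller than the number of pixels, each face can then be described by just its k coordinates in this eigenface basis, which is where the dimension reduction comes from.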
Our "eigenones" example illustrates the key ideas of the SVD, although in the end it does not use complex values. The images are extremely pixelated; most real images have far more pixels, which is exactly where dimension reduction becomes useful.
Still, we fed four images of "handwritten" ones through the SVD, which produced this first eigenone (the first principal component). This eigenone clearly captures the elements that the original handwritten ones have in common.
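A toy version of that computation might look like the following; the four 5-by-3 binary patterns here are stand-ins I made up for the sketch, not our actual images.

```python
import numpy as np

# Four tiny, made-up "handwritten" ones (1 = dark pixel, 0 = background).
ones = [
    np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0], [0, 1, 0], [1, 1, 1]]),
    np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    np.array([[0, 0, 1], [0, 1, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1]]),
    np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]),
]

# Stack the flattened images as columns and decompose the 15 x 4 matrix.
A = np.column_stack([img.ravel().astype(float) for img in ones])
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# The first left singular vector, reshaped back to 5 x 3, is the first
# eigenone; its overall sign is arbitrary, but its largest entries sit where
# the four handwritten ones overlap.
first_eigenone = U[:, 0].reshape(5, 3)
print(np.round(first_eigenone, 2))
```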
Sources
(1) B.K. Tripathi. On the complex domain deep machine learning for face recognition. Springer Science+Business Media, 47:382–396, 2017.
(2) Gilbert Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, 2016.
[Image: Eigenfaces (Strang, 2016).]
[Image: Four very pixelated images of "handwritten" ones.]
[Image: The first eigenone, the first principal component.]
[Photo: Sathya, Storey, and me at the Math for All conference.]