Projects and Research

Eigenfaces

Principal Component Analysis (PCA) is a statistical technique for analyzing biometric data, and it is particularly useful in studies of facial recognition and deep machine learning.

 

With Dr. Tim Tribone and our research group members, Storey Peacock and Sathya Tadinada, I investigated and summarized a good example of an application in which a Singular Value Decomposition (SVD) of a complex-valued matrix was more useful than one over the real numbers (1).

 

A key point of this particular SVD is that the matrix to be decomposed is quite large, because it contains the grayscale pixel values of each image (and, often, there are many images). The SVD can be considered a dimension reduction algorithm because it condenses this information into a much more manageable form. The important pieces of information - in this case, the significant features common throughout the images of human faces - appear in the leading singular vectors of the decomposition. It's like magic! One can see why this algorithm is so useful for things like facial recognition.
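As a rough sketch of how this kind of pipeline is typically set up (this is not code from our project; it uses NumPy, and the image size and the random stand-in images below are only placeholders), each grayscale image is flattened into one column of a data matrix, the mean image is subtracted, and the leading left singular vectors of the SVD are reshaped back into images, the eigenfaces:

```python
import numpy as np

def eigenfaces(images, k=5):
    """Compute the first k eigenfaces from equally sized grayscale images."""
    h, w = images[0].shape
    # Each image becomes one column of the data matrix A (pixels x images).
    A = np.column_stack([img.astype(float).ravel() for img in images])
    # Center the data by subtracting the mean face from every column.
    mean_face = A.mean(axis=1, keepdims=True)
    A_centered = A - mean_face
    # Thin SVD: columns of U are the principal directions in pixel space.
    U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)
    # Reshape the leading left singular vectors back into images.
    return [U[:, i].reshape(h, w) for i in range(min(k, U.shape[1]))], S

# Example with random stand-in "images" (64x64), just to show the shapes.
rng = np.random.default_rng(0)
faces = [rng.random((64, 64)) for _ in range(10)]
components, singular_values = eigenfaces(faces, k=3)
print(components[0].shape, singular_values[:3])
```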


Our example, using "eigenones," illustrates the key ideas of the SVD, although in the end we did not use complex values. The images are extremely pixelated; most real images have many more pixels, which is why dimension reduction is so useful.

 

Still, we fed four images of "handwritten" ones through the SVD, which produced the first eigenone (the first principal component) shown below. This eigenone clearly compiles the elements common to the original handwritten ones.
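Here is a minimal sketch of that computation. The four tiny "ones" below are made-up stand-ins for our actual pixelated images, but the steps are the same: stack each flattened image as a column, take the SVD, and reshape the first left singular vector into the first eigenone.

```python
import numpy as np

# Four hypothetical 5x3 "handwritten" ones (1 = dark pixel, 0 = light).
ones_images = [
    np.array([[0,1,0],[1,1,0],[0,1,0],[0,1,0],[1,1,1]]),
    np.array([[0,0,1],[0,1,1],[0,0,1],[0,0,1],[0,0,1]]),
    np.array([[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0]]),
    np.array([[1,1,0],[0,1,0],[0,1,0],[0,1,0],[1,1,1]]),
]

# Stack each flattened image as a column of the data matrix A.
A = np.column_stack([img.ravel().astype(float) for img in ones_images])

# SVD of A; the first left singular vector is the first principal component.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
first_eigenone = U[:, 0].reshape(5, 3)
print(np.round(first_eigenone, 2))
```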

​

Sources

(1) B.K. Tripathi. On the complex domain deep machine learning for face recognition. Springer Science+Business Media, 47:382–396, 2017.

(2) Gilbert Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, 2016.

eigenfaces.png

Eigenfaces (Strang, 2016).

IMG_4466_edited.jpg

Sathya, Storey, and me at the Math for All conference.

ones.png

 Four very pixelated images of "handwritten" ones. 

eigenone1.png

The first eigenone. This is the first principal component. 


Visualizing the Hessian

I became much more familiar with MATLAB through this project with Dr. Chuck Dorval in the Biomedical Engineering Department at the University of Utah. I started this project while I was taking Calculus III and Physics, and this work allowed me to better understand vector and matrix visualizations.

​

Considering a 3-dimensional region of the brain in which constant current is delivered, we can observe the voltage at every point and the corresponding electric field vectors. Current visualization tools in the field let neuro-engineers view this only rather naively. I worked to create a more detailed model that accurately represents the electric field in 3D space, such that the eigenvectors and eigenvalues of the Hessian matrix of the voltage function at each point exactly correspond to the size and shape of the glyph drawn there. This model would help neuroscientists, especially those studying neurological disorders such as epilepsy, better understand the optimal placement of contacts during deep brain stimulation treatments.
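The project itself was written in MATLAB; as a rough NumPy sketch of the core idea (the point-source voltage formula, the conductivity value, and the evaluation point below are illustrative assumptions, not values from the project), one can estimate the Hessian of the voltage at a point by finite differences and read off its eigenvalues and eigenvectors, which are what set the size and orientation of each glyph:

```python
import numpy as np

def voltage(p, source=np.zeros(3), I=1.0, sigma=0.3):
    """Point-source voltage V = I / (4*pi*sigma*r) in a homogeneous medium."""
    r = np.linalg.norm(p - source)
    return I / (4.0 * np.pi * sigma * r)

def hessian(f, p, h=1e-4):
    """Estimate the 3x3 Hessian of a scalar field f at point p by central differences."""
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

# A point of interest near the source (units here are arbitrary).
p = np.array([1.0, 0.5, 0.25])
H = hessian(voltage, p)

# Eigenvalues/eigenvectors of the (symmetric) Hessian: in the visualization,
# these determine the length of the glyph's primary axis and the torus radius.
eigvals, eigvecs = np.linalg.eigh(H)
print(eigvals)
print(eigvecs)
```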

​

It would be nice to go back and make this code much more efficient and sophisticated, and plot my figures over a CT scan.

​

zoom1.jpg

Other Interest: Philosophy of Science

I wrote a paper about mathematics for a philosophy course at the University of Utah, discussing the place mathematics holds within the world of science.

More Coming Soon!

untitled.jpg

The glyph. This was created with the intention of overlaying many glyphs on a medical scan of the brain. The length of the primary axis corresponds to the largest eigenvalue of the Hessian matrix; if an axon lay along this area, it would be hyperpolarized. The radius of the torus corresponds to the secondary eigenvector and eigenvalue; if an axon lay here, it would be depolarized. I recall the (not-quite) torus shape being difficult to perfect: we needed the inner part to be exactly linear and the outer edge to be round, in order to leave a space showing where hypothetical axons would not be polarized.

untitled.jpg

The electric field. I visualized the less sophisticated model we intended to improve upon. Here, the source of current is at (0,0,0).
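The original figure was made in MATLAB; a minimal NumPy/Matplotlib stand-in for that simple model might look like the following, with the grid and constants chosen arbitrarily and the current source placed at the origin.

```python
import numpy as np
import matplotlib.pyplot as plt

# Coarse 3D grid around the current source at (0, 0, 0); the grid values are
# chosen so that no point lands exactly on the singular source location.
vals = np.linspace(-1.0, 1.0, 4)
X, Y, Z = np.meshgrid(vals, vals, vals)
R = np.sqrt(X**2 + Y**2 + Z**2)

# Radial field E ~ r_hat / r^2 for a point source (constants folded into 1).
Ex, Ey, Ez = X / R**3, Y / R**3, Z / R**3

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.quiver(X, Y, Z, Ex, Ey, Ez, length=0.3, normalize=True)
ax.set_title("Electric field of a point current source at the origin")
plt.show()
```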

zoom0.jpg

I toyed with plotting many figures and rotating each of them individually. The axes with which each figure aligns would eventually be determined by the current sources, and these placements would provide information about the polarization of axons, depending on the location of the current source.
