ECE 2390/1390 - Fall 2018 Image Processing and Computer Vision
Homework Assignment 6: PCA and Face
Recognition
Due Saturday, December 8, 2018 at 09:59 am EST
Description and dataset instructions
In lectures we discussed PCA for dimensionality reduction and its application to face recognition. The basic
idea is that the first few eigenvectors contain most of the variance of the data. We discussed a dimensionality
trick to quickly solve for the eigenvectors of the data covariance matrix. For this problem set you will
implement the necessary elements to compute the eigenfaces and apply different classification algorithms.
For this assignment, we are going to use the AT&T labs face dataset (Download tar or zip file). There are ten
different images of each of 40 distinct subjects. The size of each image is 92x112 pixels, with 256 grey levels
per pixel. The images are organized in 40 directories (one for each subject), which have names of the form sX,
where X indicates the subject number (between 1 and 40). In each of these directories, there are ten different
images of that subject, which have names of the form Y.pgm, where Y is the image number for that subject
(between 1 and 10).
Extract the dataset to the directory ‘input/all’, then write a function/script that randomly picks 8 images from
each subject folder for training. Save these images under the directory ‘input/train’. You should get a total of
320 images in this directory (you shouldn’t reuse the original file names, otherwise you may get conflicts;
choose a proper naming scheme). The two images per subject that weren’t selected for training should be saved
under ‘input/test/sX’. The directory ‘input/test’ should now contain 40 subfolders with 2 images in each
subfolder. A sketch of one possible split script is shown below.
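For reference, here is a minimal Python sketch of one possible split; the directory names follow the assignment, but the function name and the ‘sX_Y.pgm’ naming scheme for training files are our own choices.

```python
import os
import random
import shutil

def split_dataset(src='input/all', train='input/train', test='input/test',
                  n_train=8, seed=0):
    """Randomly copy 8 images per subject to train, the rest to test/sX."""
    random.seed(seed)                                   # reproducible split
    os.makedirs(train, exist_ok=True)
    for subject in sorted(os.listdir(src)):             # s1 ... s40
        sub_dir = os.path.join(src, subject)
        if not os.path.isdir(sub_dir):
            continue
        images = sorted(f for f in os.listdir(sub_dir) if f.endswith('.pgm'))
        picked = set(random.sample(images, n_train))    # 8 training images
        for img in images:
            if img in picked:
                # prefix with the subject name to avoid file-name conflicts
                dst = os.path.join(train, f'{subject}_{img}')
            else:
                dst = os.path.join(test, subject, img)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy(os.path.join(sub_dir, img), dst)

split_dataset()
```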
What to submit
Download and unzip the ps6 folder: ps6.zip
Rename it to ps6_xxxx_LastName_FirstName (i.e. ps6_matlab_LastName_FirstName or
ps6_python_LastName_FirstName) and add in your solutions:
ps6_xxxx_LastName_FirstName/
● input/ - input images, videos or other data supplied with the problem set
● output/ - directory containing output images and other files your code generates
● ps6.m or ps6.py - code for completing each part, esp. function calls; all functions themselves must be
defined in individual function files with filename same as function name, as indicated
● *.m or *.py - Matlab/Octave function files (one function per file), Python modules, any utility code
● ps6_report.pdf - a PDF file with all output images and text responses
Zip it as ps6_xxxx_LastName_FirstName.zip, and submit on courseweb.
Guidelines
1. Include all the required images in the report to avoid penalty.
2. Include all the textual responses, outputs and data structure values (if asked) in the report.
3. Make sure you submit the correct (and working) version of the code.
4. Include your name and ID on the report.
5. Comment your code appropriately.
6. Late submissions are not acceptable.
7. Plagiarism is prohibited as outlined in the Pitt Guidelines on Academic Integrity.
Questions
1. PCA analysis
In this part, you will implement the PCA algorithm for face images. You will write code to
compute the mean image and solve for the eigenfaces.
a. Write code to read all the images in the training directory. Reshape each image so that it is
represented as one column vector. Construct a matrix T whose columns are the training images.
The size of T should be 10304 x 320. A sketch is shown after the output below.
Output: The gray-level image showing the values of T as ps6-1-a.png
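A possible Python sketch for building T, assuming the ‘input/train’ layout produced by the split script above (PIL and NumPy are used here, but any image library works):

```python
import os
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

train_dir = 'input/train'
files = sorted(os.listdir(train_dir))
# Each 92x112 image loads as a 112x92 array and flattens to a 10304-vector.
T = np.column_stack([
    np.asarray(Image.open(os.path.join(train_dir, f)), dtype=np.float64).ravel()
    for f in files
])
print(T.shape)                                 # expected: (10304, 320)

os.makedirs('output', exist_ok=True)
plt.imshow(T, cmap='gray', aspect='auto')
plt.savefig('output/ps6-1-a.png')
```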
b. Write a function that computes the average face vector m. The average face vector is the
10304x1 mean vector computed across the columns of T. A minimal sketch follows the outputs below.
Output: Reshape m to 92x112 and display the resultant image (the mean face) as ps6-1-b.png
Output (textual response):
- Describe your results.
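Continuing the sketch above, the mean face is one line in NumPy; the 112x92 reshape assumes images are stored row-major with 112 rows and 92 columns:

```python
m = T.mean(axis=1)                             # 10304-vector: mean across columns of T

plt.figure()
plt.imshow(m.reshape(112, 92), cmap='gray')    # back to the image grid
plt.savefig('output/ps6-1-b.png')
```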
c. Write a function called ‘PCA_analysis’ that takes the training matrix T as an input and
returns the eigenfaces of T and the associated eigenvalues. Recall that, according to the convention in
our class slides, M = 320 and d = 10304. A sketch of the trick appears after the outputs below.
i. Find the centered data matrix A by subtracting the mean vector from each column of T.
ii. Define the data covariance matrix $C = \frac{1}{M} A A^T$.
iii. Use the dimensionality trick to compute the first 320 eigenvectors and eigenvalues of C. You
may use toolboxes to find the eigenvalues and eigenvectors (e.g. the eig function in MATLAB).
However, using eig(C) directly is not permitted; you must use the dimensionality trick. If you want,
you may also learn how to use the SVD here (it’s not that hard; we have seen what it does!).
Output:
- image of the covariance matrix as ps6-1-c-1.png
- image of the first 8 eigen-faces (you will need to reshape the eigenvectors) in one figure as ps6-1-c-2.png
Output (textual response):
- Describe your results and comment on the eigen-faces you obtained.
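A hedged sketch of ‘PCA_analysis’ using the dimensionality trick; the 1/M normalization of the covariance matrix is an assumption and may differ from the class slides:

```python
import numpy as np

def PCA_analysis(T):
    """Eigenfaces of T via the dimensionality trick.

    Rather than eigendecompose the huge d x d matrix C = A A^T / M,
    we decompose the small M x M matrix L = A^T A / M: if L v = lam v,
    then C (A v) = lam (A v), so A v is an eigenvector of C.
    """
    m = T.mean(axis=1, keepdims=True)
    A = T - m                                   # centered data, d x M
    M = T.shape[1]
    L = (A.T @ A) / M                           # M x M surrogate matrix
    eigvals, V = np.linalg.eigh(L)              # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # re-sort descending
    eigvals, V = eigvals[order], V[:, order]
    U = A @ V                                   # map back to d dimensions
    U /= np.linalg.norm(U, axis=0)              # unit-norm eigenfaces
    return U, eigvals
```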
d. Recall that the eigenvalues $\lambda_i$ represent how much variance is retained by the corresponding
eigenvector, and that the first eigenvector captures the maximum variance. Now we are going to
learn how to decide how many eigenfaces are enough to represent the variance in our training
set. To do so, we define

$$v_k = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{M} \lambda_i}$$

where M is the total number of eigenvectors and $v_k$ is the percentage of variance captured by the
first k eigenvectors. Compute the values of $v_k$ and determine the minimum number of
eigenvectors, k, needed to capture at least 95% of the variance in the training data (i.e., the minimum
value of k such that $v_k \geq 0.95$). Save those k dominant eigenvectors in a basis matrix U. Those
vectors will be used as the basis for PCA dimensionality reduction (i.e., each image will be represented
as a weighted sum of those vectors). U now defines the reduced eigen-face space. A sketch follows the outputs below.
Output: The plot of $v_k$ vs k as ps6-1-d.png
Output (textual response):
- The number of eigenvectors, k, that captures 95% of the variance in the training data
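Continuing the sketch, $v_k$ and the 95% cutoff can be computed from the eigenvalues returned by PCA_analysis:

```python
U, eigvals = PCA_analysis(T)
v = np.cumsum(eigvals) / np.sum(eigvals)       # v_k for k = 1 ... 320
k = int(np.searchsorted(v, 0.95)) + 1          # smallest k with v_k >= 0.95
U_k = U[:, :k]                                 # reduced eigen-face basis

plt.figure()
plt.plot(np.arange(1, len(v) + 1), v)
plt.xlabel('k')
plt.ylabel('v_k')
plt.savefig('output/ps6-1-d.png')
```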
2. Feature extraction for face recognition
In order for us to use different classification techniques, we need to extract some features. For face
recognition, we are going to use the image representation in the reduced eigen-face space as our feature
vector. Any image I can now be represented in the new space as

$$w_i = u_i^T (I - m), \quad i = 1, \ldots, k,$$

where m is the mean vector from 1.b, $u_i$ is the $i$-th column of the basis matrix U, and
$w = [w_1, \ldots, w_k]^T$ is the reduced representation of the image in the reduced eigen-face space.
Compare the size of the image vector I to the size of the vector w: clearly, there is a great deal of
dimensionality reduction. As a matrix multiplication, w for one image can be computed as
$w = U^T (I - m)$; remember, I and m are now vectors, not 2D matrices. A one-function sketch is shown below.
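A minimal sketch of the projection; ‘project’ is our own helper name, and U_k denotes the k-column basis from 1.d:

```python
def project(I, m, U_k):
    """w = U_k^T (I - m): I is a flattened 10304-vector, U_k is d x k."""
    return U_k.T @ (I - m)
```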
a. Project all the images in the training folder into the new reduced eigen-face space, i.e., find w for
each image in the training folder. Construct a matrix W_training where each row corresponds
to one reduced training image. W_training is the training features matrix. Hint: keep track of
which subject corresponds to which row; this will define your labels vector (you will need that
later to train a classifier).
b. Project all the images in the testing folder into the new reduced eigen-face space, i.e., find w for
each image in the testing folder. Construct a matrix W_testing where each row corresponds
to one reduced testing image. W_testing is the testing features matrix. Hint: keep track of which
subject corresponds to which row; this will define your true_class vector (you will need that later
to compute the accuracy of your classifier). A sketch of building both feature matrices follows the outputs below.
Output (textual response):
- The dimensions of W_training and W_testing
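A sketch of building both feature matrices, reusing m, U_k, and project from the earlier snippets; the label parsing assumes the ‘sX_Y.pgm’ naming scheme from the split script and should be adapted to your own layout:

```python
def load_vec(path):
    """Flatten one image file into a 10304-vector."""
    return np.asarray(Image.open(path), dtype=np.float64).ravel()

# Training features: one row per image, label parsed from 'sX_Y.pgm'.
train_files = sorted(os.listdir('input/train'))
W_training = np.vstack([project(load_vec(os.path.join('input/train', f)),
                                m, U_k) for f in train_files])
labels = np.array([int(f.split('_')[0][1:]) for f in train_files])

# Testing features: walk the input/test/sX subfolders.
test_rows, true_class = [], []
for subject in sorted(os.listdir('input/test')):
    sub_dir = os.path.join('input/test', subject)
    for f in sorted(os.listdir(sub_dir)):
        test_rows.append(project(load_vec(os.path.join(sub_dir, f)), m, U_k))
        true_class.append(int(subject[1:]))
W_testing, true_class = np.vstack(test_rows), np.array(true_class)

print(W_training.shape, W_testing.shape)       # expected: (320, k), (80, k)
```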
3. Face recognition
Next you’ll use the training features to train KNN and SVM classifiers and test the resultant classifiers using
the testing features. For this section, you may use available packages for K-NN and SVM.
a. Train a KNN classifier using the W_training matrix and the associated labels vector. Test your classifier
using the samples in W_testing. Use K = 1, 3, 5, 7, 9, and 11 nearest neighbors. Compare the output of
the classifier to the true_class and compute the classifier accuracy. Accuracy can be defined as the
number of correctly classified samples divided by the total number of testing samples. A sketch appears after this part.
Output (textual response):
- Table of your KNN classifier accuracy at the different values of K listed above.
- Comment on and discuss your results.
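One way to run the experiment, sketched with scikit-learn’s KNeighborsClassifier (any equivalent K-NN package is fine):

```python
from sklearn.neighbors import KNeighborsClassifier

for k_nn in (1, 3, 5, 7, 9, 11):
    knn = KNeighborsClassifier(n_neighbors=k_nn)
    knn.fit(W_training, labels)
    accuracy = (knn.predict(W_testing) == true_class).mean()
    print(f'K = {k_nn:2d}: accuracy = {accuracy:.3f}')
```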
b. Using the W_training matrix and the associated labels vector, train three SVM classifiers. Each classifier
must use a different kernel; use linear, 3rd-order polynomial, and Gaussian RBF
kernels respectively. Since we have more than two classes, use the one-vs-all approach to build
your multi-class classifiers. Compare the output of each classifier to the true_class and compute the
accuracy of each classifier. A sketch follows the outputs below.
Output (textual response):
- table listing the accuracy of the SVM classifiers under the different kernels
- comment on and discuss your results
- compare the performance of the KNN and SVM classifiers
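A sketch using scikit-learn; SVC is wrapped in OneVsRestClassifier to get the required one-vs-all scheme, and kernel hyperparameters are left at library defaults (you may need to tune them):

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

kernels = {
    'linear':         SVC(kernel='linear'),
    '3rd-order poly': SVC(kernel='poly', degree=3),
    'Gaussian RBF':   SVC(kernel='rbf'),
}
for name, base in kernels.items():
    clf = OneVsRestClassifier(base).fit(W_training, labels)
    accuracy = (clf.predict(W_testing) == true_class).mean()
    print(f'{name}: accuracy = {accuracy:.3f}')
```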