
DeepLearning.ai-Week4-Face Recognition for the Happy House


1 - Task

  • Implement the triplet loss function
  • Use a pretrained model to map face images into 128-dimensional encodings
  • Use these encodings to perform face verification and face recognition

2 - Import Packages

from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)

3 - Naive Face Verification

  In the face verification task, we need to decide whether the faces in two images belong to the same person. The simplest approach is to compare the images pixel by pixel: if the total distance falls below some threshold, declare them the same person. This performs very poorly, since raw pixel values change dramatically with lighting, head orientation, and other variations. Instead, we can learn a model that extracts a feature vector (an encoding) from a face image, and compare the distance between encodings to decide whether two faces belong to the same person.
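  As a rough illustration, naive pixel-level verification might look like the sketch below (the cv2-based loading and the threshold value are assumptions chosen for illustration; this is exactly the approach that performs poorly):

# A minimal sketch of naive pixel-level verification (illustrative only;
# assumes both images exist and have identical dimensions).
def naive_verify(path_a, path_b, threshold=30000.0):
    img_a = cv2.imread(path_a).astype(np.float64)
    img_b = cv2.imread(path_b).astype(np.float64)
    dist = np.linalg.norm(img_a - img_b)  # L2 distance over raw pixels
    return dist, dist < threshold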


4 - Encoding face images into a 128-dimensional vector

4.1 - Using a ConvNet to compute encodings

  Note that:

  • The network takes as input a batch of images of shape $(m, 3, 96, 96)$
  • The network outputs a matrix of shape $(m, 128)$; each input image is encoded as a 128-dimensional vector
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
Result:
Total Params: 3743280
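
  For example, encoding a single image should produce one 128-dimensional row, which can be checked as in the sketch below (img_to_encoding from fr_utils is assumed to handle resizing and the channels-first layout):

# Sketch: encode one image and inspect the output shape.
embedding = img_to_encoding("images/younes.jpg", FRmodel)
print(embedding.shape)  # expected: (1, 128)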


  A good encoding should satisfy the following properties:

  • The encodings of two images of the same person are quite similar to each other
  • The encodings of two images of different persons are very different

  The triplet loss function formalizes these properties: minimizing it pulls the encodings of two images of the same person closer together and pushes the encodings of two images of different persons further apart.
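
  The sketch below illustrates these two properties numerically (the encodings here are random stand-ins, not real model outputs):

# Illustrative stand-in encodings (random vectors, not produced by the model).
enc_anchor    = np.random.randn(128)
enc_same      = enc_anchor + 0.05 * np.random.randn(128)  # same person: a nearby point
enc_different = np.random.randn(128)                      # different person: an unrelated point

print(np.linalg.norm(enc_anchor - enc_same))       # small distance
print(np.linalg.norm(enc_anchor - enc_different))  # large distance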


4.2 - The Triplet Loss


  To train the neural network that computes the encodings (the mapping $f$ above), we need triplets of images $(A, P, N)$:

  • A is an "Anchor" image--a picture of a person.
  • P is a "Positive" image--a picture of the same person as the Anchor image.
  • N is a "Negative" image--a picture of a different person than the Anchor image.

  We would like them to satisfy the relation below:

$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$

  So the "triplet cost" we want to minimize is:

$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$

  Here "$[z]_+$" denotes $\max(z, 0)$. Term (1) is the squared distance between $A$ and $P$, which we want to be small; term (2) is the squared distance between $A$ and $N$, which we want to be large (hence the minus sign in front of it). Implementing the triplet loss takes four steps:
  • Compute the distance between the encodings of "anchor" and "positive": $\begin{Vmatrix} f(A^{(i)}) - f(P^{(i)}) \end{Vmatrix}_2^2$
  • Compute the distance between the encodings of "anchor" and "negative": $\begin{Vmatrix} f(A^{(i)}) - f(N^{(i)}) \end{Vmatrix}_2^2$
  • Compute the formula per training example: $\begin{Vmatrix} f(A^{(i)}) - f(P^{(i)}) \end{Vmatrix}_2^2 - \begin{Vmatrix} f(A^{(i)}) - f(N^{(i)}) \end{Vmatrix}_2^2 + \alpha$
  • Compute the full formula by taking the max with zero and summing over the training examples:$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$
# GRADED FUNCTION: triplet_loss

def triplet_loss(y_true, y_pred, alpha = 0.2):
    """
    Implementation of the triplet loss as defined by formula (3)
    
    Arguments:
    y_true -- true labels, required when you define a loss in Keras; you don't need it in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)
    
    Returns:
    loss -- real number, value of the loss
    """
    
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    
    ### START CODE HERE ### (≈ 4 lines)
    # Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # Step 3: subtract the two previous distances and add alpha.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    ### END CODE HERE ###
    
    return loss
with tf.Session() as test:
    tf.set_random_seed(1)
    y_true = (None, None, None)
    y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
              tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
              tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
    loss = triplet_loss(y_true, y_pred)
    
    print("loss = " + str(loss.eval()))
Result:
loss = 528.143
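
  As a quick sanity check, the same computation can be mirrored in NumPy on concrete arrays (a sketch; triplet_loss_np is a hypothetical helper, not part of the assignment):

# NumPy mirror of triplet_loss, for checking values on fixed arrays.
def triplet_loss_np(anchor, positive, negative, alpha=0.2):
    pos_dist = np.sum(np.square(anchor - positive), axis=-1)  # per-example ||f(A)-f(P)||^2
    neg_dist = np.sum(np.square(anchor - negative), axis=-1)  # per-example ||f(A)-f(N)||^2
    return np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))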

5 - Loading the trained model

  Because training FaceNet requires a large amount of data and computation, this assignment does not train the model from scratch; instead we load a pretrained model.

FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)


6 - Applying the model

6.1 - Face Verification

  Use $img\_to\_encoding(image\_path, model)$ to build a database of face encodings; only a person whose face matches one of these entries passes verification.

database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)

  Implement the $verify()$ function to check whether a person passes verification, in the following steps:

  • Compute the encoding of the image from image_path
  • Compute the distance between this encoding and the encoding of the identity image stored in the database
  • Open the door if the distance is less than 0.7, else do not open.
# GRADED FUNCTION: verify

def verify(image_path, identity, database, model):
    """
    Function that verifies if the person on the "image_path" image is "identity".
    
    Arguments:
    image_path -- path to an image
    identity -- string, name of the person whose identity you'd like to verify. Has to be a resident of the Happy house.
    database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors).
    model -- your Inception model instance in Keras
    
    Returns:
    dist -- distance between the image_path and the image of "identity" in the database.
    door_open -- True, if the door should open. False otherwise.
    """
    
    ### START CODE HERE ###
    
    # Step 1: Compute the encoding for the image. Use img_to_encoding(); see the example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    
    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding-database[identity])
    
    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False
        
    ### END CODE HERE ###
        
    return dist, door_open
verify("images/camera_0.jpg", "younes", database, FRmodel)

verify("images/camera_2.jpg", "kian", database, FRmodel)
 

6.2 - Face Recognition

  Implementing the $who\_is\_it()$ function involves the following steps:

  • Compute the target encoding of the image from image_path
  • Find the encoding from the database that has smallest distance with the target encoding.
    • Initialize the min_dist variable to a large enough number (100). It will help you keep track of the closest encoding to the input's encoding.
    • Loop over the database dictionary's names and encodings. To loop, use for (name, db_enc) in database.items().
      • Compute L2 distance between the target "encoding" and the current "encoding" from the database.
      • If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
# GRADED FUNCTION: who_is_it

def who_is_it(image_path, database, model):
    """
    Implements face recognition for the happy house by finding who is the person on the image_path image.
    
    Arguments:
    image_path -- path to an image
    database -- database containing image encodings along with the name of the person on the image
    model -- your Inception model instance in Keras
    
    Returns:
    min_dist -- the minimum distance between image_path encoding and the encodings from the database
    identity -- string, the name prediction for the person on image_path
    """
    
    ### START CODE HERE ### 
    
    ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding(); see the example above. ## (≈ 1 line)
    encoding = img_to_encoding(image_path, model)
    
    ## Step 2: Find the closest encoding ##
    
    # Initialize "min_dist" to a large value, say 100 (≈1 line)
    min_dist = 100
    
    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        
        # Compute L2 distance between the target "encoding" and the current "db_enc" from the database. (≈ 1 line)
        dist = np.linalg.norm(encoding-db_enc)

        # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
        if dist < min_dist:
            min_dist = dist
            identity = name

    ### END CODE HERE ###
    
    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print ("it‘s " + str(identity) + ", the distance is " + str(min_dist))
        
    return min_dist, identity
who_is_it("images/camera_0.jpg", database, FRmodel)

7 - Summary

  Although we did not implement and train the neural network ourselves, the following two approaches can improve the accuracy of the algorithm:

  • Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then, given a new image, compare the new face to multiple pictures of the person, as in the sketch after this list. This would increase accuracy. (enlarge the dataset)
  • Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face and also makes the algorithm more robust. (reduce noise)
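
  The first suggestion might look like the sketch below: store several encodings per person and verify against the closest one (multi_database, verify_multi, and the reuse of the 0.7 threshold are illustrative assumptions):

# Sketch: several enrollment encodings per person; verify against the closest one.
multi_database = {
    "younes": [img_to_encoding(p, FRmodel)
               for p in ["images/younes.jpg"]],  # append more photos of each person here
}

def verify_multi(image_path, identity, database, model, threshold=0.7):
    encoding = img_to_encoding(image_path, model)
    dist = min(np.linalg.norm(encoding - enc) for enc in database[identity])
    return dist, dist < threshold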

8 - References

https://web.stanford.edu/class/cs230/
