
The open-source face recognition library face_recognition

Software and hardware environment

  • ubuntu 18.04 64bit
  • GTX 1070Ti
  • anaconda with python 3.6
  • face_recognition 1.2.3
  • dlib
  • opencv

Introduction to face_recognition

face_recognition claims to be the world's simplest Python-based face recognition library. It is built on top of the well-known machine learning library dlib, and the dlib model it uses reaches 99.38% accuracy on the LFW (Labeled Faces in the Wild) benchmark. face_recognition also ships with command-line tools, so you can run face recognition over a folder of images straight from the shell, which is very handy.

Installing face_recognition

You can install it with pip:

pip install face_recognition

Or install it from source:

git clone https://github.com/ageitgey/face_recognition.git
cd face_recognition
python setup.py install

The face_recognition workflow

Finding faces

The first step is to locate all of the faces contained in a given image.

import face_recognition

image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
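
face_locations returns one bounding box per detected face, as a (top, right, bottom, left) tuple in pixel coordinates. A minimal sketch of walking through the result:

for top, right, bottom, left in face_locations:
    # Each tuple is the pixel bounding box of one detected face
    print("Face found at top: {}, right: {}, bottom: {}, left: {}".format(top, right, bottom, left))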

Finding facial landmarks

Locate each person's eyes, nose, mouth, and chin.

import face_recognition

image = face_recognition.load_image_file("your_file.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
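
face_landmarks returns one dict per face. Each dict maps a feature name ('chin', 'left_eye', 'nose_tip', and so on) to a list of (x, y) points. A small sketch of iterating over it:

for face_landmarks in face_landmarks_list:
    for feature_name, points in face_landmarks.items():
        # points is the list of (x, y) coordinates tracing this feature
        print("{}: {} points".format(feature_name, len(points)))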

Performing recognition

The last, and most important, step is to work out whose face it is. face_recognition uses the Euclidean distance between face encodings (see my other article at https://xugaoxiang.com/2019/11/30/euclidean-distance/) to decide whether two faces belong to the same person.

import face_recognition

known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
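
results is a list of booleans, one per known encoding. compare_faces also takes a tolerance parameter (default 0.6; smaller is stricter), and face_distance returns the raw distances if you prefer to threshold them yourself. A brief sketch:

# Stricter comparison: only accept matches within a distance of 0.5
results = face_recognition.compare_faces([biden_encoding], unknown_encoding, tolerance=0.5)

# Or inspect the raw Euclidean distances directly
distances = face_recognition.face_distance([biden_encoding], unknown_encoding)
print(distances)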

Command-line tools

face_recognition comes with two command-line tools:

  • face_detection – finds the faces in a given image or folder of images
  • face_recognition – performs face recognition

Run either tool with --help to see its arguments. face_detection is straightforward, so it is not covered here.

Using the face_recognition command-line tool

To recognize faces, first prepare the known people. Store them in a separate folder, say known, with each person's picture named after that person, e.g. Joe Biden.jpg, Kobe.jpg.

Then put the images to be recognized in another folder, say unknown, and run the following command:

face_recognition known unknown

By default, without setting a threshold, the recognition accuracy is very low. In practice you need to tune the threshold to your own data; in the same test environment, setting the similarity threshold to 0.38 made the results correct.

face_recognition known unknown --tolerance 0.38

In the output, unknown_person means the face does not match anyone in the known folder.
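
To help choose a sensible threshold, the tool can also print the computed distance for each match; as far as I recall this is the --show-distance flag, e.g.:

face_recognition --show-distance true --tolerance 0.38 known unknown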

Another useful flag is --cpus. On a multi-core CPU it can speed up recognition considerably; --cpus=-1 uses all available cores.
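
For example, using every core with the folder layout above:

face_recognition --cpus=-1 known unknown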

The KNN classifier

KNN (K-Nearest Neighbors) classifies samples by measuring the distance between feature vectors. The idea is: if most of the k samples most similar to a given sample (i.e. its nearest neighbors in feature space) belong to a certain class, then that sample belongs to that class too, where k is usually an integer no larger than 20. In KNN, the neighbors used are samples that have already been correctly labeled, and the class of a new sample is decided only from the classes of its nearest one or few neighbors.
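
As a toy illustration of the idea (the 2-D points below are made up and have nothing to do with face encodings), scikit-learn's KNeighborsClassifier does exactly this kind of neighbor voting:

from sklearn import neighbors

# Made-up 2-D feature vectors and their labels
X = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]]
y = ["alice", "alice", "bob", "bob"]

clf = neighbors.KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)

# The new point sits next to the "bob" samples, so it is classified as "bob"
print(clf.predict([[0.95, 1.05]]))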

In a real project you usually keep the known-face data in a classifier, which makes it easier to manage. face_recognition's example code uses a KNN-based classifier for this.

Training the classifier

The code below comes from examples/face_recognition_knn.py; its comments are already detailed, so it is not walked through line by line. In my tests, the more images you have of the same person, the higher the recognition accuracy.

import math
import os
import pickle

import face_recognition
from face_recognition.face_recognition_cli import image_files_in_folder
from sklearn import neighbors

# Image extensions accepted by predict() below
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}


def train(train_dir, model_save_path=None, n_neighbors=None, knn_algo='ball_tree', verbose=False):
    """
    Trains a k-nearest neighbors classifier for face recognition.

    :param train_dir: directory that contains a sub-directory for each known person, with its name.

     (View in source code to see train_dir example tree structure)

     Structure:
        <train_dir>/
        ├── <person1>/
        │   ├── <somename1>.jpeg
        │   ├── <somename2>.jpeg
        │   ├── ...
        ├── <person2>/
        │   ├── <somename1>.jpeg
        │   └── <somename2>.jpeg
        └── ...

    :param model_save_path: (optional) path to save model on disk
    :param n_neighbors: (optional) number of neighbors to weigh in classification. Chosen automatically if not specified
    :param knn_algo: (optional) underlying data structure to support knn. Default is ball_tree
    :param verbose: verbosity of training
    :return: returns knn classifier that was trained on the given data.
    """
    X = []
    y = []

    # Loop through each person in the training set
    for class_dir in os.listdir(train_dir):
        if not os.path.isdir(os.path.join(train_dir, class_dir)):
            continue

        # Loop through each training image for the current person
        for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
            image = face_recognition.load_image_file(img_path)
            face_bounding_boxes = face_recognition.face_locations(image)

            if len(face_bounding_boxes) != 1:
                # If there are no people (or too many people) in a training image, skip the image.
                if verbose:
                    print("Image {} not suitable for training: {}".format(img_path, "Didn't find a face" if len(face_bounding_boxes) < 1 else "Found more than one face"))
            else:
                # Add face encoding for current image to the training set
                X.append(face_recognition.face_encodings(image, known_face_locations=face_bounding_boxes)[0])
                y.append(class_dir)

    # Determine how many neighbors to use for weighting in the KNN classifier
    if n_neighbors is None:
        n_neighbors = int(round(math.sqrt(len(X))))
        if verbose:
            print("Chose n_neighbors automatically:", n_neighbors)

    # Create and train the KNN classifier
    knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance')
    knn_clf.fit(X, y)

    # Save the trained KNN classifier
    if model_save_path is not None:
        with open(model_save_path, 'wb') as f:
            pickle.dump(knn_clf, f)

    return knn_clf

Using the classifier

def predict(X_img_path, knn_clf=None, model_path=None, distance_threshold=0.6):
    """
    Recognizes faces in given image using a trained KNN classifier

    :param X_img_path: path to image to be recognized
    :param knn_clf: (optional) a knn classifier object. if not specified, model_save_path must be specified.
    :param model_path: (optional) path to a pickled knn classifier. if not specified, knn_clf must be supplied.
    :param distance_threshold: (optional) distance threshold for face classification. the larger it is, the more chance
           of mis-classifying an unknown person as a known one.
    :return: a list of names and face locations for the recognized faces in the image: [(name, bounding box), ...].
        For faces of unrecognized persons, the name 'unknown' will be returned.
    """
    if not os.path.isfile(X_img_path) or os.path.splitext(X_img_path)[1][1:] not in ALLOWED_EXTENSIONS:
        raise Exception("Invalid image path: {}".format(X_img_path))

    if knn_clf is None and model_path is None:
        raise Exception("Must supply knn classifier either thourgh knn_clf or model_path")

    # Load a trained KNN model (if one was passed in)
    if knn_clf is None:
        with open(model_path, 'rb') as f:
            knn_clf = pickle.load(f)

    # Load image file and find face locations
    X_img = face_recognition.load_image_file(X_img_path)
    X_face_locations = face_recognition.face_locations(X_img)

    # If no faces are found in the image, return an empty result.
    if len(X_face_locations) == 0:
        return []

    # Find encodings for faces in the test image
    faces_encodings = face_recognition.face_encodings(X_img, known_face_locations=X_face_locations)

    # Use the KNN model to find the best matches for the test face
    closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=1)
    are_matches = [closest_distances[0][i][0] <= distance_threshold for i in range(len(X_face_locations))]

    # Predict classes and remove classifications that aren't within the threshold
    return [(pred, loc) if rec else ("unknown", loc) for pred, loc, rec in zip(knn_clf.predict(faces_encodings), X_face_locations, are_matches)]
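
A minimal sketch of how the two functions fit together (the folder and file names are placeholders following the layout described in the docstring above):

# Train once on the known faces and save the classifier to disk
classifier = train("knn_examples/train", model_save_path="trained_knn_model.clf", n_neighbors=2)

# Later, recognize faces in a new image using the saved model
for name, (top, right, bottom, left) in predict("knn_examples/test/someone.jpg", model_path="trained_knn_model.clf"):
    print("Found {} at ({}, {})".format(name, left, top))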

CUDA acceleration

If you want to push performance further, you need a GPU. Since face_recognition depends on dlib, you first have to build dlib with CUDA support; see my other article at https://xugaoxiang.com/2019/12/13/ubuntu-cuda/. After that, pass model="cnn" to face_locations to run the CNN-based detector on the GPU:

import face_recognition

image = face_recognition.load_image_file("my_picture.jpg")
face_locations = face_recognition.face_locations(image, model="cnn")

# face_locations is now an array listing the co-ordinates of each face!
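
On a GPU, the CNN model can also detect faces in many images at once with batch_face_locations, which is usually much faster than calling face_locations image by image. A small sketch (the file names are placeholders, and the images should all have the same dimensions):

import face_recognition

# Load several frames (placeholder file names)
frames = [face_recognition.load_image_file("frame1.jpg"),
          face_recognition.load_image_file("frame2.jpg")]

# Detect faces in all frames in one GPU batch (CNN model only)
batch_of_face_locations = face_recognition.batch_face_locations(frames, batch_size=2)
for frame_number, face_locations in enumerate(batch_of_face_locations):
    print("Frame {}: found {} face(s)".format(frame_number, len(face_locations)))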

Working with a webcam

Here OpenCV does the capturing: it reads each frame from the camera, the frame is resized and converted from BGR (the color order OpenCV uses) to RGB (the order face_recognition expects), and then face detection and recognition are run on it.

import face_recognition
import cv2

video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
known_face_names = [
    "Barack Obama",
    "Joe Biden"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # If a match was found in known_face_encodings, just use the first one.
            if True in matches:
                first_match_index = matches.index(True)
                name = known_face_names[first_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame


    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

Local video files and network cameras (RTSP streams) are handled almost exactly the same way as a local webcam, so that code is not repeated here.
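
The only real change is the source passed to cv2.VideoCapture; for example (the file name and RTSP URL below are placeholders):

# Local video file instead of a webcam
video_capture = cv2.VideoCapture("my_video.mp4")

# Or a network camera via an RTSP stream
# video_capture = cv2.VideoCapture("rtsp://user:password@192.168.1.64:554/stream")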

References