
Deploy your own TensorFlow object detection model to AWS DeepLens

In this blog post, we’ll show you how to deploy a TensorFlow object detection model to AWS DeepLens. This enables AWS DeepLens to perform real-time object detection using the built-in camera. Object detection is the technique by which machines identify and locate different objects in an image or video, and it is one of the most active topics in deep learning. For example, in autonomous driving, the camera on the vehicle first needs to detect objects on the road (such as people, cars, and signs) before the vehicle can make any decision to turn, slow down, or stop.

To deploy your own TensorFlow model, you first need to build a model with a supported base network. AWS DeepLens currently supports most of the popular base networks, such as Inception, MobileNet, NASNet, ResNet, and VGG. TensorFlow’s detection model zoo, available in its public GitHub repository, offers many pre-trained models built on these networks. These pre-trained models can already identify many objects, such as people and cars, so you can use them out of the box. You can also extend these pre-trained models by training against your own dataset to identify additional objects.

To keep this blog post easy to follow and relevant, you will learn to deploy one of the pre-built models from the model zoo that can detect up to 90 common objects, such as a bicycle, car, airplane, bus, or person. Keep in mind that the deployment process is the same if you build and deploy your own custom model with one of the supported base networks mentioned previously.

Implementation steps

The following sections walk you through the implementation steps in detail.

Prerequisites

  1. Make sure to register your AWS DeepLens device before you begin. You can follow the step-by-step registration guide in the AWS DeepLens documentation.
  2. At the time of writing, TensorFlow 1.5 is not installed on the device for Python 2.7 by default. Use SSH to connect to the device and run the following command to install it:
    sudo pip install tensorflow==1.5.0
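    To confirm the installation, you can check the version from the device; it should print 1.5.0:

    python -c "import tensorflow as tf; print(tf.__version__)"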

Download a pre-built TensorFlow object detection model from the model zoo

  1. Download a model from the model zoo, then untar the archive and copy the file named frozen_inference_graph.pb to a separate folder. This frozen graph is the model.
  2. On your workstation, download the label map for the model (mscoco_label_map.pbtxt) from the following link to the folder where the frozen_inference_graph.pb file is located: https://github.com/tensorflow/models/tree/master/research/object_detection/data
  3. The label map translates the model’s numeric output into a human-readable name. For example, if the model returns the number 2, the detected object is a bicycle. (An excerpt of the label map format appears after this list.)
  4. Tar and gzip frozen_inference_graph.pb and mscoco_label_map.pbtxt together. On Linux or macOS, the command would be similar to the following:
    tar -czvf tfmodel.tar.gz frozen_inference_graph.pb mscoco_label_map.pbtxt

    On the Windows platform, you can use third-party software such as 7-Zip to put them together, or use the cross-platform Python sketch that follows this list.
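For reference, each entry in mscoco_label_map.pbtxt has the following shape. This excerpt shows the entry that maps class id 2 to bicycle (the name field is an internal identifier):

    item {
      name: "/m/0199g"
      id: 2
      display_name: "bicycle"
    }

If you’d rather not rely on platform-specific tools, Python’s standard tarfile module builds the same archive on Linux, macOS, or Windows. A minimal sketch, assuming both files are in the current directory:

    import tarfile

    # Create the gzip-compressed archive containing the model and label map
    with tarfile.open('tfmodel.tar.gz', 'w:gz') as tar:
        tar.add('frozen_inference_graph.pb')
        tar.add('mscoco_label_map.pbtxt')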

Upload the tar file to Amazon S3

In this section, you will upload the tar file to Amazon S3 so the AWS DeepLens service can deploy it to the DeepLens device for local inference.

  1. Open the Amazon S3 console.
  2. Create an Amazon S3 bucket in the Northern Virginia Region with a name that contains the term “deeplens”. The AWS DeepLens default role only has permission to access buckets whose names contain “deeplens”. For example, you can name it deeplens-tfmodel-yourinitials.
  3. After the bucket is created, upload the tar file to the bucket.
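If you prefer to script this step, here is a short boto3 sketch; the bucket name deeplens-tfmodel-yourinitials is the placeholder from step 2, so substitute your own:

    import boto3

    # Upload the model archive to the DeepLens-accessible bucket in us-east-1
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.upload_file('tfmodel.tar.gz', 'deeplens-tfmodel-yourinitials', 'tfmodel.tar.gz')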

Creating an AWS Lambda function

In this section, you’ll create a custom Lambda function that will be deployed as part of the AWS DeepLens deployment. This Lambda function contains the code that loads the custom TensorFlow model (which the AWS DeepLens service copies to the device from the S3 bucket mentioned previously) and performs local inference (object detection) without connecting back to the AWS Cloud.

  1. Open the AWS Lambda console.
  2. Create a new function from a blueprint. Search for AWS Greengrass and select the Python blueprint.
  3. Give the function a meaningful name and select AWSDeepLensLambdaRole from the Existing role drop-down list. Then create the function. Note: If you do not see this role, make sure that you have registered your AWS DeepLens device; the role is created automatically during the registration process.
  4. In the function editor, copy and paste the following code into greengrassHelloWorld.py. Note: Do not rename the file or the function handler; leave everything at the default.
    # Import packages
    import os
    import numpy as np
    import tensorflow as tf
    import greengrasssdk
    from threading import Timer
    import awscam
    
    # Creating a greengrass core sdk client
    client = greengrasssdk.client('iot-data')
    
    # The information exchanged between the device and AWS IoT has
    # a topic and a message body.
    # This is the topic that this code uses to send messages to the cloud.
    iotTopic = '$aws/things/{}/infer'.format(os.environ['AWS_IOT_THING_NAME'])
    modelPath = "/opt/awscam/artifacts"
    
    # Path to frozen detection graph .pb file, which contains the model that is used
    # for object detection.
    PATH_TO_CKPT = os.path.join(modelPath,'frozen_inference_graph.pb')
    
    # List of the strings that is used to add correct label for each box.
    PATH_TO_LABELS = os.path.join(modelPath, 'mscoco_label_map.pbtxt')
    
    def greengrass_infinite_infer_run():
        try:
            # Load the TensorFlow model into memory.
            detection_graph = tf.Graph()
            with detection_graph.as_default():
                od_graph_def = tf.GraphDef()
                with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
                    serialized_graph = fid.read()
                    od_graph_def.ParseFromString(serialized_graph)
                    tf.import_graph_def(od_graph_def, name='')
    
                sess = tf.Session(graph=detection_graph)
                
            client.publish(topic=iotTopic, payload="Model loaded")
            
            tensor_dict = {}
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            for key in ['num_detections', 'detection_boxes', 'detection_scores','detection_classes']:
                tensor_name = key + ':0'
                tensor_dict[key] = detection_graph.get_tensor_by_name(tensor_name)
            # Load the label map that translates class ids into display names
            label_dict = {}
            with open(PATH_TO_LABELS, 'r') as f:
                current_id = ''
                for line in (s.strip() for s in f):
                    if line.startswith('id:'):
                        current_id = line.split(':', 1)[1].strip()
                        label_dict[current_id] = ''
                    elif line.startswith('display_name:'):
                        label_dict[current_id] = line.split(':', 1)[1].replace('"', '').strip()
    
            client.publish(topic=iotTopic, payload="Start inferencing")
            while True:
                ret, frame = awscam.getLastFrame()
                if not ret:
                    raise Exception("Failed to get frame from the stream")
                expanded_frame = np.expand_dims(frame, 0)
                # Perform the actual detection by running the model with the image as input
                output_dict = sess.run(tensor_dict, feed_dict={image_tensor: expanded_frame})
                scores = output_dict['detection_scores'][0]
                classes = output_dict['detection_classes'][0]
                # Keep only detections with a confidence score above 50%
                msg = '{'
                for idx, val in enumerate(scores):
                    if val > 0.5:
                        msg += '"{}": {:.2f},'.format(label_dict[str(int(classes[idx]))], val*100)
                msg = msg.rstrip(',')
                msg +='}'
                
                client.publish(topic=iotTopic, payload = msg)
                
        except Exception as e:
            msg = "Test failed: " + str(e)
            client.publish(topic=iotTopic, payload=msg)
    
        # Asynchronously schedule this function to be run again in 15 seconds
        Timer(15, greengrass_infinite_infer_run).start()
    
    
    # Execute the function above
    greengrass_infinite_infer_run()
    
    
    # This is a dummy handler and will not be invoked
    # Instead the code above will be executed in an infinite loop for our example
    def function_handler(event, context):
        return

  5. Save the Lambda function.
  6. To ensure code integrity and consistency, AWS DeepLens only works with the published version of a Lambda function. To publish a Lambda function, choose Actions and select Publish new version.
  7. Give a description to the published version and choose Publish.
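Before deploying, you may want to sanity-check the label-map parsing used in the function above on your workstation. This sketch applies the same logic to a local copy of mscoco_label_map.pbtxt and looks up class id 2:

    # Parse id/display_name pairs the same way the Lambda function does
    label_dict = {}
    with open('mscoco_label_map.pbtxt', 'r') as f:
        current_id = ''
        for line in (s.strip() for s in f):
            if line.startswith('id:'):
                current_id = line.split(':', 1)[1].strip()
                label_dict[current_id] = ''
            elif line.startswith('display_name:'):
                label_dict[current_id] = line.split(':', 1)[1].replace('"', '').strip()

    print(label_dict.get('2'))  # expect: bicycle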

Setting up an AWS DeepLens project and deploying a custom TensorFlow object detection model

  1. Open the AWS DeepLens console.
  2. Make sure your AWS DeepLens device is registered before you begin, as described in the Prerequisites section.
  3. Make sure TensorFlow 1.5 is installed on the device for Python 2.7. Installation instructions are in the Prerequisites section.
  4. Make sure you are in the same region where you created the S3 bucket. At the time of writing, AWS DeepLens is only available in the Northern Virginia Region.
  5. In the AWS DeepLens console, in the left navigation panel, choose Models. Then choose Import model.
  6. On the Import model page, choose Externally trained model. In Model artifact path, enter the S3 path to the tar file you uploaded earlier. Give the model a meaningful name and description, and make sure to choose TensorFlow as the model framework. Choose Import model.
  7. In the left navigation panel, choose Project. Then choose Create new project.
  8. Create a new blank project.
  9. Give your project a meaningful name and description, and then choose Add model.
  10. Select the model that you imported earlier, and then choose Add model.
  11. On the project detail page, choose Add function.
  12. Choose the Lambda function you created earlier, then choose Add function.
  13. Choose Create to create the project. On the project page, select the new project that you just set up, and choose Deploy to device.
  14. Select the AWS DeepLens device that you want to deploy the project to, choose Review, and then choose Deploy. The deployment process takes a few minutes because the AWS DeepLens device needs to download the model tar file from Amazon S3, along with the Lambda function, so that both can run locally on the device.
  15. The status pane at the top of the screen displays deployment progress. Wait until it turns green and indicates that the deployment succeeded.
  16. In the Device details pane, copy the MQTT topic, and then navigate to the AWS IoT console.
  17. In the AWS IoT console, in the left navigation pane choose Test, paste the MQTT topic copied in the previous step, then choose Subscribe to topic.
  18. Detected objects and their prediction confidence score are sent in real time through MQTT to the AWS IoT platform.
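Given how the Lambda function formats its message, each payload maps a detected label to its confidence as a percentage. A representative payload (the values shown are illustrative) looks like the following:

    {"person": 97.24, "bicycle": 61.05}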

Conclusion

In this blog post, you learned how to deploy a custom TensorFlow object detection model to AWS DeepLens, enabling it to perform real-time object detection using the built-in camera. As you can see, it’s possible to use AWS services to build some very powerful AI solutions. Why not take your skills to the next level by integrating AWS DeepLens with other Amazon services, such as Alexa?

About the Authors

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.

Wayne Davis is an Enterprise Solutions Architect for Amazon Web Services. Over the last 24 months, he has been helping customers come up to speed on cloud technologies as quickly as possible.