
Jetson Nano Developer Kit


  
  The Jetson Nano Developer Kit is an AI computer for learning and for making.
  
jetson-inference is an inference framework for deploying models to embedded devices.
  
Four Steps to Deep Learning:

1. System Setup
2. Image Recognition
3. Object Detection
4. Segmentation
  
  CUDA
  
A parallel computing technology.
  
Writing your own image recognition program.
  
  https://github.com/dusty-nv/jetson-inference/tree/master/examples/my-recognition
  
```cpp
/*
 * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

// include imageNet header for image recognition
#include <jetson-inference/imageNet.h>

// include loadImage header for loading images
#include <jetson-utils/loadImage.h>

// include cstdio for printf()
#include <cstdio>

// main entry point
int main( int argc, char** argv )
{
	// a command line argument containing the image filename is expected,
	// so make sure we have at least 2 args (the first arg is the program)
	if( argc < 2 )
	{
		printf("my-recognition:  expected image filename as argument\n");
		printf("example usage:   ./my-recognition my_image.jpg\n");
		return 0;
	}

	// retrieve the image filename from the array of command line args
	const char* imgFilename = argv[1];

	// these variables will be used to store the image data and dimensions
	// the image data will be stored in shared CPU/GPU memory, so there are
	// pointers for the CPU and GPU (both reference the same physical memory)
	float* imgCPU    = NULL;    // CPU pointer to floating-point RGBA image data
	float* imgCUDA   = NULL;    // GPU pointer to floating-point RGBA image data
	int    imgWidth  = 0;       // width of the image (in pixels)
	int    imgHeight = 0;       // height of the image (in pixels)

	// load the image from disk as float4 RGBA (32 bits per channel, 128 bits per pixel)
	if( !loadImageRGBA(imgFilename, (float4**)&imgCPU, (float4**)&imgCUDA, &imgWidth, &imgHeight) )
	{
		printf("failed to load image '%s'\n", imgFilename);
		return 0;
	}

	// load the GoogleNet image recognition network with TensorRT
	// you can use imageNet::ALEXNET to load AlexNet model instead
	imageNet* net = imageNet::Create(imageNet::GOOGLENET);

	// check to make sure that the network model loaded properly
	if( !net )
	{
		printf("failed to load image recognition network\n");
		return 0;
	}

	// this variable will store the confidence of the classification (between 0 and 1)
	float confidence = 0.0;

	// classify the image with TensorRT on the GPU (hence we use the CUDA pointer)
	// this will return the index of the object class that the image was recognized as (or -1 on error)
	const int classIndex = net->Classify(imgCUDA, imgWidth, imgHeight, &confidence);

	// make sure a valid classification result was returned
	if( classIndex >= 0 )
	{
		// retrieve the name/description of the object class index
		const char* classDescription = net->GetClassDesc(classIndex);

		// print out the classification results
		printf("image is recognized as '%s' (class #%i) with %f%% confidence\n",
		       classDescription, classIndex, confidence * 100.0f);
	}
	else
	{
		// if Classify() returned < 0, an error occurred
		printf("failed to classify image\n");
	}

	// free the network's resources before shutting down
	delete net;

	// this is the end of the example!
	return 0;
}
```
  
Loading the image: loadImageRGBA

The loaded image will be stored in shared memory that's mapped to both the CPU and GPU. There are two pointers available for access in the CPU and GPU address spaces, but there is really only one copy of the image in memory. Both the CPU and GPU pointers resolve to the same physical memory, without needing to perform memory copies (i.e. cudaMemcpy()).
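The same zero-copy pattern can be sketched directly with the CUDA runtime's mapped host memory. This is an illustrative sketch of the mechanism (using the real cudaHostAlloc/cudaHostGetDevicePointer APIs), not the actual jetson-utils implementation, and it requires a CUDA toolchain and device to build and run:

```cpp
#include <cuda_runtime.h>
#include <cstddef>

int main()
{
	const int imgWidth  = 1920;    // example dimensions, not taken from a real image
	const int imgHeight = 1080;
	const size_t size   = (size_t)imgWidth * imgHeight * sizeof(float4);

	float* imgCPU  = NULL;    // host-side pointer
	float* imgCUDA = NULL;    // device-side alias of the same allocation

	// allocate pinned host memory that is mapped into the GPU address space
	cudaHostAlloc((void**)&imgCPU, size, cudaHostAllocMapped);

	// obtain the device pointer that aliases the same physical buffer;
	// no cudaMemcpy() is ever needed between imgCPU and imgCUDA
	cudaHostGetDevicePointer((void**)&imgCUDA, imgCPU, 0);

	cudaFreeHost(imgCPU);
	return 0;
}
```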
  
Loading the neural network model: imageNet::Create()

GOOGLENET is a pretrained model whose training dataset is ImageNet (the dataset, not the imageNet class). It covers 1000 classes, including animals, plants, and common everyday objects.
  
```cpp
// load the GoogleNet image recognition network with TensorRT
// you can use imageNet::ALEXNET to load AlexNet model instead
imageNet* net = imageNet::Create(imageNet::GOOGLENET);

// check to make sure that the network model loaded properly
if( !net )
{
	printf("failed to load image recognition network\n");
	return 0;
}
```
  
Classifying the image

Classify() returns the index corresponding to the recognized class.

```cpp
// this variable will store the confidence of the classification (between 0 and 1)
float confidence = 0.0;

// classify the image with TensorRT on the GPU (hence we use the CUDA pointer)
// this will return the index of the object class that the image was recognized as (or -1 on error)
const int classIndex = net->Classify(imgCUDA, imgWidth, imgHeight, &confidence);
```
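Conceptually, Classify() runs the network with TensorRT and then takes the arg-max over the per-class output scores. A minimal plain-C++ sketch of that final step (the argmaxClass helper is hypothetical, not part of the jetson-inference API):

```cpp
#include <cstddef>

// Return the index of the largest score, or -1 if the array is empty.
// This mirrors what a classifier's Classify() conceptually does after
// the network has produced one score per class.
int argmaxClass( const float* scores, size_t numClasses, float* confidence )
{
	if( numClasses == 0 )
		return -1;

	int best = 0;

	for( size_t n = 1; n < numClasses; n++ )
	{
		if( scores[n] > scores[best] )
			best = (int)n;
	}

	// report the winning score as the confidence, if requested
	if( confidence != NULL )
		*confidence = scores[best];

	return best;
}
```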
  
Interpreting the result

```cpp
// make sure a valid classification result was returned
if( classIndex >= 0 )
{
	// retrieve the name/description of the object class index
	const char* classDescription = net->GetClassDesc(classIndex);

	// print out the classification results
	printf("image is recognized as '%s' (class #%i) with %f%% confidence\n",
	       classDescription, classIndex, confidence * 100.0f);
}
else
{
	// if Classify() returned < 0, an error occurred
	printf("failed to classify image\n");
}
```
  
  These descriptions of the 1000 classes are parsed from ilsvrc12_synset_words.txt when the network gets loaded (this file was previously downloaded when the jetson-inference repo was built).
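Each line of that file is a synset ID followed by a human-readable description (e.g. `n01440764 tench, Tinca tinca`). A minimal sketch of parsing one such line in plain C++ (an illustrative parser, not the loader jetson-inference actually uses):

```cpp
#include <string>

// Split one synset line into its ID (first token) and description (the rest).
// Returns false if the line has no description.
bool parseSynsetLine( const std::string& line, std::string& id, std::string& desc )
{
	// the ID ends at the first space; everything after it is the description
	const size_t space = line.find(' ');

	if( space == std::string::npos || space + 1 >= line.size() )
		return false;

	id   = line.substr(0, space);
	desc = line.substr(space + 1);
	return true;
}
```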
  
Shutdown

Release resources before the program exits.

```cpp
// free the network's resources before shutting down
delete net;

// this is the end of the example!
return 0;
}
```
  
The CMake file

```cmake
# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)

# declare my-recognition project
project(my-recognition)

# import jetson-inference and jetson-utils packages.
# note that if you didn't do "sudo make install"
# while building jetson-inference, this will error.
find_package(jetson-utils)
find_package(jetson-inference)

# CUDA and Qt4 are required
find_package(CUDA)
find_package(Qt4)

# setup Qt4 for build
include(${QT_USE_FILE})
add_definitions(${QT_DEFINITIONS})

# compile the my-recognition program
cuda_add_executable(my-recognition my-recognition.cpp)

# link my-recognition to jetson-inference library
target_link_libraries(my-recognition jetson-inference)
```
  
Nothing particularly special here; the main dependencies are:

```cmake
find_package(jetson-utils)
find_package(jetson-inference)
target_link_libraries(my-recognition jetson-inference)
```
  
Real-time image recognition

The code above recognizes a local image file; this section introduces a demo that recognizes images captured live from a camera.
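A minimal sketch of such a capture-and-classify loop, assuming the gstCamera class from jetson-utils (method names such as Create(), Open(), and CaptureRGBA() change between jetson-inference releases, so treat this as pseudocode and check your version's headers; it only builds on a Jetson with the library installed):

```cpp
#include <jetson-utils/gstCamera.h>
#include <jetson-inference/imageNet.h>
#include <cstdio>

int main()
{
	// open the default camera and load the recognition network
	gstCamera* camera = gstCamera::Create(1280, 720);
	imageNet*  net    = imageNet::Create(imageNet::GOOGLENET);

	if( !camera || !net || !camera->Open() )
		return 1;

	while( true )
	{
		float* imgRGBA = NULL;

		// capture the next frame as float4 RGBA in shared CPU/GPU memory
		if( !camera->CaptureRGBA(&imgRGBA, 1000) )
			continue;

		// classify the frame, just like the single-image example above
		float confidence = 0.0f;
		const int classIndex = net->Classify(imgRGBA, camera->GetWidth(),
		                                     camera->GetHeight(), &confidence);

		if( classIndex >= 0 )
			printf("recognized '%s' (%.2f%%)\n",
			       net->GetClassDesc(classIndex), confidence * 100.0f);
	}

	delete camera;
	delete net;
	return 0;
}
```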