
Using AWS IoT for Predictive Maintenance

The interest in machine learning for industrial and manufacturing use cases on the edge is growing. Manufacturers need to know when a machine is about to fail so they can better plan for maintenance. For example, as a manufacturer, you might have a machine that is sensitive to various temperature, velocity, or pressure changes. When these changes occur, they might indicate a failure.

Prediction, sometimes referred to as inference, requires machine-learning (ML) models built from large amounts of data for each component of the system. The model is based on a specified algorithm that represents the relationships between the values in the training data. You use these ML models to evaluate new data from the manufacturing system in near real time. A predicted failure exists when evaluating new data with the ML model indicates a statistical match with a failure condition for a piece of equipment in the system.

Typically, an ML model is built for each type of machine or sub-process using its unique data and features. This leads to an expansive set of ML models that represents each of the critical machines in the manufacturing process and the different types of predictions desired. Although an ML model supports inference on new data sent to the AWS Cloud, you can also perform the inference on premises, where latency is much lower. This enables near-real-time evaluation of the data. Performing local inference also saves costs related to the transfer of what could be massive amounts of data to the cloud.

The AWS services used to build and train ML models for automated deployment to the edge make the process highly scalable and easy to do. You collect data from the machines or infrastructure that you want to make predictions on and build ML models using AWS services in the cloud. Then you transfer the ML models back to the on-premises location where they are used with a simple AWS Lambda function to evaluate new data sent to a local server running AWS Greengrass.

AWS Greengrass lets you run local compute, messaging, ML inference, and more. It includes a lightweight IoT broker that you run on your own hardware close to the connected equipment. The broker communicates securely with many IoT devices and is a gateway to AWS IoT Core where selected data can be further processed. AWS Greengrass can also execute AWS Lambda functions to process or evaluate data locally without an ongoing need to connect to the cloud.

Building ML models
You need to build and train ML models before you start maintenance predictions. A high-level ML process to build and train models applies to most use cases and is relatively easy to implement with AWS IoT.

Start by collecting supporting data for the ML problem that you are trying to solve and temporarily send it to AWS IoT Core. This data should be from the machine or system associated with each ML model. A dedicated AWS Direct Connect connection between the on-premises location of the machines and AWS IoT Core supports high-volume data rates. Depending on the volume of data you are sending to the cloud, you might need to stagger the data collection for your machines (that is, work in batches).
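The staggered, batched collection described above can be sketched as follows. This is a minimal illustration, not AWS sample code: the machine ID, topic name, and payload fields are assumptions, and the actual MQTT publish (via the AWS IoT Device SDK) is only indicated in a comment.

```python
import json
import time

def batch_readings(readings, batch_size):
    """Group sensor readings into fixed-size batches for staggered upload."""
    return [readings[i:i + batch_size] for i in range(0, len(readings), batch_size)]

def make_payload(machine_id, batch):
    """Serialize one batch as a JSON payload for an MQTT publish."""
    return json.dumps({
        "machine_id": machine_id,
        "collected_at": int(time.time()),
        "readings": batch,  # e.g. [{"temp": 71.2, "pressure": 3.1}, ...]
    })

# With an MQTT client from the AWS IoT Device SDK you would then publish
# each batch to a telemetry topic, for example:
#   client.publish("factory/press-7/telemetry", make_payload("press-7", batch), 1)
```

Batching like this lets you control how much data is in flight at once, which matters when many machines share one uplink.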

Alternatively, an AWS Snowball appliance can transfer large amounts of data to your private AWS account using a secure hardened storage device you ship with a package delivery service. The data is transferred from AWS Snowball to Amazon S3 buckets you designate in your account.

AWS IoT Analytics supports the efficient storage of data and pipeline processing to enrich and filter the data for later use in ML model building. It also supports feature engineering in the pipeline processing with custom AWS Lambda functions that you can write to derive new attributes to help classify the data. You can visualize the results of the pipeline processing in AWS IoT Analytics using Amazon QuickSight to validate any transformations or filters you apply.
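A feature-engineering step in the pipeline might look like the sketch below. It assumes the AWS IoT Analytics Lambda activity convention of receiving a batch of messages and returning the transformed batch; the attribute names (`temp`, `rated_temp`, `pressure`) and the pressure band are illustrative, not from the original article.

```python
def handler(event, context=None):
    """Lambda activity for an AWS IoT Analytics pipeline: receives a list
    of messages and returns the same list with derived attributes added."""
    for msg in event:
        # Derived feature: deviation from the machine's rated temperature.
        msg["temp_delta"] = msg.get("temp", 0.0) - msg.get("rated_temp", 0.0)
        # Flag readings outside an assumed normal pressure band, so a later
        # pipeline activity can filter on it.
        msg["pressure_ok"] = 2.0 <= msg.get("pressure", 0.0) <= 4.0
    return event
```

Derived attributes like these often separate the classes better than the raw signals do, which is the point of feature engineering before model training.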

Amazon SageMaker supports direct integration with AWS IoT Analytics as a data source. Jupyter notebook templates are provided to get you started quickly in building and training the ML model. For predictive maintenance use cases, linear regression and classification are the two most common algorithm families. There are many other algorithms to consider for time-series prediction, and you can try different ones and measure the effectiveness of each in your process. Also consider that AWS Greengrass ML Inference provides pre-built packages for Apache MXNet, TensorFlow, and Chainer that make deployment easier. Any of these frameworks simplifies the deployment process to AWS Greengrass, but you can use others with additional setup. For example, you could use the popular Python library scikit-learn to analyze data.
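To make the regression idea concrete, here is a dependency-free least-squares fit on synthetic data. In practice you would use SageMaker's built-in algorithms or scikit-learn rather than hand-rolled math; the vibration-vs-hours data is invented purely for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Synthetic training data: operating hours vs. measured vibration level.
hours = [10, 20, 30, 40, 50]
vibration = [1.1, 2.0, 3.1, 3.9, 5.0]
slope, intercept = fit_line(hours, vibration)
# A rising slope quantifies the wear trend the model would learn.
```

The real models are of course multivariate and far larger, but the fitting step, learning parameters that minimize error against historical data, is the same.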

Cost-optimized
Many users like the elasticity of the AWS Cloud combined with its pay-for-what-you-use pricing structure. When ML models are built and trained, or later retrained, large amounts of raw data are sent to AWS IoT Core, and you need substantial compute from Amazon SageMaker to speed the processing along. When the ML models are complete, you can archive the raw data to a lower-cost storage service such as Amazon Glacier, or delete it. The compute resources allocated for the training are also released, and costs decrease.

Deploying ML models to the edge
Running predictions locally requires real-time machine data, the ML model, and local compute resources to perform the inference. AWS Greengrass supports deploying ML models built with Amazon SageMaker to the edge, where an AWS Lambda function performs the inference. Identical machines can receive the same deployment package containing the ML model and inference Lambda function. This creates a low-latency solution with no dependency on AWS IoT Core to evaluate real-time data and, if required, send alerts or shutdown commands to the infrastructure.

Running local predictions
The AWS Lambda function linked to the ML model as part of the AWS Greengrass deployment configuration performs predictions in real time. The AWS Greengrass message broker routes selected data published on a designated MQTT topic to the AWS Lambda function to perform the inference. When an inference returns a high probability of a match, then multiple actions can be executed in the AWS Lambda function. For example, a shutdown command can be sent to a machine or, using either local or cloud messaging services, an alert can be sent to an operations team.
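A minimal sketch of such an inference function is shown below. It is not the article's actual code: the model is a stub standing in for one loaded from the Greengrass ML resource path, the 0.9 threshold and topic names are assumptions, and the publish call back through the Greengrass SDK is only indicated in a comment.

```python
import json

class StubModel:
    """Stand-in for a model bundle (e.g. MXNet or TensorFlow) loaded from
    the local path where Greengrass places the ML resource. predict()
    returns the probability of an imminent failure."""
    def predict(self, features):
        return 0.95 if features.get("temp", 0.0) > 90.0 else 0.05

MODEL = StubModel()

def lambda_handler(event, context=None):
    """Invoked by the Greengrass message broker for each message routed
    from the machine's telemetry topic."""
    features = json.loads(event) if isinstance(event, str) else event
    prob = MODEL.predict(features)
    if prob >= 0.9:
        # In a real function you would publish here, for example a shutdown
        # command to the machine's command topic and an alert to operations.
        return {"action": "shutdown", "probability": prob}
    return {"action": "none", "probability": prob}
```

Because the model file ships in the same deployment as this function, the evaluation happens entirely on the local server.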

For each ML model, you need to determine the inference-confidence threshold that equates to a predicted failure condition. For example, if an inference for a machine you are monitoring indicates a failure with high confidence (say, 90%), you would take appropriate action. However, if the confidence level is 30%, you might decide not to act on that result. You can use AWS IoT Core to publish inference results on a dedicated logging and reporting topic.
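The per-model threshold policy can be expressed as a small decision function. The two cut-offs mirror the 90% and 30% examples above; the intermediate "log for review" tier is an assumption added for illustration, since results worth recording are not always worth acting on.

```python
def decide(confidence, act_at=0.90, log_at=0.30):
    """Map an inference confidence to an action using the thresholds
    chosen for this particular ML model (values are illustrative)."""
    if confidence >= act_at:
        return "alert_and_shutdown"
    if confidence >= log_at:
        return "log_for_review"  # record on the logging topic, take no action
    return "ignore"
```

Keeping the thresholds as parameters makes it easy to tune them per model as you learn each machine's false-positive behavior.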

Another consideration for running inference locally is ensuring you have a large enough server or multiple servers to support the amount of compute required. Factors that influence hardware sizing include:

  • Number of machines being monitored (for example, 1 or 100 machines?)
  • Amount of data sent from each machine (for example, 50,000 or 1,000 bytes per message?)
  • Rate at which data is sent from each machine (for example, once a minute or every 10 milliseconds?)
  • CPU and memory demands of the ML model during inference (some models require more system resources and might benefit from GPUs)
  • Other processing occurring on the host, and whether any of those processes are resource-intensive
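A back-of-envelope calculation combining the first three factors gives a starting point for sizing. The figures below reuse the example values from the list; real sizing must also account for the inference cost per message.

```python
def ingest_rate_bytes_per_sec(machines, payload_bytes, msgs_per_sec):
    """Sustained ingest rate the local Greengrass server must handle."""
    return machines * payload_bytes * msgs_per_sec

# 100 machines, 1,000-byte payloads, one message every 10 ms (100 msg/s each):
rate = ingest_rate_bytes_per_sec(100, 1_000, 100)  # 10,000,000 B/s, i.e. ~10 MB/s
```

At that rate a single modest server may suffice for ingestion, but if each message also triggers model inference, CPU (or GPU) capacity, not network throughput, usually becomes the limiting factor.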

System architecture
The end-to-end architecture includes:

  • The collection of data to build and train a model.
  • The deployment of models back to the factory.
  • The evaluation of data to perform local inference.

AWS Greengrass supports accessing local resources and AWS IoT Core to help keep your manufacturing process up and running.

Testimonial: Environmental Monitoring Solutions sees 500% ROI by using AWS IoT
Environmental Monitoring Solutions specializes in solutions that help petrol retailers gather and analyze data on the performance of their petrol stations. By using AWS IoT to detect fuel leaks early and minimize environmental impact, the company achieved a 500% ROI. AWS IoT made it possible to connect sensors in the underground tanks and pumps of each petrol station and collect all data at 30-second intervals. The data is aggregated on cloud-computing infrastructure and displayed on a web-enabled interface in near real time.

According to Russell Dupuy, the company’s founder and managing director, “With our AWS IoT–enabled Fuelsuite solution, customers manage their petrol stations proactively rather than reactively… to dramatically improve efficiencies and detect fuel leaks early to minimize environmental impacts.”

See for yourself. Get started today using AWS IoT for predictive maintenance.

