With the advent of powerful yet inexpensive edge computers, visual processing tasks can now run directly on such devices. However, maintaining models across large numbers of devices, in multiple versions, is a challenging problem.
Microsoft IoT Edge allows cloud analytics and custom business logic to be deployed on devices, so that organizations can focus on business insights instead of data management. It enables a solution to scale by configuring IoT software, deploying it to devices via standard containers, and monitoring it all from the cloud. Microsoft has published the source code of the IoT Edge components.
Machine Learning Platform for IoT (MLPIOT) is a platform for developing smart sensors, built on top of Microsoft IoT Edge. Using the rich set of functionality that IoT Edge provides, MLPIOT offers an abstraction for a machine vision system, together with an ecosystem around that abstraction, making it easy to implement, test, train, deploy, upgrade, maintain, and retrain machine learning applications.
MLPIOT is designed in particular for implementing vision applications on embedded systems in manufacturing environments, where multiple instances of an application need to be installed, tuned, and maintained over a long period, although it can also be used in other scenarios. Its main features include:
- Machine learning application life-cycle management
- Supports edge management
- Training on edge
- Deployment on edge
- Remote maintenance
- Integration with factory standards such as OPC UA
- Tools for scaling machine learning applications across different settings
MLPIOT Abstraction Source Code
An MLPIOT application is defined by a mandatory processing pipeline and an optional training pipeline.
The processing pipeline is a life-cycle manager that initializes and manages the following components:
Scene Descriptor is the core of MLPIOT vision apps. It is responsible for interpreting a given image and describing it as a Scene Description, which is a list of annotated objects.
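As a rough illustration of this contract, the sketch below models a Scene Description as a list of annotated objects and a Scene Descriptor as something that produces one from an image. The field names (`label`, `confidence`, `bbox`) and class names are assumptions for illustration, not MLPIOT's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class AnnotatedObject:
    # Hypothetical annotation fields: a class label, a confidence
    # score, and a bounding box as (x, y, width, height).
    label: str
    confidence: float
    bbox: tuple


@dataclass
class SceneDescription:
    # Per the text above, a Scene Description is a list of
    # annotated objects found in the image.
    objects: List[AnnotatedObject] = field(default_factory=list)


class SceneDescriptor:
    """Interprets a given image and describes it as a SceneDescription."""

    def describe(self, image: Any) -> SceneDescription:
        # A real descriptor would run a vision model here; this stub
        # returns an empty description to keep the sketch self-contained.
        return SceneDescription()
```

A concrete application would subclass or replace `describe` with its own model inference code.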
As soon as a Scene Description is prepared, it is passed to an Event Extractor, which is responsible for analysing the scene and emitting events with standardized severity levels.
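An Event Extractor can be sketched as a rule that turns scene objects into events carrying a severity level. The severity values and the example rule (flagging any detected person) are assumptions chosen for illustration; MLPIOT's standardized levels and interfaces may differ.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import List


class Severity(IntEnum):
    # Hypothetical standardized severity levels.
    INFO = 0
    WARNING = 1
    CRITICAL = 2


@dataclass
class Event:
    severity: Severity
    message: str


class PersonDetectedExtractor:
    """Example extractor: emits a WARNING event for each 'person'
    object present in the scene description."""

    def extract(self, scene_objects: List[dict]) -> List[Event]:
        return [
            Event(Severity.WARNING,
                  f"person detected (confidence {obj['confidence']:.2f})")
            for obj in scene_objects
            if obj["label"] == "person"
        ]
```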
An application can be configured with different lists of Action Executors for different situations. Developers can write their own Action Executors or use the provided general-purpose ones, such as the Logger executor during development, or the OPC UA executor, which publishes events through that standard protocol.
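The executor arrangement described above can be sketched as a small dispatcher that fans each batch of events out to a configurable list of executors. Both class names and the `execute` method are assumptions for illustration; a real OPC UA executor would use an OPC UA client library instead of logging.

```python
import logging
from typing import Iterable, List


class LoggerActionExecutor:
    """General-purpose executor that logs events, useful while the
    application is being developed."""

    def __init__(self, logger: logging.Logger = None):
        self.logger = logger or logging.getLogger("mlpiot")

    def execute(self, events: Iterable) -> None:
        for event in events:
            self.logger.warning("event: %s", event)


class ActionDispatcher:
    """Passes each batch of events to every configured executor,
    mirroring the 'list of Action Executors' described above."""

    def __init__(self, executors: List):
        self.executors = executors

    def dispatch(self, events: Iterable) -> None:
        for executor in self.executors:
            executor.execute(events)
```

Swapping the executor list at configuration time is what lets the same application log locally during development and publish over OPC UA in production.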