Features
1 - Unified Namespace / Message Broker
The Unified Namespace is an event-driven architecture that allows for seamless communication between nodes in a network. It operates on the principle that all data, regardless of whether there is an immediate consumer, should be published and made available for consumption. This means that any node in the network can work as either a producer or a consumer, depending on the needs of the system at any given time.
To use any functionality of the United Manufacturing Hub, you need to use the Unified Namespace as well. More information can be found in our Learning Hub on the topic of the Unified Namespace.
When should I use it?
An application always consists of multiple building blocks. To connect those building blocks, one can either exchange data between them through databases, through service calls (such as REST), or through a message broker.
Opinion: We think for most applications in manufacturing, communication via a message broker is the best choice as it prevents spaghetti diagrams and allows for real-time data processing. For more information about this, you can check out this blog article.
In the United Manufacturing Hub, each single piece of information / “message” / “event” is sent through a message broker, which is also called the Unified Namespace.
What can I do with it?
The Unified Namespace / Message Broker in the United Manufacturing Hub provides several notable functionalities in addition to the features already mentioned:
- Easy integration using MQTT: Many modern shopfloor devices can send and receive data using the MQTT protocol.
- Easy integration with legacy equipment: Using tools like Node-RED, data can be easily extracted from various protocols such as Siemens S7, OPC-UA, or Modbus.
- Get notified in real-time via MQTT: The Unified Namespace allows you to receive real-time notifications via MQTT when new messages are published. This can be useful for applications that require near real-time processing of data, such as an AGV waiting for new commands.
- Retrieve past messages from Kafka logs: By looking into the Kafka logs, you can always be aware of the last messages that have been sent to a topic. This allows you to replay certain scenarios for troubleshooting or testing purposes.
- Efficiently process messages from millions of devices: The Unified Namespace is designed to handle messages from millions of devices in your factory, even over unreliable connections. By using Kafka, you can process each message with at-least-once semantics, ensuring that every message arrives one or more times (see the consumer sketch after this list).
- Trace messages through the system: The Unified Namespace provides tracing capabilities, allowing you to understand where messages are coming from and where they go. This can be useful for debugging and troubleshooting purposes. You can use the Management Console to visualize the flow of messages through the system.
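As an illustration of at-least-once processing, here is a minimal consumer sketch using the kafka-python client; the topic name and bootstrap server are assumptions and depend on your installation:

```python
# Minimal at-least-once consumer sketch (kafka-python). Offsets are committed
# only after a message has been processed, so a crash before the commit means
# the message is re-delivered, i.e., delivered one or more times.
from kafka import KafkaConsumer

def process(payload: bytes) -> None:
    print(payload.decode())  # placeholder for your real processing logic

consumer = KafkaConsumer(
    "ia.raw.plc01",                                           # hypothetical topic
    bootstrap_servers="united-manufacturing-hub-kafka:9092",  # assumption
    group_id="my-processor",
    enable_auto_commit=False,  # commit manually, only after processing
)
for message in consumer:
    process(message.value)
    consumer.commit()  # if we crash before this line, the message is re-delivered
```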
How can I use it?
Using the Unified Namespace is quite simple:
Configure your IoT devices and devices on the shopfloor to use the in-built MQTT broker of the United Manufacturing Hub by specifying the MQTT protocol, selecting the unencrypted (1883) or encrypted (8883) port depending on your configuration, and sending the messages into a topic starting with ia/raw. From there on, you can start processing the messages in Node-RED by reading them in again via MQTT or Kafka, adjusting the payload or the topic to match the UMH datamodel, and sending them back to MQTT or Kafka.
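For example, a device-side publisher could look like the following minimal Python sketch using paho-mqtt (the broker hostname, port, and the device name in the topic are assumptions; adjust them to your configuration):

```python
# Minimal publisher sketch (paho-mqtt 1.x API shown). Publishes a JSON payload
# to a topic under ia/raw, from where it can be picked up in Node-RED.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("united-manufacturing-hub-mqtt", 1883)  # unencrypted port; assumption

payload = {"timestamp_ms": 1693000000000, "temperature_c": 23.5}
client.publish("ia/raw/plc01", json.dumps(payload))  # hypothetical device name
client.disconnect()
```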
If you send the messages into other topics, some features might not work correctly (see also limitations).
Recommendation: Send messages from IoT devices via MQTT and then work in Kafka only.
What are the limitations?
- Messages are only bridged between MQTT and Kafka if they fulfill the following requirements:
  - The payload is valid JSON, or the message is sent to the ia/raw topic.
  - The message is sent to a topic matching the allowed topics in the UMH datamodel, independent of what is configured in the environment variables (will be changed soon).
  - The topic length is at most 249 characters, as this is a Kafka limitation.
  - Only the characters a-z, A-Z, _ and - are allowed in the topic.
  - The maximum message size for the mqtt-kafka-bridge is 0.95MB (1000000 bytes). If your messages are larger, we recommend using Kafka directly and not bridging via MQTT.
- Messages from MQTT to Kafka will be published under a different topic (see the sketch after this list):
  - Spaces will be removed
  - / characters will be replaced with a . (dot)
  - and vice versa
- By default, there is no authentication or authorization on the MQTT broker. You need to enable authentication and authorization yourself.
- The MQTT or Kafka broker is not exposed externally by default. You need to enable external MQTT access first, or alternatively expose Kafka externally.
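To illustrate the topic renaming described above, here is a simplified sketch (the actual mqtt-kafka-bridge is a UMH microservice; this Python sketch only mirrors the described behavior):

```python
# Illustrative sketch of the topic translation performed by the bridge
# (simplified; the real bridge handles more cases).
def mqtt_to_kafka_topic(mqtt_topic: str) -> str:
    return mqtt_topic.replace(" ", "").replace("/", ".")

def kafka_to_mqtt_topic(kafka_topic: str) -> str:
    return kafka_topic.replace(".", "/")

print(mqtt_to_kafka_topic("ia/raw/plc01"))  # -> "ia.raw.plc01"
print(kafka_to_mqtt_topic("ia.raw.plc01"))  # -> "ia/raw/plc01"
```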
Where to get more information?
- For more information about the involved microservices, please take a look at our architecture page.
- For more information about MQTT, Kafka, or the Unified Namespace, visit the Learning Hub.
- For more information about the reasons to use MQTT and Kafka, please take a look at our blog article Tools & Techniques for scalable data processing in Industrial IoT.
2 - Historian / Data Storage
The Historian / Data Storage feature in the United Manufacturing Hub provides reliable data storage and analysis for your manufacturing data. Essentially, a Historian is just another term for a data storage system, designed specifically for time-series data in manufacturing.
When should I use it?
If you want to reliably store data from your shop floor that does not need to fulfill legal requirements such as GxP, then the United Manufacturing Hub’s Historian feature is ideal. Open-Source databases such as TimescaleDB are superior to traditional historians in terms of reliability, scalability and maintainability, but can be challenging to use for the OT engineer. The United Manufacturing Hub fills this usability gap, allowing OT engineers to easily ingest, process, and store data permanently in an Open-Source database.
What can I do with it?
The Historian / Data Storage feature of the United Manufacturing Hub allows you to:
Store and analyze data
- Automatically store data from the processValue topics in the Unified Namespace. Data can be sent to the Unified Namespace from various sources, allowing you to store tags from your PLC and production lines reliably.
- Conduct basic data analysis, including automatic downsampling, gap filling, and statistical functions such as Min, Max, and Avg.
Query and visualize data
- Query data in an ISA95 model, from enterprise to site, area, production line, and work cell.
- Visualize your data in Grafana to easily monitor and troubleshoot your production processes.
More information about the exact analytics functionalities can be found in the umh-datasource-v2 documentation. Further below, you can find some screenshots of this datasource.
Efficiently manage data
- Compress and retain data to reduce database size using various techniques.
How can I use it?
Convert your data in your Unified Namespace to processValue messages, and the Historian feature will store them automatically. You can then view the data in Grafana. An example can be found in the Getting Started guide.
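For illustration, a processValue message could be published like this (a sketch only; the customer/location/asset names in the topic, the tag name, and the broker address are assumptions):

```python
# Hedged sketch: publishing a processValue message that the Historian stores
# automatically. Topic hierarchy and payload keys follow the UMH datamodel;
# the concrete names below are assumptions (paho-mqtt 1.x API shown).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("united-manufacturing-hub-mqtt", 1883)  # assumption

topic = "ia/factoryinsight/plant1/line1/processValue"  # hypothetical names
payload = {"timestamp_ms": 1693000000000, "temperature_c": 23.5}
client.publish(topic, json.dumps(payload))
client.disconnect()
```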
For more information about what exactly is behind the Historian feature, check out our architecture page.
What are the limitations?
- Only data in processValue topics is saved automatically. Data in topics like ia/raw is not. Data sent to other message types in the UMH datamodel is stored slightly differently and can also be retrieved via Grafana. See also the Analytics feature.
- After storing a couple of million messages, you should consider compressing the messages or establishing retention policies (see the sketch below).
- At the moment, extensive queries can only be done in your own code by leveraging the API in factoryinsight, or by processing the data in the Unified Namespace.
Apart from these limitations, the United Manufacturing Hub’s Historian feature is highly performant compared to legacy Historians.
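As an example of a retention policy, TimescaleDB provides the built-in add_retention_policy function. The following is only a sketch; the hypertable name, host, and credentials are assumptions, so verify them against your actual schema first:

```python
# Hedged sketch: adding a TimescaleDB retention policy from Python via
# psycopg2. Hypertable name and credentials are assumptions.
import psycopg2

conn = psycopg2.connect(
    host="united-manufacturing-hub",  # assumption
    dbname="factoryinsight",
    user="factoryinsight",
    password="changeme",
)
with conn, conn.cursor() as cur:
    # add_retention_policy is a built-in TimescaleDB function; data older
    # than the interval is dropped automatically.
    cur.execute("SELECT add_retention_policy('processvaluetable', INTERVAL '12 months');")
conn.close()
```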
Where to get more information?
- Learn more about the benefits of using open-source databases in our blog article, Historians vs Open-Source databases - which is better?
- Check out the Getting Started guide to start using the Historian feature.
- Learn more about the United Manufacturing Hub’s architecture by visiting our architecture page.
3 - Shopfloor KPIs / Analytics
The Shopfloor KPI / Analytics feature of the United Manufacturing Hub provides a configurable and plug-and-play approach to create “Shopfloor Dashboards” for production transparency consisting of various KPIs and drill-downs.
Click on the images to enlarge them. More examples can be found in this YouTube video and in our community-repo on GitHub.
When should I use it?
If you want to create production dashboards that are highly configurable and can drill down into specific KPIs, the Shopfloor KPI / Analytics feature of the United Manufacturing Hub is an ideal choice. This feature is designed to help you quickly and easily create dashboards that provide a clear view of your shop floor performance.
What can I do with it?
The Shopfloor KPI / Analytics feature of the United Manufacturing Hub allows you to:
Query and visualize
In Grafana, you can:
- Calculate the OEE (Overall Equipment Effectiveness) and view trends over time
  - Availability is calculated using the formula (plannedTime - stopTime) / plannedTime, where plannedTime is the duration of time for all machine states that do not belong in the Availability or Performance category, and stopTime is the duration of all machine states configured to be an availability stop.
  - Performance is calculated using the formula runningTime / (runningTime + stopTime), where runningTime is the duration of all machine states that consider the machine to be running, and stopTime is the duration of all machine states that are considered a performance loss. Note that this formula does not take into account losses caused by letting the machine run at a lower speed than possible. To approximate this, you can use the LowSpeedThresholdInPcsPerHour configuration option (see further below).
  - Quality is calculated using the formula good pieces / total pieces (a worked numeric example follows after this list).
- Drill down into stop reasons (including histograms) to identify the root causes of a potentially low OEE.
- List all produced and planned orders including target vs actual produced pieces, total production time, stop reasons per order, and more using job and product tables.
- See machine states, shifts, and orders on timelines to get a clear view of what happened during a specific time range.
- View production speed and produced pieces over time.
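As a quick numeric illustration of the three formulas above, here is a minimal, self-contained sketch (all values are made-up examples, not defaults):

```python
# OEE sketch following the formulas above. Durations are in seconds,
# pieces are counts; all numbers are invented example values.
planned_time = 8 * 3600        # planned production time
stop_time_avail = 1800         # availability stops
running_time = 6.5 * 3600      # machine considered running
stop_time_perf = 900           # performance losses (e.g., microstops)
good_pieces, total_pieces = 950, 1000

availability = (planned_time - stop_time_avail) / planned_time
performance = running_time / (running_time + stop_time_perf)
quality = good_pieces / total_pieces
oee = availability * performance * quality
print(f"OEE = {oee:.1%}")  # -> OEE = 85.8%
```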
Configure
In the database, you can configure:
- Stop Reasons Configuration: Configure which stop reasons belong into which category for the OEE calculation and whether they should be included in the OEE calculation at all. For instance, some companies define changeovers as availability losses, some as performance losses. You can easily move them into the correct category.
- Automatic Detection and Classification: Configure whether to automatically detect/classify certain types of machine states and stops:
- AutomaticallyIdentifyChangeovers: If the machine state was an unspecified machine stop (UnknownStop), but an order was recently started, the time from the start of the order until the machine state turns to running will be considered a Changeover Preparation State (10010). If this happens at the end of the order, it will be a Changeover Post-processing State (10020).
- MicrostopDurationInSeconds: If an unspecified stop (UnknownStop) has a duration smaller than a configurable threshold (e.g., 120 seconds), it will be considered a Microstop State (50000) instead. Some companies put small unknown stops into a different category (performance) than larger unknown stops, which usually end up in the availability loss bucket.
- IgnoreMicrostopUnderThisDurationInSeconds: In some cases, the machine can actually stop for a couple of seconds in routine intervals, which might be unwanted as it makes analysis difficult. You can set a threshold to ignore microstops shorter than a configurable duration (usually 1-2 seconds).
- MinimumRunningTimeInSeconds: The same logic applies if the machine is running for only a couple of seconds. With this configurable threshold, small run-times can be ignored. These can happen, for example, during the changeover phase.
- ThresholdForNoShiftsConsideredBreakInSeconds: If no shift was planned, an UnknownStop will always be classified as a NoShift state. Some companies move smaller NoShift periods into their own category called “Break” and move them either into Availability or Performance.
- LowSpeedThresholdInPcsPerHour: For a simplified performance calculation, a threshold can be set, and if the machine has a lower speed than this, it could be considered a LowSpeedState and could be categorized into the performance loss bucket.
- Language Configuration: The language of the machine states can be configured using the languageCode configuration option (or overwritten in Grafana).
You can find the configuration options in the configurationTable.
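As an illustration, an option could be adjusted directly in the database like this. Note that the table and column names below are assumptions based on the option names above, so verify them against your actual schema before running anything:

```python
# Hedged sketch: adjusting an OEE configuration option via SQL from Python.
# Table/column names ("configurationtable", "microstopdurationinseconds")
# and credentials are assumptions -- check your real schema first.
import psycopg2

conn = psycopg2.connect(host="united-manufacturing-hub", dbname="factoryinsight",
                        user="factoryinsight", password="changeme")
with conn, conn.cursor() as cur:
    cur.execute(
        "UPDATE configurationtable SET microstopdurationinseconds = %s WHERE customer = %s;",
        (120, "factoryinsight"),  # hypothetical customer name
    )
conn.close()
```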
How can I use it?
Using it is very easy:
- Send messages according to the UMH datamodel to the Unified Namespace (similar to the Historian feature)
- Configure your OEE calculation by adjusting the configuration table
- Open Grafana, select your equipment and select the analysis you want to have. More information can be found in the umh-datasource-v2.
For more information about what exactly is behind the Analytics feature, check out our architecture page and our datamodel.
What are the limitations?
At the moment, the limitations are:
- Speed losses in Performance are not calculated and can only be approximated using the LowSpeedThresholdInPcsPerHour configuration option
- There is no way of tracking losses through reworked products. Either a product is scrapped or not.
Where to get more information?
- Learn more about the benefits of using open-source databases in our blog article, Historians vs Open-Source databases - which is better?
- Learn more about the United Manufacturing Hub’s architecture by visiting our architecture page.
- Learn more about the datamodel by visiting our datamodel page.
- To build visual dashboards, check out our tutorial on using Grafana Canvas
4 - Data connectivity with Node-RED
One feature of the United Manufacturing Hub is connecting devices on the shopfloor, such as PLCs, quality stations, or MES/ERP systems, with the Unified Namespace using Node-RED. Node-RED has a large library of nodes that lets you connect various protocols. It also has a user-friendly, low-code UI, making it easy to configure the desired nodes.
When should I use it?
Sometimes it is necessary to connect many different protocols (e.g., Siemens S7, OPC-UA, Serial, …) and Node-RED can be a maintainable solution to connect all these protocols without the need for other data connectivity tools. Node-RED is widely known in the IT/OT community, making it a familiar tool for many users.
What can I do with it?
By default, there are connector nodes for common protocols:
- connect to MQTT using the MQTT node
- connect to HTTP using the HTTP node
- connect to TCP using the TCP node
- connect to UDP using the UDP node
Furthermore, you can install packages to support more connection protocols. For example:
- connect to OPC-UA (node-red-contrib-opcua)
- connect to kafka (node-red-contrib-kafkajs)
- connect to Siemens-S7 (node-red-contrib-s7)
- connect to serial (node-red-node-serialport)
- connect to modbus (node-red-contrib-modbus)
- connect to MC-protocol (node-red-contrib-mcprotocol)
- connect to OMRON FINS Ethernet protocol (node-red-contrib-omron-fins)
- connect to EtherNet/IP Protocol (node-red-contrib-cip-ethernet-ip)
- connect to PostgreSQL (node-red-contrib-postgresql)
- connect to SAP SQL Anywhere
You can additionally contextualize the data, using function nodes or other nodes to manipulate the received data (see the sketch below).
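In Node-RED, this contextualization typically happens inside a JavaScript function node; as a language-neutral illustration, the same step could look like this in Python (the field names and scaling are assumptions):

```python
# Illustrative contextualization step: mapping a raw device payload onto a
# processValue-shaped message per the UMH datamodel. In Node-RED you would
# do the equivalent inside a function node; field names here are assumptions.
def contextualize(raw: dict) -> dict:
    return {
        "timestamp_ms": raw["ts"],                 # rename vendor-specific keys...
        "temperature_c": raw["temp_raw"] / 10.0,   # ...and scale raw units
    }

print(contextualize({"ts": 1693000000000, "temp_raw": 235}))
# -> {'timestamp_ms': 1693000000000, 'temperature_c': 23.5}
```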
How can I use it?
Node-RED comes preinstalled as a microservice with the United Manufacturing Hub.
To access Node-RED, navigate to Network -> Services on the left-hand side in UMHLens. You can download UMHLens / OpenLens here.
On the top right, change the Namespace from default to united-manufacturing-hub.
Click on united-manufacturing-hub-nodered-service, scroll down to Connection and forward the port.
Once Node-RED opens in the browser, add nodered to the URL to avoid the “Cannot GET” error.
Begin exploring right away! If you require inspiration on where to start, we provide a variety of guides to help you become familiar with various Node-RED workflows, including how to process data and align it with the UMH datamodel:
- Create a Node-RED Flow with Simulated OPC-UA Data
- Create a Node-RED flow with simulated PackML data
- Alternatively, visit the learning page, where you can find multiple best practices for using Node-RED
What are the limitations?
- Most packages have no enterprise support. If you encounter any errors, you need to ask the community. However, we found that these packages are often more stable than commercial offerings, as they have been battle-tested by far more users.
- Having many flows without following a strict structure generally leads to confusion.
- One additional limitation is the speed of development of Node-RED: after a big Node-RED or JavaScript update, dependencies will likely break, and the individual community-maintained nodes need to be updated.
Where to get more information?
- Learn more about Node-RED and the United Manufacturing Hub by following our Get started guide .
- Learn more about Best-practices & guides for Node-RED.
- Learn how to connect Node-RED to SAP SQL Anywhere with a custom docker instance.
- Checkout the industrial forum for Node-RED
5 - Retrofitting with ifm IO-link master and sensorconnect
Retrofitting older machines with sensors is sometimes the only way to capture process-relevant information. In this article, we will focus on retrofitting with the ifm IO-Link master and Sensorconnect, a microservice of the United Manufacturing Hub that finds and reads out ifm IO-Link masters in the network and pushes sensor data to MQTT/Kafka for further processing.
When should I use it?
Retrofitting with an ifm IO-Link master such as the AL1350 and using Sensorconnect is ideal when dealing with older machines that are not equipped with any connectable hardware to read relevant information out of the machine itself. By placing sensors on the machine and connecting them with an IO-Link master, the required information can be gathered for valuable insights. Sensorconnect helps to easily connect all sensors correctly and properly capture the large amount of sensor data provided.
What can I do with it?
With ifm IO-Link master and Sensorconnect, you can collect data from sensors and make it accessible for further use. Sensorconnect offers:
- Automatic detection of ifm IO-Link masters in the network.
- Identification of IO-Link and alternative digital or analog sensors connected to the master using converters such as the DP2200. Digital sensors employ a voltage range from 10 to 30 V DC, producing binary outputs of true or false. In contrast, analog sensors operate at 24 V DC, with a current range spanning from 4 to 20 mA. Using the appropriate converter, analog outputs can be effectively transformed into digital signals.
- Constant polling of data from the detected sensors.
- Interpreting the received data based on a sensor database containing thousands of entries.
- Sending data in JSON format to MQTT and Kafka for further data processing.
How can I use it?
To use ifm IO-Link gateways and Sensorconnect, please follow these instructions:
- Ensure all IO-Link gateways are in the same network or accessible from your instance of the United Manufacturing Hub.
- Retrofit the machines by connecting the desired sensors and establish a connection with ifm IO-Link gateways.
- Configure the Sensorconnect IP-range to either match the IP address using subnet notation /32, or, in cases involving multiple masters, configure it to scan an entire range, for example /24. To change the value, go to the Customize the United Manufacturing Hub section.
- Once completed, the data should be available in your Unified Namespace.
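To quickly verify that sensor data is arriving, you can subscribe to the raw topics over MQTT, for example with this hedged Python sketch (the broker address is an assumption; paho-mqtt 1.x API shown):

```python
# Hedged sketch: checking that Sensorconnect data arrives by subscribing to
# the raw MQTT topics. Broker address is an assumption -- adjust to your setup.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("united-manufacturing-hub-mqtt", 1883)  # assumption
client.subscribe("ia/raw/#")  # Sensorconnect publishes under ia/raw/...
client.loop_forever()
```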
What are the limitations?
- The current ifm firmware has a software bug that will cause the IO-Link master to crash if it receives too many requests. To resolve this issue, you can either request an experimental firmware, which is available exclusively from ifm, or reconnect the power to the IO-Link gateway.
Where to get more information?
6 - Retrofitting with USB barcodereader
The barcodereader microservice enables the processing of barcodes from USB-linked scanner devices, subsequently publishing the acquired data to the Unified Namespace.
When should I use it?
When you need to connect a barcode reader or any other USB device acting as a keyboard (HID). Typical cases are scanning an order at the production machine from the accompanying order sheet, or scanning material for inventory and track-and-trace.
What can I do with it?
You can connect USB devices acting as a keyboard to the Unified Namespace. The microservice will record all inputs and send them out once a return/enter character has been detected. A lot of barcode scanners work that way. Additionally, you can also connect something like a quality testing station (we once connected a Mitutoyo quality testing station).
How can I use it?
To use the barcodereader microservice, you will need to configure the Helm chart and enable it.
- Enable _000_commonConfig.datasources.barcodereader.enabled in the Helm Chart
- During startup, it will show all connected USB devices. Remember yours and then change the INPUT_DEVICE_NAME and INPUT_DEVICE_PATH
- Also set ASSET_ID, CUSTOMER_ID, etc. as this will then send it into the topic ia/ASSET_ID/…/barcode
- Restart the pod
- Scan a device, and it will be written into the topic xxx
Once installed, you can configure the microservice by setting the needed environment variables. The program will continuously scan for barcodes using the device and publish the data to the Kafka topic.
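For illustration, the resulting messages could be consumed from Kafka like this (a sketch only; the topic and bootstrap server are assumptions derived from your ASSET_ID and CUSTOMER_ID settings, with / replaced by . on the Kafka side):

```python
# Hedged sketch: consuming barcode messages from Kafka with kafka-python.
# The topic below is hypothetical -- build it from your own CUSTOMER_ID,
# location, and ASSET_ID, and adjust the bootstrap server to your install.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "ia.CUSTOMER_ID.LOCATION.ASSET_ID.barcode",               # hypothetical
    bootstrap_servers="united-manufacturing-hub-kafka:9092",  # assumption
)
for message in consumer:
    print(message.value.decode())  # the scanned barcode payload
```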
What are the limitations?
- Sometimes special characters are not parsed correctly. They need to be adjusted afterward in the Unified Namespace.
Where to get more information?
7 - Alerting
The United Manufacturing Hub utilizes a TimescaleDB database, which is based on PostgreSQL. Therefore, you can use the PostgreSQL plugin in Grafana to implement and configure alerts and notifications.
Why should I use it?
Alerts based on real-time data enable proactive problem detection. For example, you will receive a notification if the temperature of machine oil or an electrical component of a production line exceeds limitations. By utilizing such alerts, you can schedule maintenance, enhance efficiency, and reduce downtime in your factories.
What can I do with it?
Grafana alerts help you keep an eye on your production and manufacturing processes. By setting up alerts, you can quickly identify problems, ensuring smooth operations and high-quality products. An example of using alerts is the tracking of the temperature of an industrial oven. If the temperature goes too high or too low, you will get an alert, and the responsible team can take action before any damage occurs. Alerts can be configured in many different ways, for example, to set off an alarm if a maximum is reached once or if it exceeds a limit when averaged over a time period. It is also possible to include several values to create an alert, for example if a temperature surpasses a limit and/or the concentration of a component is too low. Notifications can be sent simultaneously across many services like Discord, Mail, Slack, Webhook, Telegram, or Microsoft Teams. It is also possible to forward the alert with SMS over a personal Webhook. A complete list can be found on the Grafana page about alerting.
How can I use it?
For a detailed tutorial on how to set up an alert, please visit our learn page with the detailed step-by-step tutorial. Here you can find an overview of the process.
Install the PostgreSQL plugin in Grafana: Before you can formulate alerts, you need the PostgreSQL plugin; it comes already integrated into Grafana.
Alert Rule: When creating an alert, you first have to set the alert rule in Grafana. Here you set a name, specify which values are used for the rule, and when the rule is fired. Additionally, you can add labels for your rules, to link them to the correct contact points. You have to use SQL to select the desired values.
Contact Point: In a contact point you create a collection of addresses and services that should be notified in case of an alert. This could be a Discord channel or Slack for example. When a linked alert is triggered, everyone within the contact point receives a message. The messages can be preconfigured and are specific to every service or contact.
Notification Policies: In a notification policy, you establish the connection of a contact point with the desired alerts. This is done by adding the labels of the desired alerts and the contact point to the policy.
Mute Timing: In case you do not want to receive messages during a recurring time period, you can add a mute timing to Grafana. If added to the notification policy, no notifications will be sent out by the contact point. This could be times without shifts, like weekends or during regular maintenance.
Silence: You can also add silences for a specific time frame and labels, in case you only want to mute alerts once.
An alert is only sent out once after being triggered. For the next alert, it has to return to the normal state, so the data no longer violates the rule.
What are the limitations?
It can be complicated to select and manipulate the desired values to create the correct function for your application. Grafana cannot differentiate between data points of the same source. For example, you want to make a temperature threshold based on a single sensor. If your query selects the last three values and two of them are above the threshold, Grafana will fire two alerts which it cannot tell apart. This results in errors. You have to configure the rule to reduce the selected values to only one per source to avoid this. It can be complicated to create such a specific rule with this limitation, and it requires some testing.
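One way to reduce the selection to one value per source is a latest-value-per-asset query. You could prototype such a query in Python before pasting the SQL into the Grafana alert rule; the table and column names below are assumptions, so verify them against your schema:

```python
# Hedged sketch: prototyping an alert query that yields exactly one value per
# source (here: the latest value per asset) using PostgreSQL's DISTINCT ON.
# Table/column names and credentials are assumptions.
import psycopg2

conn = psycopg2.connect(host="united-manufacturing-hub", dbname="factoryinsight",
                        user="factoryinsight", password="changeme")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT DISTINCT ON (asset_id) asset_id, timestamp, value
        FROM processvaluetable
        WHERE valuename = 'temperature_c'
        ORDER BY asset_id, timestamp DESC;
    """)
    for row in cur.fetchall():
        print(row)  # one row per asset -> one alert evaluation per source
conn.close()
```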
Another thing to keep in mind is that alerts can only work with data from the database. They also do not work with the machine status; those values exist only in raw, unprocessed form in TimescaleDB and are not processed through an API like process values.