The United Manufacturing Hub is an Open-Source Helm Chart for Kubernetes, which combines state-of-the-art IT/OT tools and technologies and brings them into the hands of the engineer.
Bringing the world's best IT and OT tools into the hands of the engineer
Why start from scratch when you can leverage a proven open-source blueprint? Kafka, MQTT, Node-RED, TimescaleDB and Grafana at the press of a button - tailored for manufacturing and ready to go
What can you do with it?
Everything That You Need To Do To Generate Value On The Shopfloor
Exchange and store data using HiveMQ for IoT devices, Apache Kafka as an enterprise message broker, and TimescaleDB as a reliable relational and time-series storage solution
Visualize data using Grafana and factoryinsight to build powerful shopfloor dashboards
Prevent Vendor Lock-In and Customize to Your Needs
The only requirement is Kubernetes, which is available in various flavors, including k3s, bare-metal k8s, and Kubernetes-as-a-service offerings like AWS EKS or Azure AKS
Swap components with other options at any time. Not a fan of Node-RED? Replace it with Kepware. Prefer a different MQTT broker? Use it!
Leverage existing systems and add only what you need.
Get Started Immediately
Download & install now, so you can show results instead of drawing nice boxes in PowerPoint
Connect with Like-Minded People
Tap into our community of experts and ask anything. No need to depend on external consultants or system integrators.
Leverage community content, from tutorials and Node-RED flows to Grafana dashboards. Although not all content is enterprise-supported, starting with a working solution saves you time and resources.
Get honest answers in a world where many companies spend millions on advertising.
How does it work?
Only requirement: a Kubernetes cluster (and we'll even help you with that!). You only need to install the United Manufacturing Hub Helm Chart on that cluster and configure it.
The United Manufacturing Hub will then generate all the required files for Kubernetes, including auto-generated secrets, various microservices like bridges between MQTT / Kafka, datamodels and configurations. From there on, Kubernetes will take care of all the container management.
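In practice, the install step sketched above amounts to a few commands. As a rough sketch (the repository URL, chart name, and namespace reflect the public UMH documentation at the time of writing and may change; verify against the current installation guide before copying):

```shell
# Add the UMH Helm repository and install the chart into its own namespace.
# Repository URL and chart name are assumptions based on the public docs.
helm repo add united-manufacturing-hub https://repo.umh.app
helm repo update

kubectl create namespace united-manufacturing-hub
helm install united-manufacturing-hub \
  united-manufacturing-hub/united-manufacturing-hub \
  -n united-manufacturing-hub
```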
Yes - the United Manufacturing Hub specifically targets people and companies who do not have the budget and/or knowledge to develop everything on their own from scratch.
With our extensive documentation, guides and knowledge sections you can learn everything that you need.
The United Manufacturing Hub abstracts these tools and technologies so that you can leverage all advantages, but still focus on what really matters: digitizing your production.
With our commercial Management Console you can manage your entire IT / OT infrastructure and work with Grafana / Node-RED without the need to ever touch or understand Kubernetes, Docker, Firewalls, Networking or similar.
Additionally, you can get support licenses providing unlimited support during development and maintenance of the system. Take a look at our website if you want to get more information on this.
Because very often these solutions do not target the actual pains of an engineer: implementation and maintenance. Companies then struggle to roll out IIoT as the projects take much longer and cost far more than originally proposed.
In the United Manufacturing Hub, implementation and maintenance of the system are the first priority. We've had these pains too often ourselves and therefore incorporated and developed tools & technologies to avoid them.
For example, with sensorconnect we can retrofit production machines where it is currently impossible to extract data. With our modular architecture we can fit the security needs of all IT departments -
from integration into a demilitarized zone to on-premises and private cloud deployments. With Apache Kafka we solve the pain of corrupted or missing messages when scaling out the system.
How to proceed?
1 - Get Started!
You want to get started right away? Go ahead and jump into the action!
Great to see you’re ready to start! This guide has 3 steps:
Installation, Data Acquisition and Manipulation, and Data Visualization.
Contact Us!
Do you still have questions on how to get started? Message us on our
Discord Server.
1.1 - 1. Installation
Install the United Manufacturing Hub together with all required tools on a Linux Operating System.
If you are new to the United Manufacturing Hub and need a place to start, this
is the place to be. You will be guided through setting up an account,
installing your first instance and connecting to an OPC UA simulator in no time.
Requirements
Device
You will need an edge device, a bare-metal server, or a virtual machine with
internet access. The device must meet the following
minimum requirements, or the installation will fail:
ARM-based systems, such as a Raspberry Pi, are not currently supported.
Operating System
We support the following operating systems:
You can find the image for Rocky in the Management Console, when you are
setting up your first instance.
Newer or older versions of the operating system, or other operating systems
such as Ubuntu, may work, but please note that we do not support them
commercially.
Network
A personal computer with a recent browser to access the
Management Console.
Ensure that management.umh.app is allowlisted on TCP port 443 for HTTPS traffic.
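To verify that the endpoint is reachable from behind your firewall, a quick check from any machine on the same network might look like this (assuming curl is available):

```shell
# Expect an HTTP status code (e.g. 200 or a redirect) if TCP 443 is open:
curl -sS -o /dev/null -w '%{http_code}\n' https://management.umh.app
```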
Once logged in with your new account, click on
Add Your First Instance.
Create your first Instance
First you have to set up your device and install the operating system.
We support several operating systems, but we strongly recommend
using Rocky. You can find a list of the requirements and the image for
Rocky by clicking on the REQUIREMENTS button on the right-hand side of the
Management Console.
Once you have successfully installed
your operating system, you can configure your instance in the Management
Console. For the first instance you should only change the Name and
Location of the instance. These will help you to identify an instance
if you have more than one.
Once the name and location are set,
continue by clicking on the Add Instance button. To install the UMH, copy
the command shown in the dialogue box, SSH into the new machine, paste the
command and follow the instructions. This command will run the installation
script for the UMH and connect it to your Management Console account.
If the UMH installation was
successful, you can click the Continue button. Your instance should appear in the Instances and Topology sections of the left-hand menu after a few minutes.
What’s next?
Once you have installed the UMH, you can continue with the
next page to learn how to
connect an OPC UA server to your instance.
1.2 - 2. Data Acquisition and Manipulation
Learn how to connect your UMH to an OPC UA server and format data into the UMH data model.
Once your UMH instance is up and running, you can follow this guide to learn
how to connect the instance to an OPC UA server. For this example we will use
the OPC UA simulator that is provided by your instance.
Connect to external devices
You can connect your UMH instances to external devices using a variety of
protocols. This is done in the Management Console and consists of two steps.
First you connect to the device and check that your instance can reach it. The
second step is to create a protocol converter for the connection, in which you
define the data you want to collect and how it should be structured in your
unified namespace.
To allow you to experience the UMH as quickly as possible, the
connection to the internal OPC UA simulator is already pre-configured.
Therefore, the Add a Connection section is included for reference
only. You can skip to Add a Protocol Converter below.
Add a Connection
To create a new connection, navigate to the Connections section in the
left hand menu and click on the + Add Connection button in the top right
hand corner.
If you want to configure the connection to the OPC UA simulator by yourself,
delete the preconfigured connection named
default-opcua-simulator-connection.
Having two connections to the same device can cause errors when deploying
the Protocol Converter!
Under General Settings select your instance and give the connection a
name. Enter the address and port of the device you want to connect to. To
connect to the OPC UA Simulator, use
You can also set additional location fields to help you keep track of your
connections. The fields already set by the selected instance are
inherited and cannot be changed.
Once everything is set up, you can click Save & Deploy. The instance
will now attempt to connect to the device on the specified IP and port. If
there is no error, it will be listed in the Connections section.
Click on the connection to view its details; to edit, click on the
Configuration button in the side panel.
Add a Protocol Converter
To access the data from the OPC UA Simulator you need to add a
Protocol Converter to the connection.
Click on the connection to the OPC UA Simulator in the Connections table.
If you are using the preconfigured one, it is called default-opcua-simulator-connection.
Click on the + Add Protocol Converter button in the opening menu.
First you need to select the protocol used to communicate with the device,
in this case OPC UA. This can be found under General.
Input: Many of the required details are already set based on the
connection details. For this tutorial, we will subscribe to a tag and a folder
on the OPC UA server. Tags and folders can be selected manually using
the NodeID or by using the OPC UA Browser.
If you want to select the nodes via the OPC UA Browser, uncheck the Root
box, navigate to Root/Objects/Boilers/Boiler #2 and select the
ParameterSet folder. Next navigate to
Root/Objects/OpcPlc/Telemetry/Fast and select the FastUInt1 tag, then
click Apply at the bottom.
To add the nodes manually, close the OPC UA Browser by clicking on the
OPC UA BROWSER button at the right edge of the window.
The nodes must be added as Namespaced String NodeIDs.
Now copy the code below and replace the current nodeIDs: with it.
```yaml
nodeIDs:
  - ns=4;i=5020
  - ns=3;s=FastUInt1
```
In the input section, you must also specify the OPC UA server username and
password, if it uses one.
The Input should now look like this.
Note that the indentation is important.
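As a rough sketch, the resulting input section might resemble the following YAML. Field names and nesting are illustrative and may differ between versions; use the configuration generated by the Management Console as the authoritative reference:

```yaml
input:
  opcua:
    endpoint: "opc.tcp://<simulator-address>:<port>"  # inherited from the connection
    username: ""   # only if the server requires authentication
    password: ""
    nodeIDs:
      - ns=4;i=5020      # the ParameterSet folder of Boiler #2
      - ns=3;s=FastUInt1 # the FastUInt1 tag
```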
Processing: In this section you can manipulate the incoming data and
sort it into the desired asset. The auto-generated configuration will sort
each tag into the same asset based on the location used for the instance and
connection, while the tag name will be based on the name of the tag on the OPC
UA server.
Further information can be found in the OPC UA Processor section next to
the Processing field, for example how to create individual assets for
each tag.
Output: The output section is generated entirely automatically by the
Management Console.
Now click on Save & Deploy. Your Protocol Converter will be added.
To view the data, navigate to the Tag Browser on the left. Here, you can
see all your tags. The tree on the left is built from the asset of each tag;
you can navigate it by clicking on the asset parts.
Next, we’ll dive into Data Visualisation,
where you’ll learn how to create Grafana dashboards with your newly collected
data.
1.3 - 3. Data Visualization
Build a simple Grafana dashboard with the gathered data.
After bringing the data from the OPC UA simulator into your Unified Namespace,
you can use Grafana to create dashboards to display it. If you haven’t
already connected the OPC UA simulator to your instance, you can follow the
previous guide.
Accessing Grafana
Make sure you are on the same network as your instance to access Grafana.
In the Management Console, select Applications from the left hand
menu. Click on Historian (Grafana) for the instance to which you have
connected the OPC UA simulator. You can search for your instance’s applications
by entering its name in the Filter by name or instance field at the top of
the page.
Click on the URL displayed in the side panel that opens. This will only work
if you set the correct IP address for your instance when running the
installation script. If you can't connect to Grafana, find the IP address
of your instance and enter the following URL in your browser:
http://<IP-address-of-your-instance>:8080
Copy the Grafana password by clicking on it in the side panel of the
application. The user name is admin.
To add a dashboard, click on Dashboards in the left menu and then on the
blue + Create Dashboard button in the middle of the page. On the next page
click + Add visualisation.
Select the UMH TimescaleDB data source in the Select data source
dialogue box.
To access data from your Unified Namespace in Grafana, you can use SQL
queries generated by the Management Console for each tag. Open the
Tag Browser and navigate to the desired tag, e.g. CurrentTemperature
from the ParameterSet folder of the OPC UA Simulator. The query is located
at the bottom of the page under SQL Query. Make sure it is set to
Grafana and then copy it.
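The generated query is ordinary TimescaleDB SQL. A simplified, hypothetical version of such a query is sketched below; the table and column names are illustrative, not the actual UMH schema, while $__timeFrom() and $__timeTo() are real Grafana macros that expand to the dashboard's time range:

```sql
SELECT timestamp, value
FROM tag                                -- hypothetical table name
WHERE name = 'CurrentTemperature'
  AND timestamp BETWEEN $__timeFrom() AND $__timeTo()
ORDER BY timestamp;
```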
In Grafana, change the query mode to Code by toggling the
Builder/Code switch located on the right hand side of the page next to the
blue Run query button. Paste the query you copied from the Management
Console into the text box.
Click the blue Run query button. If everything is set up correctly you
should see the data displayed in a graph.
There are many ways to customise the graph on the right hand side of the
page. For example, you can use different colours or chart styles. You can also
add more queries at the bottom of the page.
When you are happy, click the blue Apply button in the top right
corner. You will now see the dashboard. You can adjust the size and position
of the graph or access other options by clicking on the three dots in the top
right hand corner of the graph. To add another graph, click Add and then
Visualisation from the menu bar at the top of the page.
To save your dashboard, press Ctrl + S.
Further Reading
If you would like to find out more about the United Manufacturing Hub, here
are some links to get you started.
General knowledge, updates and guidance? Check out our Learn page:
Do you want to understand the capabilities of the United Manufacturing Hub, but do not want to get lost in technical architecture diagrams? Here you can find all the features explained in a few pages.
2.1 - Connectivity
Introduction to IIoT Connections and Data Sources Management in the
United Manufacturing Hub.
In IIoT infrastructures, it can sometimes be challenging to extract and contextualize
data from various systems into the Unified Namespace, because there is no
universal solution. It usually requires many different tools, each one
tailored to the specific infrastructure, making them hard to manage and maintain.
With the United Manufacturing Hub and the Management Console, we aim to solve
this problem by providing a simple and easy to use tool to manage all the assets
in your factory.
For lack of a better term, when talking about a system that can be connected to
and that provides data, we will use the term asset.
When should I use it?
Contextualizing data can present a variety of challenges, both technical and
organizational. The Connection Management functionality aims to reduce
the complexity that comes with these challenges.
Here are some common issues that can be solved with the Connection Management:
It is hard to get an overview of all the data sources and their connections'
status, as the concepts of “connection” and “data source” are often decoupled.
This leads to listing the connections' information in long spreadsheets, which
are hard to maintain and troubleshoot.
Handling uncommon communication protocols.
Dealing with non-standard connections, like a 4-20 mA sensor or a USB-connected
barcode reader.
Advanced IT tools like Apache Spark or
Apache Flink may be challenging for OT personnel
who have crucial domain knowledge.
Traditional OT tools often struggle in modern IT environments, lacking features
like Docker compatibility, monitoring, automated backups, or high availability.
What can I do with it?
The Connection Management functionality in the Management Console aims to
address those challenges by providing a simple and easy to use tool to manage
all the assets in your factory.
You can add, delete, and most importantly, visualize the status of all your
connections in a single place. For example, a periodic check is performed to
measure the latency of each connection, and the status of the connection is
displayed in the Management Console.
You can also add notes to each connection, so that you can keep all the documentation
in a single place.
You can then configure a data source for each connection, to start extracting
data from your assets. Once the data source is configured, specific information
about its status is displayed, prompting you in case of misconfigurations, data
not being received, or any other error that may occur.
How can I use it?
Add new connections from the Connection Management page of the Management Console.
Then, configure a data source for each of them by choosing one of the available
tools, depending on the type of connection.
The following tools come with the United Manufacturing Hub and are recommended
for extracting data from your assets:
Node-RED
Node-RED is a leading open-source tool for IIoT
connectivity. We recommend this tool for prototyping and integrating parts of the
shop floor that demand high levels of customization and domain knowledge.
Even though it may be unreliable in high-throughput scenarios, it has a vast
global community that provides a wide range of connectors for different protocols
and data sources, while remaining very user-friendly with its visual programming
approach.
Benthos UMH
Benthos UMH is a custom extension of the Benthos
project. It allows you to connect assets that communicate via the OPC UA protocol,
and it is recommended for scenarios involving the extraction of large data volumes
in a standardized format.
It is a lightweight, open-source tool that is easy to deploy and manage. It is
ideal for moving medium-sized data volumes more reliably than Node-RED, but it
requires some technical knowledge.
Other Tools
The United Manufacturing Hub also provides tools for connecting data sources
that use other types of connections. For example, you can easily connect
ifm IO-Link
sensors or USB barcode readers.
Third-Party Tools
Any existing connectivity solution can be integrated with the United
Manufacturing Hub, assuming it can send data to either MQTT or Kafka.
Additionally, if you want to deploy those tools on the Device & Container
Infrastructure, they must be available as a Docker container (developed following best practices).
Therefore, we recommend using the tools mentioned above, as they are the most
tested and reliable.
What are the limitations?
Some of the tools still require some technical knowledge to be used. We are
working on improving the user experience and documentation to make them more
accessible.
Where to get more information?
Follow the Get started guide to learn how to connect
your assets to the United Manufacturing Hub.
Connect devices on the shop floor using Node-RED with United Manufacturing Hub’s Unified Namespace. Simplify data integration across PLCs, Quality Stations, and MES/ERP systems with a user-friendly UI.
One feature of the United Manufacturing Hub is to connect devices on the shopfloor such as PLCs, Quality Stations or
MES / ERP systems with the Unified Namespace using Node-RED.
Node-RED has a large library of nodes that lets you connect various protocols. It also has a user-friendly, low-code UI,
making it easy to configure the desired nodes.
When should I use it?
Sometimes it is necessary to connect many different protocols (e.g., Siemens S7, OPC-UA, Serial, …) and Node-RED can be a maintainable
solution to connect all these protocols without the need for other data connectivity tools. Node-RED is widely known in
the IT/OT community, making it a familiar tool for many users.
What can I do with it?
By default, there are connector nodes for common protocols:
connect to MQTT using the MQTT node
connect to HTTP using the HTTP node
connect to TCP using the TCP node
connect to UDP using the UDP node
Furthermore, you can install packages to support more connection protocols. For example:
You can additionally contextualize the data, using function or other nodes to manipulate the
received data.
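For example, a function node can reshape an incoming payload before it is published to the Unified Namespace. The sketch below wraps the node body in a plain function for illustration; inside Node-RED only the body (ending in `return msg;`) would go into the function node, and the field names are hypothetical:

```javascript
// Sketch of a Node-RED "function" node body, wrapped for illustration.
// It attaches a millisecond timestamp and renames the raw value field.
function contextualize(msg) {
    msg.payload = {
        timestamp_ms: Date.now(),        // when the value was processed
        temperature: msg.payload.value   // hypothetical raw field name
    };
    return msg;
}
```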
How can I use it?
Node-RED comes preinstalled as a microservice with the United Manufacturing Hub.
To access Node-RED, simply open the following URL in your browser:
http://<instance-ip-address>:1880/nodered
Begin exploring right away! If you need inspiration on where to start, we provide a variety of guides to help you
become familiar with various Node-RED workflows, including how to process data and align it with the UMH datamodel:
Alternatively, visit the learning page, where you can find multiple best practices for using Node-RED
What are the limitations?
Most packages have no enterprise support. If you encounter any errors, you need to ask the community.
However, we found that these packages are often more stable than the commercial ones out there,
as they have been battle-tested by far more users than most commercial software.
Having many flows without following a strict structure generally leads to confusion.
Another limitation is the speed of development of Node-RED: after a major Node-RED or JavaScript update,
dependencies are likely to break, and the individual community-maintained nodes need to be updated.
Where to get more information?
Learn more about Node-RED and the United Manufacturing Hub by following our Get started guide.
Configure protocol converters to stream data to the Unified Namespace directly in the Management Console.
Benthos is a stream processing tool that
is designed to make common data engineering tasks such as transformations,
integrations, and multiplexing easy to perform and manage. It uses declarative,
unit-testable configuration, allowing users to easily adapt their data
pipelines as requirements change. Benthos is able to connect to a wide range of
sources and sinks, and can use different languages for processing and mapping
data.
Benthos UMH is a custom extension of
Benthos that is designed to connect to OPC-UA servers and stream data into the
Unified Namespace.
When should I use it?
Benthos UMH is valuable for integrating different protocols with the Unified Namespace.
With it, you can configure various protocol converters, define the data you want to
stream, and send it to the Unified Namespace.
Furthermore, in our tests,
Benthos has proven more reliable than tools like Node-RED when it comes to
handling large amounts of data.
What can I do with it?
Benthos UMH offers some benefits, including:
Management Console integration: Configure and deploy any number of protocol converters via
Benthos UMH directly from the Management Console.
OPC-UA support: Connect to any OPC-UA server and stream data into the
Unified Namespace.
Report by exception: By configuring the OPC-UA nodes in subscribe mode,
you can stream data only when the value of a node changes.
Per-node configuration: Define the nodes you want to stream and configure
them individually.
Broad customization: Use Benthos’ extensive configuration options to
customize your data pipeline.
Easy deployment: Deploy Benthos UMH as a standalone Docker container or
directly from the Management Console.
Fully open source: Benthos UMH is fully open source and available on
GitHub.
How can I use it?
With the Management Console
The easiest way to use Benthos UMH is to deploy it directly from the Management
Console.
After adding your network device or service, you can initialize the protocol
converter. Simply click on the Play button next to the network device/service
at the Protocol Converters tab.
From there, you’ll have two options to choose from when configuring the
protocol converter:
OPC-UA: Select this option if you specifically need to configure
OPC-UA protocol converters. It offers direct integration with OPC-UA servers
and improved data contextualization. This is particularly useful when you need
to assign tags to specific data points within the Unified Namespace. You’ll be
asked to define OPC-UA nodes in YAML format, detailing the nodes you want to stream
from the OPC-UA server.
Universal Protocol Converter: Opt for this choice if you need to configure
protocol converters for various supported protocols other than OPC-UA. This option
will prompt you to define the Benthos input and processor configuration in YAML format.
For OPC-UA, ensure your YAML configuration follows the format below:
Required fields are opcuaID, enterprise, tagName and schema. opcuaID
is the NodeID in OPC-UA and can also be a folder (see the README
for more information). The remaining fields are components of the resulting
topic / ISA-95 structure (see also our datamodel). By default,
the schema will always be _historian, and tagName is the key name.
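Based on the required fields described above, a minimal configuration might look like the following sketch. All values are placeholders, and the exact nesting may differ between versions; see the README and the datamodel for the authoritative format:

```yaml
- opcuaID: ns=4;i=5020        # NodeID or folder on the OPC-UA server
  enterprise: my-enterprise   # placeholder ISA-95 enterprise name
  tagName: boiler-parameters  # becomes the key name in the topic
  schema: _historian          # default schema
```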
Standalone
Benthos UMH can be manually deployed as part of the UMH stack using the provided Docker
image and following the instructions outlined in the README.
For more specialized use cases requiring precise configuration, standalone deployment
offers full control over the setup. However, this manual approach is more complex
compared to using the Universal Protocol Converter feature directly from the
Management Console.
Read the official Benthos documentation
for more information on how to use different components.
What are the limitations?
Benthos UMH excels in scalability, making it a robust choice for complex setups
managing large amounts of data. However, its initial learning curve may be steeper
due to its scripting language and a more hands-on approach to configuration.
As an alternative, Node-RED offers ease of use with its low-code approach and the
popularity of JavaScript. It’s particularly easy to start with, but as your setup grows,
it becomes harder to manage, leading to confusion and loss of oversight.
2.1.3.1 - Retrofitting with ifm IO-link master and sensorconnect
Upgrade older machines with ifm IO-Link master and Sensorconnect for seamless data collection and integration. Retrofit your shop floor with plug-and-play sensors for valuable insights and improved efficiency.
Retrofitting older machines with sensors is sometimes the only way to capture process-relevant information.
In this article, we will focus on retrofitting with ifm IO-Link master and
Sensorconnect, a microservice of the United Manufacturing Hub that finds and reads out ifm IO-Link masters in the
network and pushes sensor data to MQTT/Kafka for further processing.
When should I use it?
Retrofitting with an ifm IO-Link master such as the AL1350 and using Sensorconnect is ideal when dealing with older machines that are not
equipped with any connectable hardware to read relevant information out of the machine itself. By placing sensors on
the machine and connecting them to an IO-Link master, the required information can be gathered for valuable
insights. Sensorconnect helps to easily connect to all sensors correctly and properly capture the large
amount of sensor data provided.
What can I do with it?
With ifm IO-Link master and Sensorconnect, you can collect data from sensors and make it accessible for further use.
Sensorconnect offers:
Automatic detection of ifm IO-Link masters in the network.
Identification of IO-Link and alternative digital or analog sensors connected to the master using converters such as the DP2200.
Digital sensors employ a voltage range from 10 to 30 V DC, producing binary outputs of true or false. In contrast, analog sensors operate at 24 V DC, with a current range spanning from 4 to 20 mA. Using the appropriate converter, analog outputs can be transformed into digital signals.
Constant polling of data from the detected sensors.
Interpreting the received data based on a sensor database containing thousands of entries.
Sending data in JSON format to MQTT and Kafka for further data processing.
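The 4-20 mA analog conversion mentioned above is a simple linear mapping. A small sketch (the 0-10 bar sensor span is an assumed example, not a value from the documentation):

```javascript
// Map a 4-20 mA current-loop reading onto an engineering range.
// 4 mA corresponds to the range minimum, 20 mA to the maximum.
function scaleCurrentLoop(mA, min, max) {
    return min + ((mA - 4) / 16) * (max - min);
}

// Example: a pressure sensor spanning 0-10 bar (assumed range),
// where a 12 mA reading corresponds to mid-scale.
```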
How can I use it?
To use ifm IO-Link gateways and Sensorconnect, please follow these instructions:
Ensure all IO-Link gateways are in the same network or accessible from your instance of the United Manufacturing Hub.
Retrofit the machines by connecting the desired sensors and establish a connection with ifm IO-Link gateways.
Deploy the sensorconnect feature and configure the Sensorconnect IP range either to match the IP address using subnet notation /32 or, in cases involving multiple masters, to scan an entire range, for example /24. To deploy the feature and change the value, execute the following command with your IP range:
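Such a command is a helm upgrade with the appropriate values override. As a hedged sketch, with the sensorconnect values key shown only as a placeholder (the exact key differs between chart versions; consult the chart's values reference before running):

```shell
# <sensorconnect-iprange-key> is a placeholder, not a real values path --
# look up the actual sensorconnect settings for your chart version.
helm upgrade united-manufacturing-hub \
  united-manufacturing-hub/united-manufacturing-hub \
  -n united-manufacturing-hub \
  --set <sensorconnect-iprange-key>=192.168.10.0/24
```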
Once completed, the data should be available in your Unified Namespace.
What are the limitations?
The current ifm firmware has a software bug that causes the IO-Link master to crash if it receives too many requests.
To resolve this issue, you can either request an experimental firmware, which is available exclusively from ifm, or power-cycle the IO-Link gateway.
Integrate USB barcode scanners with United Manufacturing Hub’s barcodereader microservice for seamless data publishing to Unified Namespace. Ideal for inventory, order processing, and quality testing stations.
The barcodereader microservice enables the processing of barcodes from USB-linked scanner devices, subsequently publishing the acquired
data to the Unified Namespace.
When should I use it?
When you need to connect a barcode reader or any other USB device acting as a keyboard (HID). Typical cases include scanning an order
at the production machine from the accompanying order sheet, or scanning material for inventory and track-and-trace.
What can I do with it?
You can connect USB devices acting as a keyboard to the Unified Namespace. It will record all inputs and send them out once
a return/enter character has been detected. A lot of barcode scanners work that way. Additionally, you can also connect
something like a quality testing station (we once connected a Mitutoyo quality testing station).
How can I use it?
To use the barcodereader microservice, you will need to configure the Helm chart and enable it.
Enable the barcodereader feature by executing the following command:
During startup, it will show all connected USB devices. Note yours, and then change the INPUT_DEVICE_NAME and INPUT_DEVICE_PATH. Also set ASSET_ID, CUSTOMER_ID, etc., as the data will then be sent to the topic ia/ASSET_ID/.../barcode. You can change these values of the Helm chart using helm upgrade. You can find the list of parameters here. The following command should be executed, for example:
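As a loose illustration of such a helm upgrade, with the values keys shown only as placeholders (look up the actual parameter names in the chart's values reference):

```shell
# The keys in angle brackets are placeholders, not real values paths.
helm upgrade united-manufacturing-hub \
  united-manufacturing-hub/united-manufacturing-hub \
  -n united-manufacturing-hub \
  --set <barcodereader-enabled-key>=true \
  --set <barcodereader-device-path-key>=/dev/input/event3
```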
Scan a device, and it will be written into the topic ia/ASSET_ID/.../barcode.
Once installed, you can configure the microservice by
setting the needed environment variables. The program will continuously scan for barcodes using the device and publish
the data to the Kafka topic.
What are the limitations?
Sometimes special characters are not parsed correctly. They need to be adjusted afterward in the Unified Namespace.
This page describes the data infrastructure of the United Manufacturing Hub.
2.2.1 - Unified Namespace
Seamlessly connect and communicate across shopfloor equipment, IT/OT systems,
and microservices.
The Unified Namespace is a centralized, standardized, event-driven data
architecture that enables seamless integration and communication across
various devices and systems in an industrial environment. It operates on the
principle that all data, regardless of whether there is an immediate consumer,
should be published and made available for consumption. This means that any
node in the network can work as either a producer or a consumer, depending on
the needs of the system at any given time.
This architecture is the foundation of the United Manufacturing Hub, and you
can read more about it in the Learning Hub article.
When should I use it?
In our opinion, the Unified Namespace provides the best tradeoff for connecting
systems in manufacturing / shopfloor scenarios. It effectively eliminates the
complexity of spaghetti diagrams and enables real-time data processing.
While data can be shared through databases,
REST APIs,
or message brokers, we believe that a message broker approach is most suitable
for most manufacturing applications. Consequently, every piece of information
within the United Manufacturing Hub is transmitted via a message broker.
Both MQTT and Kafka are used in the United Manufacturing Hub. MQTT is designed
for safe message delivery between devices and simplifies gathering data on
the shopfloor. However, it is not designed for reliable stream processing.
Kafka, on the other hand, does not provide a simple way to collect data, but it
is well suited for contextualizing and processing it. Therefore, we combine the
strengths of both MQTT and Kafka. You can get more information from this article.
What can I do with it?
The Unified Namespace in the United Manufacturing Hub provides the following
functionalities and applications:
Seamless Integration with MQTT: Facilitates straightforward connection
with modern industrial equipment using the MQTT protocol.
Legacy Equipment Compatibility: Provides easy integration with older
systems using tools like Node-RED
or Benthos UMH,
supporting various protocols like Siemens S7, OPC-UA, and Modbus.
Real-time Notifications: Enables instant alerting and data transmission
through MQTT, crucial for time-sensitive operations.
Historical Data Access: Offers the ability to view and analyze past
messages stored in Kafka logs, which is essential for troubleshooting and
understanding historical trends.
Scalable Message Processing: Designed to handle large volumes of data
from many devices efficiently, ensuring reliable message delivery even
over unstable network connections. By using standard IT tools, we can
theoretically process data on the order of gigabytes per second instead of
messages per second.
Data Transformation and Transfer: Utilizes the
Data Bridge
to adapt and transmit data between different formats and systems, maintaining
data consistency and reliability.
Each feature opens up possibilities for enhanced data management, real-time
monitoring, and system optimization in industrial settings.
You can view the Unified Namespace by using the Management Console like in the picture
below. The picture shows data under the topic
umh/v1/demo-pharma-enterprise/Cologne/_historian/rainfall/isRaining, where
umh/v1 is a versioning prefix.
demo-pharma-enterprise is a sample enterprise tag.
Cologne is a sample site tag.
_historian is a schema tag. Data with this tag will be stored in the UMH’s database.
rainfall/isRaining is a sample schema dependent context, where rainfall is a tag group and
isRaining is a tag belonging to it.
The full tag name uniquely identifies a single tag; it can be found in the Publisher & Subscriber Info table.
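The hierarchy above can be sketched as a small parser (illustrative only; it assumes a topic that contains a _schema tag, as in the example):

```python
def parse_umh_topic(topic):
    """Split a UMH v1 topic into its parts, following the example above.
    Everything after the _schema tag is schema-dependent context."""
    parts = topic.split("/")
    prefix = "/".join(parts[0:2])          # versioning prefix, e.g. umh/v1
    # first part starting with "_" is the schema tag (assumed to exist)
    schema_idx = next(i for i, p in enumerate(parts) if p.startswith("_"))
    return {
        "prefix": prefix,
        "location": parts[2:schema_idx],   # enterprise, site, (area, ...)
        "schema": parts[schema_idx],
        "context": parts[schema_idx + 1:], # e.g. tag group and tag
    }

info = parse_umh_topic(
    "umh/v1/demo-pharma-enterprise/Cologne/_historian/rainfall/isRaining")
print(info["schema"], info["context"])  # _historian ['rainfall', 'isRaining']
```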
The above image showcases the Tag Browser, our main tool for navigating the Unified Namespace. It includes the
following features:
Data Aggregation: Automatically consolidates data from all connected instances / brokers.
Topic Structure: Displays the hierarchical structure of topics and which data belongs to which namespace.
Tag Folder Structure: Facilitates browsing through tag folders or groups within a single asset.
Schema Validation: Introduces validation for known schemas such as _historian. In case of validation
failure, the corresponding errors are displayed.
Tag Error Tracing: Enables error tracing within the Unified Namespace tree. When errors are detected in tags
or schemas, all affected nodes are highlighted with warnings, making it easier to track down the troubled
source tags or schemas.
Publisher & Subscriber Info: Provides various details, such as the origins and destinations of the data,
the instance it was published from, the messages per minute to get an overview of how much data is flowing,
and the full tag name to uniquely identify the selected tag.
Payload Visualization: Displays payloads under validated schemas in a formatted/structured manner, enhancing
readability. For unknown schemas without strict validation, the raw payload is displayed instead.
Tag Value History: Shows the last 100 received values for the selected tag, allowing you to track the
changes in the data over time. Keep in mind that this feature is only available for tags that are part of the
_historian schema.
Example SQL Query: Generates example SQL queries based on the selected tag, which can be used to query the
data in the UMH’s database or in Grafana for visualization purposes.
Kafka Origin: Provides information about the Kafka key, topic and the actual payload that was sent via Kafka.
It’s important to note that data displayed in the Tag Browser represent snapshots; hence, data sent at
intervals shorter than 10 seconds may not be accurately reflected.
You can find more detailed information about the topic structure here.
You can also use tools like MQTT Explorer
(not included in the UMH) or Redpanda Console (enabled by default, accessible
via port 8090) to view data from a single instance only.
How can I use it?
To effectively use the Unified Namespace in the United Manufacturing Hub, start
by configuring your IoT devices to communicate with the UMH’s MQTT broker,
considering the necessary security protocols. While MQTT is recommended for
gathering data on the shopfloor, you can send messages to Kafka as well.
Once the devices are set up, handle the incoming data messages using tools like
Node-RED
or Benthos UMH. This step involves
adjusting payloads and topics as needed. It’s also important to understand and
follow the ISA95 standard model for data organization, using JSON as the
primary format.
Additionally, the Data Bridge
microservice plays a crucial role in transferring and transforming data between
MQTT and Kafka, ensuring that it adheres to the UMH data model. You can
configure a merge point to consolidate messages from multiple MQTT topics into
a single Kafka topic. For instance, if you set a merge point of 3, the Data
Bridge will consolidate messages from more detailed topics like
umh/v1/plant1/machineA/temperature into a broader topic like umh/v1/plant1.
This process helps in organizing and managing data efficiently, ensuring that
messages are grouped logically while retaining key information for each topic
in the Kafka message key.
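The merge-point behavior described above can be sketched as follows (illustrative; the actual Data Bridge implementation and its separator conventions may differ):

```python
def apply_merge_point(mqtt_topic, merge_point):
    """Sketch of the Data Bridge merge-point idea: the first `merge_point`
    topic levels become the broader Kafka topic, and the remaining levels
    are retained in the Kafka message key."""
    parts = mqtt_topic.split("/")
    kafka_topic = "/".join(parts[:merge_point])
    kafka_key = "/".join(parts[merge_point:])
    return kafka_topic, kafka_key

# With a merge point of 3, the detailed MQTT topic collapses as in the text.
topic, key = apply_merge_point("umh/v1/plant1/machineA/temperature", 3)
print(topic, key)  # umh/v1/plant1 machineA/temperature
```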
Recommendation: Send messages from IoT devices via MQTT and then work in
Kafka only.
What are the limitations?
While JSON is the only supported payload format due to its accessibility, it’s
important to note that it can be more resource-intensive compared to formats
like Protobuf or Avro.
2.2.2 - Historian / Data Storage
Learn how the United Manufacturing Hub’s Historian feature provides reliable data storage and analysis for your manufacturing data.
The Historian / Data Storage feature in the United Manufacturing Hub provides
reliable data storage and analysis for your manufacturing data. Essentially, a
Historian is just another term for a data storage system, designed specifically
for time-series data in manufacturing.
When should I use it?
If you want to reliably store data from your shop floor, and that storage does
not need to fulfill regulatory requirements such as GxP, we recommend the United Manufacturing Hub’s
Historian feature. In our opinion, open-source databases such as TimescaleDB are
superior to traditional historians
in terms of reliability, scalability and maintainability,
but can be challenging to use for the OT engineer. The United Manufacturing Hub
fills this usability gap, allowing OT engineers to easily ingest, process, and
store data permanently in an open-source database.
What can I do with it?
The Historian / Data Storage feature of the United Manufacturing Hub allows you
to:
Store and analyze data
Store data in TimescaleDB by using either the
_historian
or _analytics schemas
in the topics within the Unified Namespace.
Data can be sent to the Unified Namespace
from various sources,
allowing you to store tags from your PLC and production lines reliably.
Optionally, you can use tag groups to manage a large number of
tags and reduce the system load.
Our Data Model page
assists you in learning data modeling in the Unified Namespace.
Conduct basic data analysis, including automatic downsampling, gap filling,
and statistical functions such as Min, Max, and Avg.
Query and visualize data
Query data in an ISA95 compliant model,
from enterprise to site, area, production line, and work cell.
Visualize your data in Grafana to easily monitor and troubleshoot your
production processes.
Compress and retain data to reduce database size using various techniques.
How can I use it?
To store your data in TimescaleDB, simply use the _historian or _analytics schemas in your Data Model v1
compliant topic. This can be directly done in the OPC UA data source
when the data is first inserted into the stack. Alternatively, it can be handled
in Node-RED, which is useful if you’re still utilizing the old data model,
or if you’re gathering data from non-OPC UA sources via Node-RED or
sensorconnect.
Data sent with a different _schema will not be stored in
TimescaleDB.
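A minimal _historian message can be sketched as follows (a sketch, assuming the JSON payload format with a millisecond timestamp described in the Data Model pages; the enterprise, site, and tag names are illustrative):

```python
import json
import time

# Example _historian message: the topic carries the location and schema,
# the JSON payload carries a millisecond timestamp plus tag/value pairs.
topic = "umh/v1/demo-pharma-enterprise/Cologne/_historian"
payload = {
    "timestamp_ms": int(time.time() * 1000),
    "temperature": 23.5,
}
message = json.dumps(payload)
print(topic, message)
```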
Data stored in TimescaleDB can be viewed in Grafana. An example can be found in
the Get Started guide.
In Grafana you can select tags by using SQL queries. Here, you see an example:
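A query of the kind meant here can be sketched as follows (assembled in Python for easy parameterization; the get_asset_id_immutable arguments and the tag name are illustrative, so check the linked database page for the exact signature):

```python
# Build an example Grafana query against the tag table. The argument list
# of get_asset_id_immutable is illustrative, not its authoritative signature.
asset_args = "'demo-pharma-enterprise', 'Cologne'"
tag_name = "temperature"

query = (
    'SELECT "timestamp", value\n'
    "FROM tag\n"
    f"WHERE asset_id = get_asset_id_immutable({asset_args})\n"
    f"  AND name = '{tag_name}'\n"
    'ORDER BY "timestamp" DESC;'
)
print(query)
```

The resulting string can be pasted into the Grafana query editor in Code mode.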
get_asset_id_immutable is a custom plpgsql function that we provide to simplify the
process of querying tag data from a specific asset. To learn more about our
database, visit this page.
Also, you have the option to query data in your custom code by utilizing the
API in factoryinsight or
processing the data in the
Unified Namespace.
For more information about what exactly is behind the Historian feature, check
out our architecture page.
What are the limitations?
To store messages, you must transform the data to follow our topic
structure. The payload must be in JSON using
a specific format,
and the message must be tagged with _historian.
Learn more about the United Manufacturing Hub’s architecture by visiting
our architecture page.
Learn more about our Data Model by visiting this page.
Learn more about our database for _historian schema by visiting
our documentation.
2.2.3 - Shopfloor KPIs / Analytics (v1)
The Shopfloor KPI/Analytics feature of the United Manufacturing Hub provides equipment-based KPIs, configurable dashboards, and detailed analytics for production transparency. Configure OEE calculation and track root causes of low OEE using drill-downs. Easily ingest, process, and analyze data in Grafana.
The Shopfloor KPI / Analytics feature of the United Manufacturing Hub provides a configurable and plug-and-play approach to create “Shopfloor Dashboards” for production transparency consisting of various KPIs and drill-downs.
When should I use it?
If you want to create production dashboards that are highly configurable and can drill down into specific KPIs, the Shopfloor KPI / Analytics feature of the United Manufacturing Hub is an ideal choice. This feature is designed to help you quickly and easily create dashboards that provide a clear view of your shop floor performance.
What can I do with it?
The Shopfloor KPI / Analytics feature of the United Manufacturing Hub allows you to:
Query and visualize
In Grafana, you can:
Calculate the OEE (Overall Equipment Effectiveness) and view trends over time
Availability is calculated using the formula (plannedTime - stopTime) / plannedTime, where plannedTime is the duration of all machine states that do not belong in the Availability or Performance category, and stopTime is the duration of all machine states configured to be an availability stop.
Performance is calculated using the formula runningTime / (runningTime + stopTime), where runningTime is the duration of all machine states that consider the machine to be running, and stopTime is the duration of all machine states that are considered a performance loss. Note that this formula does not take into account losses caused by letting the machine run at a lower speed than possible. To approximate this, you can use the LowSpeedThresholdInPcsPerHour configuration option (see further below).
Quality is calculated using the formula good pieces / total pieces
Drill down into stop reasons (including histograms) to identify the root-causes for a potentially low OEE.
List all produced and planned orders including target vs actual produced pieces, total production time, stop reasons per order, and more using job and product tables.
See machine states, shifts, and orders on timelines to get a clear view of what happened during a specific time range.
View production speed and produced pieces over time.
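The three OEE factors can be expressed directly in code (a sketch following the formulas above; taking the overall OEE as the product of the three factors is the usual convention):

```python
def oee_factors(planned_time, availability_stop_time,
                running_time, performance_stop_time,
                good_pieces, total_pieces):
    """Compute the three OEE factors exactly as defined above.
    Durations can be in any consistent unit (e.g. seconds)."""
    availability = (planned_time - availability_stop_time) / planned_time
    performance = running_time / (running_time + performance_stop_time)
    quality = good_pieces / total_pieces
    return availability, performance, quality

# Example: an 8-hour planned shift with 1 h of availability stops,
# 6 h running plus 1 h of performance stops, and 950/1000 good pieces.
a, p, q = oee_factors(28800, 3600, 21600, 3600, 950, 1000)
oee = a * p * q  # overall OEE as the product of the three factors
print(round(a, 3), round(p, 3), round(q, 2), round(oee, 4))  # 0.875 0.857 0.95 0.7125
```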
Configure
In the database, you can configure:
Stop Reasons Configuration: Configure which stop reasons belong into which category for the OEE calculation and whether they should be included in the OEE calculation at all. For instance, some companies define changeovers as availability losses, some as performance losses. You can easily move them into the correct category.
Automatic Detection and Classification: Configure whether to automatically detect/classify certain types of machine states and stops:
AutomaticallyIdentifyChangeovers: If the machine state was an unspecified machine stop (UnknownStop), but an order was recently started, the time between the start of the order until the machine state turns to running, will be considered a Changeover Preparation State (10010). If this happens at the end of the order, it will be a Changeover Post-processing State (10020).
MicrostopDurationInSeconds: If an unspecified stop (UnknownStop) has a duration smaller than a configurable threshold (e.g., 120 seconds), it will be considered a Microstop State (50000) instead. Some companies put small unknown stops into a different category (performance) than larger unknown stops, which usually end up in the availability loss bucket.
IgnoreMicrostopUnderThisDurationInSeconds: In some cases, the machine can actually stop for a couple of seconds in routine intervals, which might be unwanted as it makes analysis difficult. One can set a threshold to ignore microstops that are smaller than a configurable threshold (usually like 1-2 seconds).
MinimumRunningTimeInSeconds: Same logic if the machine is running for a couple of seconds only. With this configurable threshold, small run-times can be ignored. These can happen, for example, during the changeover phase.
ThresholdForNoShiftsConsideredBreakInSeconds: If no shift was planned, an UnknownStop will always be classified as a NoShift state. Some companies move smaller NoShift states into their own category called “Break” and count them toward either Availability or Performance.
LowSpeedThresholdInPcsPerHour: For a simplified performance calculation, a threshold can be set, and if the machine has a lower speed than this, it could be considered a LowSpeedState and could be categorized into the performance loss bucket.
Language Configuration: The language of the machine states can be configured using the languageCode configuration option (or overwritten in Grafana).
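The threshold-based classification options above can be sketched as follows (a simplified illustration with hypothetical helper names; the real classification happens inside the UMH backend and involves more states):

```python
def classify_unknown_stop(duration_s, order_running_nearby, config):
    """Simplified sketch of the automatic stop classification described
    above: changeover detection first, then microstop thresholds."""
    if order_running_nearby and config["AutomaticallyIdentifyChangeovers"]:
        return "Changeover"
    if duration_s < config["IgnoreMicrostopUnderThisDurationInSeconds"]:
        return "Ignored"
    if duration_s < config["MicrostopDurationInSeconds"]:
        return "Microstop (50000)"
    return "UnknownStop"

config = {
    "AutomaticallyIdentifyChangeovers": True,
    "IgnoreMicrostopUnderThisDurationInSeconds": 2,
    "MicrostopDurationInSeconds": 120,
}
print(classify_unknown_stop(45, False, config))   # Microstop (50000)
print(classify_unknown_stop(300, False, config))  # UnknownStop
```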
Monitor and maintain your manufacturing processes with real-time Grafana alerts from the United Manufacturing Hub. Get notified of potential issues and reduce downtime by proactively addressing problems.
The United Manufacturing Hub utilizes a TimescaleDB database, which is based
on PostgreSQL. Therefore, you can use the PostgreSQL plugin in Grafana to
implement and configure alerts and notifications.
Why should I use it?
Alerts based on real-time data enable proactive problem detection.
For example, you will receive a notification if the temperature of machine
oil or an electrical component of a production line exceeds limitations.
By utilizing such alerts, you can schedule maintenance, enhance efficiency,
and reduce downtime in your factories.
What can I do with it?
Grafana alerts help you keep an eye on your production and manufacturing
processes. By setting up alerts, you can quickly identify problems,
ensuring smooth operations and high-quality products.
An example of using alerts is the tracking of the temperature
of an industrial oven. If the temperature goes too high or too low, you
will get an alert, and the responsible team can take action before any damage
occurs. Alerts can be configured in many different ways, for example,
to set off an alarm if a maximum is reached once or if it exceeds a limit when
averaged over a time period. It is also possible to include several values
to create an alert, for example if a temperature surpasses a limit and/or the
concentration of a component is too low. Notifications can be sent
simultaneously across many services like Discord, Mail, Slack, Webhook,
Telegram, or Microsoft Teams. It is also possible to forward the alert with
SMS over a personal Webhook. A complete list can be found on the
Grafana page
about alerting.
How can I use it?
Follow this tutorial to set up an alert.
Alert Rule
When creating an alert, you first have to set the alert rule in Grafana. Here
you set a name, specify which values are used for the rule, and
when the rule is fired. Additionally, you can add labels for your rules,
to link them to the correct contact points. You have to use SQL to select the
desired values.
To add a new rule, hover over the bell symbol on the left and click on Alert rules.
Then click on the blue Create alert rule button.
Choose a name for your rule.
In the next step, you need to select and manipulate the value that triggers your alert and declare the function for the alert.
Subsection A is, by default, the selection of your values. You can use the Grafana builder for this, but it is not useful, as it cannot select a time interval even though there is a selector for it. If you choose, for example, the last 20 seconds, your query will select values from hours ago. Therefore, it is necessary to use SQL directly. To add commands manually, switch to Code in the right corner of the section.
First, you must select the value you want to create an alert for. In the United Manufacturing Hub’s data structure, a process value is stored in the table tag. Unfortunately, Grafana cannot differentiate between different values of the same sensor; if you select the ConcentrationNH3 value from the example and more than one of the selected values violates your rule in the selected time interval, it will trigger multiple alerts. Because Grafana is not able to tell the alerts apart, this results in errors. To solve this, you need to add the value "timestamp" to the Select part. So the first part of the SQL command is: SELECT value, "timestamp".
The source is tag, so add FROM tag at the end.
The different values are distinguished by the name column in the tag table, so add WHERE name = '<key-name>' to select only the value you need. If you followed the Get Started guide, you can use temperature as the name.
Since the selection of the time interval in Grafana is not working, you must add this manually as an addition to the WHERE command: AND "timestamp" > (NOW() - INTERVAL 'X seconds'). X is the number of past seconds you want to query. It’s not useful to set X to less than 10 seconds, as this is the fastest interval Grafana can check your rule, and you might miss values.
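Assembled from the steps above, the complete query looks like this (sketched in Python for easy parameterization; temperature and the 60-second window are example values):

```python
# Assemble the alert query built up in the steps above (example values).
key_name = "temperature"  # tag name from the Get Started guide
window_s = 60             # look-back window; keep it above ~10 seconds

query = (
    'SELECT value, "timestamp" '
    "FROM tag "
    f"WHERE name = '{key_name}' "
    f"AND \"timestamp\" > (NOW() - INTERVAL '{window_s} seconds')"
)
print(query)
```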
In subsection B, you need to reduce the values to numbers Grafana can work with. By default, Reduce will already be selected. However, you can change it to a different option by clicking the pencil icon next to the letter B. For this example, we will create an upper limit, so selecting Max as the Function is the best choice. Set Input as A (the output of the first section) and choose Strict for the Mode. Subsection B will then output the maximum value the query in A selects, as a single number.
In subsection C, you can establish the rule. If you select Math, you can utilize expressions like $B > 120 to trigger an alert when the value from section B ($B means the output from section B) exceeds 120. In this case, only the largest value selected in A is passed through the Reduce function from B to C. A simpler way to set such a limit is by choosing Threshold instead of Math.
To add more queries or expressions, find the buttons at the end of section two and click on the desired option. You can also preview the results of your queries and functions by clicking on Preview and check if they function correctly and fire an alert.
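The Reduce and Math steps from subsections B and C boil down to the following (example data; the 120 matches the expression above):

```python
# Sketch of the Reduce (Max) + Threshold pipeline from subsections B and C.
selected_values = [112.0, 118.5, 123.7]  # output of query A (example data)

b = max(selected_values)  # subsection B: Reduce with Function = Max
fires_alert = b > 120     # subsection C: Math expression $B > 120
print(b, fires_alert)     # 123.7 True
```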
Define the rule location, the time interval for rule checking, and the duration for which the rule has to be broken before an alert is triggered.
Select a name for your rule’s folder or add it to an existing one by clicking the arrow. Find all your rules grouped in these folders on the Alert rules page under Alerting.
An Evaluation group is a grouping of rules, which are checked after the same time interval. Creating a new group requires setting a time interval for rule checking. The minimum interval from Grafana is ten seconds.
Specify the duration the rule must be violated before triggering the alert. For example, with a ten-second check interval and a 20-second duration, the rule must be broken twice in a row before an alert is fired.
Add details and descriptions for your rule.
In the next step, you will be required to assign labels to your alert, ensuring it is directed to the appropriate contacts. For example, you may designate a label team with alertrule1: team = operator and alertrule2: team = management. It can be helpful to use labels more than once, like alertrule3: team = operator, to link multiple alerts to a contact point at once.
Your rule is now complete; click Save and Exit in the upper right corner, next to section one.
Contact Point
In a contact point you create a collection of addresses and services that
should be notified in case of an alert. This could be a Discord channel or
Slack for example. When a linked alert is triggered, everyone within the
contact point receives a message. The messages can be preconfigured and are
specific to every service or contact. Follow these steps to create a contact point.
Navigate to Contact points, located at the top of the Grafana alerting page.
Click on the blue + Add contact point button.
Now you should see the settings page. Choose a name for your contact point.
Pick the receiving service; in this example, Discord.
Generate a new Webhook in your Discord server (Server Settings ⇒ Integrations ⇒ View Webhooks ⇒ New Webhook or create Webhook). Assign a name to the Webhook and designate the messaging channel. Copy the Webhook URL from Discord and insert it into the corresponding field in Grafana. Customize the message to Discord under Optional Discord settings if desired.
If you need, add more services to the contact point, by clicking + Add contact point integration.
Save the contact point; you can see it in the Contact points list, below the grafana-default-email contact point.
Notification Policies
In a notification policy, you establish the connection of a contact point
with the desired alerts. To add the notification policy, you need to do the following steps.
Go to the Notification policies section in the Grafana alerting page, next to the Contact points.
Select + New specific policy to create a new policy, followed by + Add matcher to choose the label and value from the alert (for example team = operator). In this example, both alert1 and alert3 will be forwarded to the associated contact point. You can include multiple labels in a single notification policy.
Choose the contact point designated to receive the alert notifications. Now, the inputs should be like in the picture.
Press Save policy to finalize your settings. Your new policy will now be displayed in the list.
Mute Timing
In case you do not want to receive messages during a recurring time
period, you can add a mute timing to Grafana. You can set up a mute timing in the Notification policies section.
Select + Add mute timing below the notification policies.
Choose a name for the mute timing.
Specify the time during which notifications should not be forwarded.
Times must be given in UTC and formatted as HH:MM. Use 06:00 instead of 6:00 to avoid an error in Grafana.
You can combine several time intervals into one mute timing by clicking on the + Add another time interval button at the end of the page.
Click Submit to save your settings.
To apply the mute timing to a notification policy, click Edit on the right side of the notification policy, and then select the desired mute timing from the drop-down menu at the bottom of the policy. Click on Save Policy to apply the change.
Silence
You can also add silences for a specific time frame and labels, in case
you only want to mute alerts once. To add a silence, switch to the Silences section, next to Notification policies.
Click on + Add Silence.
Specify the beginning for the silence and its duration.
Select the labels and their values you want silenced.
If you need, you can add a comment to the silence.
Click the Submit button at the bottom of the page.
What are the limitations?
It can be complicated to select and manipulate the desired values to create
the correct function for your application. Grafana cannot
differentiate between data points of the same source. Suppose, for example, you
want to set a temperature threshold based on a single sensor.
If your query selects the last three values and two of them are above the
threshold, Grafana will fire two alerts which it cannot tell apart.
This results in errors. You have to configure the rule to reduce the selected
values to only one per source to avoid this.
It can be complicated to create such a specific rule with this limitation, and
it requires some testing.
Another thing to keep in mind is that alerts can only work with data from
the database. They also do not work with the machine status: these values only
exist in a raw, unprocessed form in TimescaleDB and are not processed through
an API like process values.
Whether you have a bare-metal server, an edge device, or a virtual machine,
you can easily provision the whole United Manufacturing Hub.
Choose to deploy only the Data Infrastructure on an existing OS, or provision
the entire Device & Container Infrastructure, OS included.
What can I do with it?
You can leverage our custom iPXE bootstrapping process to install the Flatcar
operating system, along with the Device & Container Infrastructure and the
Data Infrastructure.
If you already have an operating system installed, you can use the Management
Console to provision the Data Infrastructure on top of it. You can also choose
to use an existing UMH installation and only connect it to the Management
Console.
How can I use it?
If you need to install the operating system from scratch, you can follow the
Flatcar Installation guide,
which will help you to deploy the default version of the United Manufacturing
Hub.
Contact our Sales Team to get help on
customizing the installation process in order to fit your enterprise needs.
If you already have an operating system installed, you can follow the
Getting Started guide to provision the Data
Infrastructure and set up the Management Companion agent on your system.
What are the limitations?
Provisioning the Device & Container Infrastructure requires manual interaction
and is not yet available from the Management Console.
ARM systems are not supported.
Where to get more information?
The Get Started! guide assists you in setting up
the United Manufacturing Hub.
Monitor and manage both the Data and the Device & Container Infrastructures using the Management Console.
The Management Console helps you monitor and manage the Data Infrastructure
and the Device & Container Infrastructure.
When should I use it?
Once initial deployment of the United Manufacturing Hub is completed, you can
monitor and manage it using the Management Console. If you have not deployed yet,
navigate to the Get Started! guide.
What can I do with it?
You can monitor the statuses of the following items using the Management Console:
Modules: A Module refers to a grouped set of related Kubernetes components
like Pods, StatefulSets, and Services. It provides a way to monitor and manage
these components as a single unit.
System:
Resource Utilization: CPU, RAM, and disk usage.
OS information: the operating system in use, kernel version, and instruction
set architecture.
Datastream: the rate of Kafka/TimescaleDB messages per second, and the health of connections and data sources.
Kubernetes: the number of error events and the deployed versions of the
Management Companion and UMH.
In addition, you can check the topic structure used by data sources and the
corresponding payloads.
Moreover, you can create a new connection and initialize it to deploy a
data source.
How can I use it?
From the Component View's overview tab, you can click on and open each status.
The Connection Management tab shows the status of all the instance's connections and their associated
data sources. You can also create new connections and initialize them.
Read more about the Connection Management in the Connectivity section.
The Tag Browser provides a comprehensive view of the tag structure, allowing automation engineers to
manage and navigate through all their tags without concerning themselves with underlying technical complexities,
such as topics, keys or payload structures.
Tags typically represent variables associated with devices in an ISA-95 model.
For instance, it could represent a temperature reading from a specific sensor or a status indication from
a machine component. These tags are transported through various technical methods across the Unified
Namespace (UNS) into the database. This includes organizing them within a folder structure or embedding them
as JSON objects within the message payload. Tags can be sent to the same topic or distributed across various sub-topics.
Due to the nature of MQTT and Kafka, the topics may differ, but the following formula applies:
MQTT Topic = Kafka topic + Kafka Key
The Kafka topic and key depend on the configured merge point, read more about it
here.
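As a sketch, the formula above can be expressed in code. This assumes `.` as the hierarchy separator in both the Kafka topic and the Kafka key, and `/` for MQTT, as described in this documentation; the helper name is our own:

```python
def kafka_to_mqtt_topic(kafka_topic: str, kafka_key: str) -> str:
    """Join a Kafka topic and key into the equivalent MQTT topic.

    Kafka uses '.' as the hierarchy separator, MQTT uses '/'.
    """
    parts = kafka_topic.split(".")
    if kafka_key:
        parts += kafka_key.split(".")
    return "/".join(parts)

# With a merge point after the site level, the tag levels live in the key:
print(kafka_to_mqtt_topic("umh.v1.acme.cologne", "cnc-cutter._historian.head"))
# umh/v1/acme/cologne/cnc-cutter/_historian/head
```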
Presently, removing a UMH instance from the Management Console is not supported.
After overwriting an instance, the old one will display an offline status.
Where to get more information?
The Get Started! guide assists you in setting up
the United Manufacturing Hub.
Understand the purpose and features of the UMH Lite, as well as the differences between UMH Lite and UMH Classic.
If you are already using a Unified Namespace, or have a Kafka or MQTT broker, you might want to try out the basic features of UMH. For this purpose, the UMH Lite installation is available.
When should I use it?
If you want the full-featured UMH experience, we recommend installing the Classic version. This version provides a comprehensive suite of features, including analytics, data visualization, message brokers, alerting, and more. Below, you can see a comparison of the features between the two versions.
What can I do with it?
Differences between UMH Classic and Lite
| Feature | Classic | Lite |
| --- | --- | --- |
| Connectivity | | |
| OPC UA | ✓ | ✓ |
| Node-RED | ✓ | |
| Data Infrastructure | | |
| Historian | ✓ | |
| Analytics | ✓ | |
| Data Visualization | ✓ | |
| UNS (Kafka and MQTT) | ✓ | |
| Alerting | ✓ | |
| UMH Data Model v1 | ✓ | ✓ |
| Tag Browser for your UNS | ✓ | |
| Device & Container Infrastructure | | |
| Network Monitoring | ✓ | ✓ |
Connect devices and add protocol converters
You can connect external devices like a PLC with an OPC UA server to a running UMH Lite instance and contextualize the data from it with a protocol converter.
For contextualization, you have to use the UMH Data Model v1.
Send data to your own infrastructure
All the data that your instance gathers is sent to your own data infrastructure. You can configure a target MQTT broker in the instance settings by clicking on it in the Management Console.
Monitor your network health
By using the UMH Lite in conjunction with the Management Console, you can spot errors in the network. If a connection is faulty, the Management Console will mark it.
How can I use it?
To add a new UMH Lite instance, simply follow the regular installation process and select UMH Lite instead of UMH Classic. You can follow the next steps in the linked guide to learn how to connect devices and add a protocol converter.
Convert to UMH Classic
Should you find the UMH Lite insufficient and require the features offered by UMH Classic, you can upgrade through the Management Console. To convert a UMH Lite instance to a UMH Classic instance:
Go to the Management Console.
Navigate to Component View.
Select the instance from the list.
Click on ‘Instance Settings’.
You will find an option to convert your instance to Classic.
This change will preserve the configurations of your devices and protocol converters: Their data continues to be forwarded to your initial MQTT broker, while also becoming accessible within your new Unified Namespace and database.
Any protocol converters introduced post-upgrade will also support the original MQTT broker as an additional output. You can manually remove the original MQTT broker as an output after the upgrade. Once removed, data will no longer be forwarded to the initial MQTT broker.
What are the limitations?
The UMH Lite is a very basic version and only offers you the gathering and contextualization of your data as well as the monitoring of the network. Other features like a historian, data visualization, and a Unified Namespace are only available by using the UMH Classic.
Additionally, converting to a UMH Classic requires an SSH connection, similar to what is needed during the initial installation.
Read about our Data Model to keep your data organized and contextualized.
2.3.4 - Layered Scaling
Efficiently scale your United Manufacturing Hub deployment across edge devices and servers using Layered Scaling.
Layered Scaling is an architectural approach in the United Manufacturing Hub that enables efficient scaling of your
deployment across edge devices and servers. It is part of the plant-centric infrastructure.
By dividing the processing workload across multiple layers or tiers, each
with a specific set of responsibilities, Layered Scaling allows for better management of resources,
improved performance, and easier deployment of software components.
Layered Scaling follows standard IoT infrastructure patterns, additionally connecting many IoT devices, typically via MQTT.
When should I use it?
Layered Scaling is ideal when:
You need to process and exchange data close to the source for latency reasons and independence from internet and
network outages. For example, if you are taking pictures locally, analyzing them using machine learning, and then
scrapping the product if the quality is poor. In this case, you don’t want the machine to be idle if something happens
in the network. Also, it would not be acceptable for a message to arrive a few hundred milliseconds later, as the
process is quicker than that.
It can be useful not to send high-frequency data to the "higher" instance and store it there, as this can put
unnecessary stress on those instances; an edge device can take care of it instead. For example, you are taking and
processing images (e.g., for quality reasons) or using an accelerometer and microphone for predictive maintenance on
the machine, and do not want to send 20 kHz (20,000 samples per second) data streams to the next instance.
Organizational reasons. For the OT person, it might be better to configure the data contextualization using Node-RED
directly at the production machine. They could experiment with it, configure it without endangering other machines,
and see immediate results (e.g., when they move the position of a sensor). If the instance is “somewhere in IT,”
they may feel they do not have control over it anymore and that it is not their system.
What can I do with it?
With Layered Scaling in the United Manufacturing Hub, you can:
Deploy minimized versions of the Helm Chart on edge devices, focusing on specific features required for that
environment (e.g., without the Historian and Analytics features enabled, but with the IFM retrofitting feature using
sensorconnect, with the barcodereader retrofit feature using
barcodereader, or with the data connectivity via Node-RED feature enabled).
Seamlessly communicate between edge devices, on-premise servers, and cloud instances using the kafka-bridge
microservice, allowing data to be buffered in between in case the internet or network connection drops.
Allow plant-overarching analysis / benchmarking, multi-plant KPIs, connections to enterprise IT, etc.
We typically recommend sending only data processed by our API factoryinsight.
How can I use it?
To implement Layered Scaling in the United Manufacturing Hub:
Deploy a minimized version of the Helm Chart on edge devices, tailored to the specific features required for that
environment. You can either install the whole version using Flatcar and then disable the functionalities you do not
need, or use the Management Console. If the feature is not available in the Management Console, ask in our Discord
and we can provide you with a token to enter during the Flatcar installation, so that your edge devices are
pre-configured to your needs (incl. demilitarized zones, multiple networks, etc.).
Deploy the full Helm Chart with all features enabled on a central instance, such as a server.
Configure the Kafka Bridge microservice to transmit data from the edge devices to the central instance for further
processing and analysis.
For MQTT connections, you can simply connect external devices via MQTT, and the data will end up in Kafka directly. To
connect on-premise servers with the cloud (plant-overarching architecture), you can use kafka-bridge, or write a
service in Benthos or Node-RED that regularly fetches data from factoryinsight and pushes it into your cloud instance.
What are the limitations?
Be aware that each device increases the complexity over the entire system. We recommend using the
Management Console to manage them centrally.
Because Kafka is used to reliably transmit messages from the edge devices to the server, and it struggles with devices
repeatedly going offline and online again, Ethernet connections should be used. Also, the total number of edge devices
should not "escalate". If you have a lot of edge devices (e.g., you want to connect each PLC), we recommend connecting
them via MQTT to an instance of the UMH instead.
Where to get more information?
Learn more about the United Manufacturing Hub’s architecture by visiting our architecture page.
For more information about the Helm Chart and how to deploy it, refer to the Helm Chart documentation.
To get an overview of the microservices in the United Manufacturing Hub, check out the microservices documentation.
2.3.5 - Upgrading
Discover how to keep your UMH instances up-to-date.
Upgrading is a vital aspect of maintaining your United Manufacturing Hub (UMH)
instance. This feature ensures that your UMH environment stays current, secure,
and optimized with the latest enhancements. Explore the details below to make
the most of the upgrading capabilities.
When Should I Use It?
Upgrade your UMH instance whenever a new version is released to access the
latest features, improvements, and security enhancements. Regular upgrades
are recommended for a seamless experience.
What Can I Do With It?
Enhance your UMH instance in the following ways:
Keep it up-to-date with the latest features and improvements.
Enhance security and performance.
Take advantage of new functionalities and optimizations introduced in each
release.
How Can I Use It?
To upgrade your UMH instance, follow the detailed instructions provided in the
Upgrading Guide.
What Are The Limitations?
As of now, the upgrade process for the UMH stack is not integrated into the
Management Console and must be performed manually.
Ensure compatibility with the recommended prerequisites before initiating an
upgrade.
3 - Concepts
The Concepts section helps you learn about the parts of the United
Manufacturing Hub system, and helps you obtain a deeper understanding of
how it works.
3.1 - Security
3.1.1 - Management Console
Concepts related to the security of the Management Console.
The web-based nature of the Management Console means that it is exposed to the
same security risks as any other web application. This section describes the
measures that we adopt to mitigate these risks.
Encrypted Communication
The Management Console is served over HTTPS, which means that all communication
between the browser and the server is encrypted. This prevents attackers from
eavesdropping on the communication and stealing sensitive information such as
passwords and session cookies.
Cyphered Messages
This feature is currently in development and is subject to change.
Other than the standard TLS encryption provided by HTTPS, we also provide an
additional layer of encryption for the messages exchanged between the Management
Console and your UMH instance.
Every action that you perform on the Management Console, such as creating a new
data source, and every information that you retrieve, such as the messages in
the Unified Namespace, is encrypted using a secret key that is only known to
you and your UMH instance. This ensures that no one, not even us, can see, read
or reverse engineer the content of these messages.
The process we use (which is now patent pending) is simple yet effective:
When you create a new user on the Management Console, we generate a new
private key and we encrypt it using your password. This means that only you
can decrypt it.
The encrypted private key and your hashed password are stored in our database.
When you log in to the Management Console, the encrypted private key associated
with your user is downloaded to your browser and decrypted using your
password. This ensures that your password is never sent to our server, and
that the private key is only available to you.
When you add a new UMH instance to the Management Console, it generates a
token that the Management Companion (aka your instance) will use to
authenticate itself. This token works the same way as your user password: it
is used to encrypt a private key that only the UMH instance can decrypt.
The instance encrypted private key and the hashed token are stored in our
database. A relationship is also created between the user and the instance.
All the messages exchanged between the Management Console and the UMH
instance are encrypted using the private keys, and then encrypted again using
the TLS encryption provided by HTTPS.
The only drawback to this approach is that, if you forget your password, we
won’t be able to recover your private key. This means that you will have to
create a new user and reconfigure all your UMH instances. But your data will
still be safe and secure.
However, even though we are unable to read any private key, there is some information
that we can inevitably see:
IP addresses of the devices using the Management Console and of the UMH instances
that they are connected to
The time at which the devices connect to the Management Console
Amount of data exchanged between the devices and the Management Console (but
not its content)
4 - Data Model (v1)
This page describes the data model of the UMH stack - from the message payloads up to database tables.
The Data Infrastructure of the UMH consists of three components: Connectivity, Unified Namespace, and Historian (see also Architecture). Each component has its own standards and best practices, so a consistent data model across
multiple building blocks needs to combine all of them.
If you would like to learn more about our data model & ADRs, check out our learn article.
Connectivity
Incoming data is often unstructured, so our standard allows either conformant data in our _historian schema, or any kind of data in any other schema.
Our key considerations were:
Event driven architecture: We only look at changes, reducing network and system load
Ease of use: We allow any data in, allowing OT & IT to process it as they wish
The UNS employs MQTT and Kafka in a hybrid approach, utilizing MQTT for efficient data collection and Kafka for robust data processing.
The UNS is designed to be reliable, scalable, and maintainable, facilitating real-time data processing and seamless integration or removal of system components.
These elements are the foundation for our data model in UNS:
Incoming data based on OT standards: Data needs to be contextualized here not by IT people, but by OT people.
They want to model their data (topic hierarchy and payloads) according to ISA-95, Weihenstephaner Standard, Omron PackML, Euromap84, (or similar) standards, and need e.g., JSON as payload to better understand it.
Hybrid Architecture: Combining MQTT’s user-friendliness and widespread adoption in Operational Technology (OT) with Kafka’s advanced processing capabilities.
Topics and payloads cannot be fully interchanged between them due to limitations in MQTT and Kafka, so some trade-offs need to be made.
Processed data based on IT standards: Data is sent to IT systems after processing, and needs to adhere to standards: the data inside the UNS needs to be easily processable for either contextualization, or storage in a Historian or Data Lake.
IT best practice: we use SQL and Postgres (and therefore TimescaleDB) for easy compatibility
Straightforward queries: we aim to make easy SQL queries, so that everyone can build dashboards
Performance: because of time-series data and the typical workload, the database layout is not fully optimized for usability; we made trade-offs that allow it to store millions of data points per second
4.1 - Unified Namespace
Describes all available _schema and their structure
Topic structure
Versioning Prefix
The umh/v1 at the beginning is obligatory. It ensures that the structure can evolve over time without causing confusion or compatibility issues.
Topic Names & Rules
All parts of this structure, except for enterprise and _schema, are optional.
They can consist of any letters (a-z, A-Z), numbers (0-9) and the symbols - and _.
Be careful to avoid ., +, # or / as these are special symbols in Kafka or MQTT.
Ensure that your topic always begins with umh/v1, otherwise our system will ignore your messages.
Be aware that our topics are case-sensitive; therefore umh.v1.ACMEIncorperated is not the same as umh.v1.acmeincorperated.
Throughout this documentation we will use the MQTT syntax for topics (umh/v1), the corresponding Kafka topic names are the same but / replaced with .
Topic validator
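As a minimal sketch, the rules above can be checked with a regular expression. This validator covers only the constraints listed here (mandatory umh/v1 prefix, mandatory enterprise, a schema level starting with an underscore, and the allowed character set), not the full topic grammar; the helper name is our own:

```python
import re

# One topic level: letters, numbers, '-' and '_' (no '.', '+', '#' or '/').
LEVEL = r"[A-Za-z0-9_-]+"

# umh/v1/<enterprise>/.../_<schema>/...: enterprise and a schema level
# starting with '_' are mandatory; levels in between are optional.
TOPIC_RE = re.compile(rf"^umh/v1/{LEVEL}(?:/{LEVEL})*/_{LEVEL}(?:/{LEVEL})*$")

def is_valid_mqtt_topic(topic: str) -> bool:
    return TOPIC_RE.fullmatch(topic) is not None

print(is_valid_mqtt_topic("umh/v1/acme/cologne/cnc-cutter/_historian"))  # True
print(is_valid_mqtt_topic("umh/v2/acme/_historian"))                     # False
```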
OriginID
This part identifies where the data is coming from.
Good options include the sender's MAC address, hostname, or container ID.
Examples for originID: 00-80-41-ae-fd-7e, E588974, e5f484a1791d
Messages tagged with _analytics will be processed by our analytics pipeline.
They are used for the automatic calculation of KPIs and other statistics.
_local
This schema may contain any data that you do not want to bridge to other nodes (it will, however, be MQTT-Kafka bridged on its own node).
For example, this could be data you want to pre-process on your local node and then put into another _schema.
This data does not have to be JSON.
Other
Any other schema starting with an underscore (for example: _images) will be forwarded by both the MQTT-Kafka and Kafka-Kafka bridges, but never processed or stored.
This data does not have to be JSON.
Converting other data models
Most data models already follow a location-based naming structure.
KKS Identification System for Power Stations
KKS (Kraftwerk-Kennzeichensystem) is a standardized system for identifying and classifying equipment and systems in power plants, particularly in German-speaking countries.
In a flow diagram, the designation is: 1 2LAC03 CT002 QT12
Level 0 Classification:
Block 1 of a power plant site is designated as 1 in this level.
Level 1 Classification:
The designation for the 3rd feedwater pump in the 2nd steam-water circuit is 2LAC03. This means:
Main group 2L: 2nd steam, water, gas circuit
Subgroup (2L)A: Feedwater system
Subgroup (2LA)C: Feedwater pump system
Counter (2LAC)03: third feedwater pump system
Level 2 Classification:
For the 2nd temperature measurement, the designation CT002 is used. This means:
Main group C: Direct measurement
Subgroup (C)T: Temperature measurement
Counter (CT)002: second temperature measurement
Level 3 Classification:
For the 12th immersion sleeve as a sensor protection, the designation QT12 is used. This means:
Main group Q: Control technology equipment
Subgroup (Q)T: Protective tubes and immersion sleeves as sensor protection
Counter (QT)12: twelfth protective tube or immersion sleeve
The above example refers to the 12th immersion sleeve at the 2nd temperature measurement of the 3rd feed pump in block 1 of a power plant site.
Translating this in our data model could result in:
umh/v1/nuclearCo/1/2LAC03/CT002/QT12/_schema
Where:
nuclearCo: Represents the enterprise or the name of the nuclear company.
1: Maps to the site, corresponding to Block 1 of the power plant as per the KKS number.
2LAC03: Fits into the area, representing the 3rd feedwater pump in the 2nd steam-water circuit.
CT002: Aligns with productionLine, indicating the 2nd temperature measurement in this context.
QT12: Serves as the workCell or originID, denoting the 12th immersion sleeve.
_schema: Placeholder for the specific data schema being applied.
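The mapping above can be sketched in code; _historian is used here only as a stand-in for the _schema placeholder:

```python
# Mapping the KKS example onto the UMH topic hierarchy.
# The schema name (_historian) is an assumption for illustration.
kks = {
    "enterprise": "nuclearCo",
    "site": "1",            # Block 1 of the power plant site
    "area": "2LAC03",       # 3rd feedwater pump, 2nd steam-water circuit
    "productionLine": "CT002",  # 2nd temperature measurement
    "workCell": "QT12",     # 12th immersion sleeve
}
topic = "umh/v1/" + "/".join(kks.values()) + "/_historian"
print(topic)  # umh/v1/nuclearCo/1/2LAC03/CT002/QT12/_historian
```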
4.1.1 - _analytics
Messages for our analytics feature
Topic structure
Work Order
Create
Use this topic to create a new work order.
This replaces the addOrder message from our v0 data model.
Fields
external_work_order_id (string): The work order ID from your MES or ERP system.
product (object): The product being produced.
external_product_id (string): The product ID from your MES or ERP system.
cycle_time_ms (number) (optional): The cycle time for the product in milliseconds. Only include this if the product has not been previously created.
quantity (number): The quantity of the product to be produced.
status (number) (optional): The status of the work order. Defaults to 0 (planned).
0 - Planned
1 - In progress
2 - Completed
start_time_unix_ms (number) (optional): The start time of the work order. Will be set by the corresponding start message if not provided.
end_time_unix_ms (number) (optional): The end time of the work order. Will be set by the corresponding stop message if not provided.
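A create message built from the fields above might look like the following; the topic routing is omitted and all values are illustrative, so treat this as a sketch rather than the authoritative payload:

```python
import json

# Hypothetical work-order "create" payload (illustrative values).
payload = {
    "external_work_order_id": "#2475",
    "product": {
        "external_product_id": "desk-leg-0112",
        "cycle_time_ms": 10.0,  # optional, only for new products
    },
    "quantity": 100,
    "status": 0,  # 0 = Planned
    "start_time_unix_ms": 1640995200000,  # optional
}
print(json.dumps(payload, indent=2))
```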
start_time_unix_ms (number): The start time of the state.
end_time_unix_ms (number): The end time of the state.
4.1.2 - _historian
Messages for our historian feature
Topic structure
Message structure
Our _historian messages are JSON containing a unix timestamp in milliseconds (timestamp_ms) and one or more key-value pairs.
Each key-value pair will be inserted at the given timestamp into the database.
If you use a boolean value, it will be interpreted as a number.
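A minimal sketch of such a message, with illustrative tag names and values:

```python
import json, time

# A minimal _historian message: a millisecond unix timestamp plus one or
# more key-value pairs, each inserted at that timestamp.
message = {
    "timestamp_ms": int(time.time() * 1000),
    "temperature": 23.5,
    "pressure": 1013,
    "running": True,  # booleans are interpreted as numbers
}
print(json.dumps(message))
```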
Tag grouping
Sometimes it makes sense to group data further.
In the following example we have a CNC cutter emitting data about its head position.
If we want to group this for easier access in Grafana, we can use two types of grouping.
Using Tags / Tag Groups in the Topic:
This will result in 3 new database entries, grouped by head & pos.
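The two grouping styles can be sketched as follows; topics and values are illustrative:

```python
# Grouping in the topic: publish x, y, z to .../_historian/head/pos,
# producing the three tags head.pos.x, head.pos.y, head.pos.z.
by_topic_topic = "umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head/pos"
by_topic = {"timestamp_ms": 1680000000000, "x": 12.5, "y": 7.3, "z": 0.2}

# Equivalent grouping inside the payload, published to .../_historian:
by_payload_topic = "umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian"
by_payload = {
    "timestamp_ms": 1680000000000,
    "head": {"pos": {"x": 12.5, "y": 7.3, "z": 0.2}},
}
```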
This function is an optimized version of get_asset_id that is defined as immutable.
It is the fastest of the three functions and should be used for all queries, except when you plan to manually modify values inside the asset table.
This function returns the id of the given asset.
It takes a variable number of arguments, where only the first (enterprise) is mandatory.
This function is only kept for compatibility reasons and should not be used in new queries, see get_asset_id_stable or get_asset_id_immutable instead.
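A hypothetical query using get_asset_id_immutable, assuming the variadic signature described above (enterprise mandatory, further levels optional); the tag column names used here are illustrative assumptions:

```python
# Sketch: look up recent values for one asset via get_asset_id_immutable.
# Column names (name, value, timestamp) and asset levels are assumptions.
query = """
SELECT name, value, timestamp
FROM tag
WHERE asset_id = get_asset_id_immutable('cuttingincorperated', 'cologne')
ORDER BY timestamp DESC
LIMIT 10;
"""
print(query)
```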
There is no immutable version of get_asset_ids, as the returned values will probably change over time.
[Legacy] get_asset_ids
This function returns the ids of the given assets.
It takes a variable number of arguments, where only the first (enterprise) is mandatory.
It is only kept for compatibility reasons and should not be used in new queries, see get_asset_ids_stable instead.
This table holds all assets.
An asset for us is the unique combination of enterprise, site, area, line, workcell & origin_id.
All keys except for id and enterprise are optional.
In our example we have just started our CNC cutter, so its unique asset will be inserted into the database.
The table already contains some data we inserted before, so the new asset will be inserted at id: 8
| id | enterprise | site | area | line | workcell | origin_id |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | acme-corporation | | | | | |
| 2 | acme-corporation | new-york | | | | |
| 3 | acme-corporation | london | north | assembly | | |
| 4 | stark-industries | berlin | south | fabrication | cell-a1 | 3002 |
| 5 | stark-industries | tokyo | east | testing | cell-b3 | 3005 |
| 6 | stark-industries | paris | west | packaging | cell-c2 | 3009 |
| 7 | umh | cologne | office | dev | server1 | sensor0 |
| 8 | cuttingincoperated | cologne | cnc-cutter | | | |
work_order
This table holds all work orders.
A work order is a unique combination of external_work_order_id and asset_id.
| work_order_id | external_work_order_id | asset_id | product_type_id | quantity | status | start_time | end_time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | #2475 | 8 | 1 | 100 | 0 | 2022-01-01T08:00:00Z | 2022-01-01T18:00:00Z |
product_type
This table holds all product types.
A product type is a unique combination of external_product_type_id and asset_id.
| product_type_id | external_product_type_id | cycle_time_ms | asset_id |
| --- | --- | --- | --- |
| 1 | desk-leg-0112 | 10.0 | 8 |
product
This table holds all products.
| product_type_id | product_batch_id | asset_id | start_time | end_time | quantity | bad_quantity |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | batch-n113 | 8 | 2022-01-01T08:00:00Z | 2022-01-01T08:10:00Z | 100 | 7 |
shift
This table holds all shifts.
A shift is a unique combination of asset_id and start_time.
| shiftId | asset_id | start_time | end_time |
| --- | --- | --- | --- |
| 1 | 8 | 2022-01-01T08:00:00Z | 2022-01-01T19:00:00Z |
state
This table holds all states.
A state is a unique combination of asset_id and start_time.
| asset_id | start_time | state |
| --- | --- | --- |
| 8 | 2022-01-01T08:00:00Z | 20000 |
| 8 | 2022-01-01T08:10:00Z | 10000 |
4.2.2 - Historian
How _historian data is stored and can be queried
Our database for the umh.v1 _historian data model currently consists of three tables.
These are used for the _historian schema.
We chose this layout to enable easy lookups based on the asset features, while maintaining separation between data and names.
The split into tag and tag_string prevents accidental lookups of the wrong datatype, which might break queries such as aggregations, averages, and so on.
asset
This table holds all assets.
An asset for us is the unique combination of enterprise, site, area, line, workcell & origin_id.
All keys except for id and enterprise are optional.
In our example we have just started our CNC cutter, so its unique asset will be inserted into the database.
The table already contains some data we inserted before, so the new asset will be inserted at id: 8
| id | enterprise | site | area | line | workcell | origin_id |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | acme-corporation | | | | | |
| 2 | acme-corporation | new-york | | | | |
| 3 | acme-corporation | london | north | assembly | | |
| 4 | stark-industries | berlin | south | fabrication | cell-a1 | 3002 |
| 5 | stark-industries | tokyo | east | testing | cell-b3 | 3005 |
| 6 | stark-industries | paris | west | packaging | cell-c2 | 3009 |
| 7 | umh | cologne | office | dev | server1 | sensor0 |
| 8 | cuttingincoperated | cologne | cnc-cutter | | | |
tag
This table is a TimescaleDB hypertable.
Hypertables are optimized to hold large amounts of data that are roughly sorted by time.
In our example we send data to umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head.
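A payload of this shape would fit the example; the values are illustrative:

```python
import json

# Illustrative head-position message for the topic above: one timestamp,
# three values grouped under "pos".
msg = json.dumps({
    "timestamp_ms": 1680000000000,
    "pos": {"x": 12.5, "y": 7.3, "z": 0.2},
})
print(msg)
```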
The origin is a placeholder for a later feature, and currently defaults to unknown.
tag_string
This table is the same as tag, but for string data.
Our CNC cutter also emits the G-Code currently processed.
umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian
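Such a string message could look like this; the key name g_code and the G-code line are assumptions for illustration:

```python
import json

# Illustrative string message for the tag_string table.
msg = json.dumps({
    "timestamp_ms": 1680000000000,
    "g_code": "G01 X10 Y10 Z0 F100",
})
print(msg)
```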
Unknown (30000-59999): These states represent that the asset is in an unspecified state.
Glossary
OEE: Overall Equipment Effectiveness
KPI: Key Performance Indicator
Conclusion
This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.
4.3.1 - Active (10000-29999)
These states represent that the asset is actively producing
10000: ProducingAtFullSpeedState
This asset is running at full speed.
Examples for ProducingAtFullSpeedState
WS_Cur_State: Operating
PackML/Tobacco: Execute
20000: ProducingAtLowerThanFullSpeedState
Asset is producing, but not at full speed.
Examples for ProducingAtLowerThanFullSpeedState
WS_Cur_Prog: StartUp
WS_Cur_Prog: RunDown
WS_Cur_State: Stopping
PackML/Tobacco: Stopping
WS_Cur_State: Aborting
PackML/Tobacco: Aborting
WS_Cur_State: Holding
WS_Cur_State: Unholding
PackML/Tobacco: Unholding
WS_Cur_State: Suspending
PackML/Tobacco: Suspending
WS_Cur_State: Unsuspending
PackML/Tobacco: Unsuspending
PackML/Tobacco: Completing
WS_Cur_Prog: Production
EUROMAP: MANUAL_RUN
EUROMAP: CONTROLLED_RUN
Currently not included:
WS_Prog_Step: all
4.3.2 - Unknown (30000-59999)
These states represent that the asset is in an unspecified state
30000: UnknownState
Data for that particular asset is not available (e.g., the connection to the PLC is disrupted)
Examples for UnknownState
WS_Cur_Prog: Undefined
EUROMAP: Offline
40000: UnspecifiedStopState
The asset is not producing, but the reason is unknown at the time.
Examples for UnspecifiedStopState
WS_Cur_State: Clearing
PackML/Tobacco: Clearing
WS_Cur_State: Emergency Stop
WS_Cur_State: Resetting
PackML/Tobacco: Clearing
WS_Cur_State: Held
EUROMAP: Idle
Tobacco: Other
WS_Cur_State: Stopped
PackML/Tobacco: Stopped
WS_Cur_State: Starting
PackML/Tobacco: Starting
WS_Cur_State: Prepared
WS_Cur_State: Idle
PackML/Tobacco: Idle
PackML/Tobacco: Complete
EUROMAP: READY_TO_RUN
50000: MicrostopState
The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.
4.3.3 - Material (60000-99999)
These states represent that the asset has issues regarding materials.
60000: InletJamState
This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack-in-the-inlet condition refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.
Examples for InletJamState
WS_Cur_State: Lack
70000: OutletJamState
The machine does not perform its intended function as a result of a jam in the good-flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam-in-the-discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or away from it. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.
Examples for OutletJamState
WS_Cur_State: Tailback
80000: CongestionBypassState
The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.
Examples for the CongestionBypassState
WS_Cur_State: Lack/Tailback Branch Line
90000: MaterialIssueOtherState
The asset has a material issue, but it is not further specified.
Examples for MaterialIssueOtherState
WS_Mat_Ready (Information of which material is lacking)
PackML/Tobacco: Suspended
4.3.4 - Process (100000-139999)
These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
100000: ChangeoverState
The asset is in a changeover process between products.
Examples for ChangeoverState
WS_Cur_Prog: Program-Changeover
Tobacco: CHANGE OVER
110000: CleaningState
The asset is currently in a cleaning process.
Examples for CleaningState
WS_Cur_Prog: Program-Cleaning
Tobacco: CLEAN
120000: EmptyingState
The asset is currently being emptied, e.g. to prevent mold in food products over long breaks such as the weekend.
Examples for EmptyingState
Tobacco: EMPTY OUT
130000: SettingUpState
This machine is currently preparing itself for production, e.g. heating up.
Examples for SettingUpState
EUROMAP: PREPARING
4.3.5 - Operator (140000-159999)
These states represent that the asset is stopped because of operator related issues.
140000: OperatorNotAtMachineState
The operator is not at the machine.
150000: OperatorBreakState
The operator is taking a break.
This is different from a planned shift as it could contribute to performance losses.
Examples for OperatorBreakState
WS_Cur_Prog: Program-Break
4.3.6 - Planning (160000-179999)
These states represent that the asset is stopped as it is planned to stop (planned idle time).
160000: NoShiftState
There is no shift planned at that asset.
170000: NoOrderState
There is no order planned at that asset.
4.3.7 - Technical (180000-229999)
These states represent that the asset has a technical issue.
180000: EquipmentFailureState
The asset itself is defective, e.g. a broken engine.
Examples for EquipmentFailureState
WS_Cur_State: Equipment Failure
190000: ExternalFailureState
There is an external failure, e.g. missing compressed air.
Examples for ExternalFailureState
WS_Cur_State: External Failure
200000: ExternalInterferenceState
There is an external interference, e.g. the crane to move the material is currently unavailable.
210000: PreventiveMaintenanceStop
A planned maintenance action.
Examples for PreventiveMaintenanceStop
WS_Cur_Prog: Program-Maintenance
PackML: Maintenance
EUROMAP: MAINTENANCE
Tobacco: MAINTENANCE
220000: TechnicalOtherStop
The asset has a technical issue, but it is not specified further.
Examples for TechnicalOtherStop
WS_Not_Of_Fail_Code
PackML: Held
EUROMAP: MALFUNCTION
Tobacco: MANUAL
Tobacco: SET UP
Tobacco: REMOTE SERVICE
5 - Data Model (v0)
This page describes the data model of the UMH stack - from the message payloads up to database tables.
Raw Data
If you have events that you just want to send to the message broker / Unified Namespace without the need for it to be stored, simply send it to the raw topic.
This data will not be processed by the UMH stack, but you can use it to build your own data processing pipeline.
ProcessValue Data
If you have data that does not fit in the other topics (such as your PLC tags or sensor data), you can use the processValue topic. It will be saved in the database in the processValueTable or processValueStringTable and can be queried using factoryinsight or the umh-datasource Grafana plugin.
Production Data
In a production environment, you should first declare products using addProduct.
This allows you to create an order using addOrder. Once you have created an order,
send a state message to tell the database that the machine is working (or not working) on the order.
When the machine is ordered to produce a product, send a startOrder message.
When the machine has finished producing the product, send an endOrder message.
Send count messages if the machine has produced a product, but it does not make sense to give each product its own ID. This is especially useful for bottling or any other use case with a large number of products, where not every product is traced.
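The count payload itself is not shown in this excerpt; as a sketch, such a message could look like the following, assuming it carries a timestamp and the number of produced items (field names are assumptions, not confirmed by this excerpt):

```json
{
  "timestamp_ms": 1589788888888,
  "count": 1
}
```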
Recommendation: Start with addShift and state and continue from there on
Modifying Data
If you have accidentally sent the wrong state or if you want to modify a value, you can use the modifyState message.
Unique Product Tracking
You can use uniqueProduct to tell the database that a new instance of a product has been created.
If the produced product is scrapped, you can use scrapUniqueProduct to change its state to scrapped.
5.1 - Messages
For each message topic you will find a short description of what the message is used for, which structure it has, and what structure the payload is expected to have.
Introduction
The United Manufacturing Hub provides a specific structure for messages/topics, each with its own unique purpose.
By adhering to this structure, the UMH will automatically calculate KPIs for you, while also making it easier to maintain
consistency in your topic structure.
5.1.1 - activity
activity messages indicate whether the asset is currently active (running) or not.
This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it will be required to be converted into a state. In the future, there will be a microservice, which converts these automatically.
A message is sent here each time a new order is added.
Content
key
data type
description
product_id
string
current product name
order_id
string
current order name
target_units
int64
amount of units to be produced
The product needs to be added before adding the order. Otherwise, this message will be discarded
One order is always specific to one asset and can, by definition, not be used across machines. In that case, you would need to create a separate order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100 m batch gets split up into multiple pieces).
JSON
Examples
One order was started for 100 units of product “test”:
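The example payload appears to be missing in this excerpt; based on the keys listed in the content table above, it would look roughly like this (the order ID is a placeholder):

```json
{
  "product_id": "test",
  "order_id": "test_order",
  "target_units": 100
}
```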
This message can be emitted to add a child product to a parent product.
It can be sent multiple times, if a parent product is split up into multiple children or multiple parents are combined into one child. One example of this is when multiple parts are assembled into a single product.
detectedAnomaly messages are sent when an asset has stopped and the reason is identified.
This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it will be required to be converted into a state. In the future, there will be a microservice, which converts these automatically.
If you have a lot of processValues, we’d recommend not using /processValue as the topic, but appending the tag name as well, e.g., /processValue/energyConsumption. This will structure it better for usage in MQTT Explorer or for processing only certain processValues.
For automatic data storage in kafka-to-postgresql both will work fine as long as the payload is correct.
Please be aware that the values may only be int or float; other characters are not valid, so make sure no quotation marks or anything else
sneak in there. Also be cautious of using the JavaScript toFixed() function, as it converts a float into a string.
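Since a single stray string can prevent a processValue from being stored, it can help to validate payloads before publishing. A minimal sketch in Python (the function and field names are illustrative, not part of the UMH API):

```python
import json

def make_process_value(timestamp_ms: int, name: str, value) -> str:
    """Build a processValue payload, rejecting non-numeric values.

    Booleans are rejected as well: convert them to 0/1 integers first,
    since the data model only allows int64 or float64 values.
    """
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise ValueError(f"{name!r} must be int or float, got {type(value).__name__}")
    return json.dumps({"timestamp_ms": timestamp_ms, name: value})

# A float stays a float, avoiding toFixed()-style string conversion:
payload = make_process_value(1589788888888, "energyConsumption", 123.4)
print(payload)  # {"timestamp_ms": 1589788888888, "energyConsumption": 123.4}
```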
Usage
A message is sent each time a process value has been prepared. The key has a unique name.
Content
key
data type
description
timestamp_ms
int64
unix timestamp of message creation
<valuename>
int64 or float64
Represents a process value, e.g. temperature
Pre 0.10.0:
As <valuename> is either of type `int64` or `float64`, you cannot use booleans. Convert to integers as needed, e.g., true = 1, false = 0
A message is sent each time a process value has been prepared. The key has a unique name. This message is used when the datatype of the process value is a string instead of a number.
Content
key
data type
description
timestamp_ms
int64
unix timestamp of message creation
<valuename>
string
Represents a process value, e.g. temperature
JSON
Example
At the shown timestamp the custom process value “customer” had a readout of “miller”.
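The payload for that example is not shown above; from the content table, it would be (the timestamp is illustrative):

```json
{
  "timestamp_ms": 1589788888888,
  "customer": "miller"
}
```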
recommendation messages are action recommendations, which require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.
Content
key
data type
description
uid
string
UniqueID of the product
timestamp_ms
int64
unix timestamp of message creation
customer
string
the customer ID in the data structure
location
string
the location in the data structure
asset
string
the asset ID in the data structure
recommendationType
int32
The type of the recommendation
enabled
bool
whether the recommendation is enabled
recommendationValues
map
Map of values based on which this recommendation is created
diagnoseTextDE
string
Diagnosis of the recommendation in German
diagnoseTextEN
string
Diagnosis of the recommendation in English
recommendationTextDE
string
Recommendation in German
recommendationTextEN
string
Recommendation in English
JSON
Example
A recommendation for the demonstrator at the shown location has not been running for a while, so a recommendation is sent to either start the machine or specify a reason why it is not running.
{
    "UID": "43298756",
    "timestamp_ms": 15888796894,
    "customer": "united-manufacturing-hub",
    "location": "dccaachen",
    "asset": "DCCAachen-Demonstrator",
    "recommendationType": 1,
    "enabled": true,
    "recommendationValues": { "Threshold": 30, "StoppedForTime": 612685 },
    "diagnoseTextDE": "Maschine DCCAachen-Demonstrator steht seit 612685 Sekunden still (Status: 8, Schwellwert: 30)",
    "diagnoseTextEN": "Machine DCCAachen-Demonstrator is not running since 612685 seconds (status: 8, threshold: 30)",
    "recommendationTextDE": "Maschine DCCAachen-Demonstrator einschalten oder Stoppgrund auswählen.",
    "recommendationTextEN": "Start machine DCCAachen-Demonstrator or specify stop reason."
}
Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. Starting with the count that is directly before timestamp_ms, the existing counts are iterated backwards in time and set to scrap one by one, until a total of scrap products have been scrapped.
Content
timestamp_ms is the unix timestamp you want to go back from
scrap is the number of items to be considered as scrap.
You can specify a maximum of 24h to be scrapped to avoid accidents
(NOT IMPLEMENTED YET) If the count does not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, it would ignore these 2. See also #125.
(NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap.
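The backwards iteration described above can be sketched as follows. This is a simplified model, not the actual kafka-to-postgresql implementation, and it mirrors the current behaviour of ignoring a count entry that is larger than the remaining scrap:

```python
def apply_scrap(counts, scrap_timestamp_ms, scrap):
    """Mark count entries as scrap, walking backwards from scrap_timestamp_ms.

    counts: list of (timestamp_ms, count) tuples, sorted ascending.
    Returns a list of (timestamp_ms, count, scrapped) tuples.
    """
    remaining = scrap
    result = []
    for ts, count in reversed(counts):
        if ts > scrap_timestamp_ms or remaining <= 0:
            result.append((ts, count, 0))
        elif count <= remaining:
            # the whole count entry is set to scrap
            result.append((ts, count, count))
            remaining -= count
        else:
            # partial scrapping is not implemented yet: the rest is ignored
            result.append((ts, count, 0))
            remaining = 0
    return list(reversed(result))

print(apply_scrap([(1000, 3), (2000, 4), (3000, 5)], 3000, 7))
```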
JSON
Examples
Ten items were scrapped:
{
    "timestamp_ms": 1589788888888,
    "scrap": 10
}
Schema
{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "scrap"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
                1589788888888
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "Number of items to be considered as scrap",
            "examples": [
                10
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms": 1589788888888,
            "scrap": 10
        },
        {
            "timestamp_ms": 1589788888888,
            "scrap": 5
        }
    ]
}
A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here.
Content
key
data type
description
state
uint32
value of the state according to the link above
timestamp_ms
uint64
unix timestamp of message creation
JSON
Example
The asset has a state of 10000, which means it is actively producing.
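The corresponding payload would look like this, following the content table above (the timestamp is illustrative):

```json
{
  "state": 10000,
  "timestamp_ms": 1589788888888
}
```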
A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.
There are two cases of when to send a message under the uniqueProduct topic:
The exact product doesn’t already have a UID (this is the case if it has not been produced at an asset incorporated in the digital shadow). Specify a placeholder asset = “storage” in the MQTT message for the uniqueProduct topic.
The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the “child” of the process. Products it was made out of are called the “parents”.
Content
key
data type
description
begin_timestamp_ms
int64
unix timestamp of start time
end_timestamp_ms
int64
unix timestamp of completion time
product_id
string
product ID of the currently produced product
isScrap
bool
Optional information on whether the current product is of poor quality and will be sorted out. Considered false if not specified.
uniqueProductAlternativeID
string
alternative ID of the product
JSON
Example
The processing of product “Beilinger 30x15” with the AID 216381 started and ended at the designated timestamps. It is of low quality and due to be scrapped.
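A payload matching that description could look like this, based on the content table above (the timestamps are illustrative):

```json
{
  "begin_timestamp_ms": 1589788888888,
  "end_timestamp_ms": 1589789888888,
  "product_id": "Beilinger 30x15",
  "isScrap": true,
  "uniqueProductAlternativeID": "216381"
}
```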
The database stores the messages in different tables.
Introduction
We are using the database TimescaleDB, which is based on PostgreSQL and supports standard relational SQL workloads,
while also supporting time-series data.
This allows for usage of regular SQL queries, while also allowing to process and store time-series data.
PostgreSQL has proven itself reliable over the last 25 years, so we are happy to use it.
If you want to learn more about database paradigms, please refer to the knowledge article about that topic.
It also includes a concise video summarizing what you need to know about different paradigms.
Our database model is designed to represent a physical manufacturing process. It keeps track of the following data:
The state of the machine
The products that are produced
The orders for the products
The workers’ shifts
Arbitrary process values (sensor data)
The producible products
Recommendations for the production
Please note that our database does not use a retention policy. This means that your database can grow quite fast if you save a lot of process values. Take a look at our guide on enabling data compression and retention in TimescaleDB to customize the database to your needs.
A good method to check your database size is to use the following commands inside the postgres shell:
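The commands themselves appear to be missing from this excerpt; as a sketch, the standard PostgreSQL and TimescaleDB size functions are a reasonable starting point (the hypertable name is an example taken from the schema described below):

```sql
-- overall size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));

-- size of a single hypertable (TimescaleDB 2.x)
SELECT pg_size_pretty(hypertable_size('processvaluetable'));
```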
CREATE TABLE IF NOT EXISTS countTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    count INTEGER CHECK (count > 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('countTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON countTable (asset_id, timestamp DESC);
This table stores process values, for example toner level of a printer, flow rate of a pump, etc.
This table has a closely related table for storing numeric values, processValueTable.
CREATE TABLE IF NOT EXISTS processValueStringTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    valueName TEXT NOT NULL,
    value TEXT NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueStringTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (asset_id, timestamp DESC);
-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (valuename);
5.2.6 - processValueTable
processValueTable contains process values.
Usage
This table stores process values, for example toner level of a printer, flow rate of a pump, etc.
This table has a closely related table for storing string values, processValueStringTable.
CREATE TABLE IF NOT EXISTS processValueTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    valueName TEXT NOT NULL,
    value DOUBLE PRECISION NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON processValueTable (asset_id, timestamp DESC);
-- creating an index to increase performance
CREATE INDEX ON processValueTable (valuename);
CREATE TABLE IF NOT EXISTS stateTable
(
    timestamp TIMESTAMPTZ NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    state INTEGER CHECK (state >= 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('stateTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON stateTable (asset_id, timestamp DESC);
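As an illustration of how this table is typically queried (a hypothetical example, not part of the official schema), the most recent state per asset can be fetched with:

```sql
SELECT DISTINCT ON (asset_id) asset_id, timestamp, state
FROM stateTable
ORDER BY asset_id, timestamp DESC;
```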
5.2.11 - uniqueProductTable
uniqueProductTable contains unique products and their IDs.
CREATE TABLE IF NOT EXISTS uniqueProductTable
(
    uid TEXT NOT NULL,
    asset_id SERIAL REFERENCES assetTable (id),
    begin_timestamp_ms TIMESTAMPTZ NOT NULL,
    end_timestamp_ms TIMESTAMPTZ NOT NULL,
    product_id TEXT NOT NULL,
    is_scrap BOOLEAN NOT NULL,
    quality_class TEXT NOT NULL,
    station_id TEXT NOT NULL,
    UNIQUE(uid, asset_id, station_id),
    CHECK (begin_timestamp_ms < end_timestamp_ms)
);
-- creating an index to increase performance
CREATE INDEX ON uniqueProductTable (asset_id, uid, station_id);
5.3 - States
States are the core of the database model. They represent the state of the machine at a given point in time.
States Documentation Index
Introduction
This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.
State Categories
Active (10000-29999): These states represent that the asset is actively producing.
Material (60000-99999): These states represent that the asset has issues regarding materials.
Operator (140000-159999): These states represent that the asset is stopped because of operator related issues.
Planning (160000-179999): These states represent that the asset is stopped as it is planned to stop (planned idle time).
Process (100000-139999): These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
Technical (180000-229999): These states represent that the asset has a technical issue.
Unknown (30000-59999): These states represent that the asset is in an unspecified state.
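Since each category occupies a contiguous range of state codes, mapping a code to its category is a simple range lookup. A minimal sketch (the Technical range is taken from the corresponding subsection below):

```python
# State-code ranges and their categories, as documented in this section.
CATEGORIES = [
    (10000, 29999, "Active"),
    (30000, 59999, "Unknown"),
    (60000, 99999, "Material"),
    (100000, 139999, "Process"),
    (140000, 159999, "Operator"),
    (160000, 179999, "Planning"),
    (180000, 229999, "Technical"),
]

def category_of(state: int) -> str:
    """Return the category name for a given state code."""
    for low, high, name in CATEGORIES:
        if low <= state <= high:
            return name
    raise ValueError(f"unknown state code: {state}")

print(category_of(50000))  # MicrostopState falls into the Unknown range
```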
Glossary
OEE: Overall Equipment Effectiveness
KPI: Key Performance Indicator
Conclusion
This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.
5.3.1 - Active (10000-29999)
These states represent that the asset is actively producing
10000: ProducingAtFullSpeedState
This asset is running at full speed.
Examples for ProducingAtFullSpeedState
WS_Cur_State: Operating
PackML/Tobacco: Execute
20000: ProducingAtLowerThanFullSpeedState
Asset is producing, but not at full speed.
Examples for ProducingAtLowerThanFullSpeedState
WS_Cur_Prog: StartUp
WS_Cur_Prog: RunDown
WS_Cur_State: Stopping
PackML/Tobacco: Stopping
WS_Cur_State: Aborting
PackML/Tobacco: Aborting
WS_Cur_State: Holding
Ws_Cur_State: Unholding
PackML/Tobacco: Unholding
WS_Cur_State: Suspending
PackML/Tobacco: Suspending
WS_Cur_State: Unsuspending
PackML/Tobacco: Unsuspending
PackML/Tobacco: Completing
WS_Cur_Prog: Production
EUROMAP: MANUAL_RUN
EUROMAP: CONTROLLED_RUN
Currently not included:
WS_Prog_Step: all
5.3.2 - Unknown (30000-59999)
These states represent that the asset is in an unspecified state
30000: UnknownState
Data for that particular asset is not available (e.g. connection to the PLC is disrupted)
Examples for UnknownState
WS_Cur_Prog: Undefined
EUROMAP: Offline
40000: UnspecifiedStopState
The asset is not producing, but the reason is unknown at the time.
Examples for UnspecifiedStopState
WS_Cur_State: Clearing
PackML/Tobacco: Clearing
WS_Cur_State: Emergency Stop
WS_Cur_State: Resetting
PackML/Tobacco: Clearing
WS_Cur_State: Held
EUROMAP: Idle
Tobacco: Other
WS_Cur_State: Stopped
PackML/Tobacco: Stopped
WS_Cur_State: Starting
PackML/Tobacco: Starting
WS_Cur_State: Prepared
WS_Cur_State: Idle
PackML/Tobacco: Idle
PackML/Tobacco: Complete
EUROMAP: READY_TO_RUN
50000: MicrostopState
The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.
5.3.3 - Material (60000-99999)
These states represent that the asset has issues regarding materials.
60000: InletJamState
This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the condition of lack in the inlet refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.
Examples for InletJamState
WS_Cur_State: Lack
70000: OutletJamState
The machine does not perform its intended function as a result of a jam in the good flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.
Examples for OutletJamState
WS_Cur_State: Tailback
80000: CongestionBypassState
The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.
Examples for the CongestionBypassState
WS_Cur_State: Lack/Tailback Branch Line
90000: MaterialIssueOtherState
The asset has a material issue, but it is not further specified.
Examples for MaterialIssueOtherState
WS_Mat_Ready (Information of which material is lacking)
PackML/Tobacco: Suspended
5.3.4 - Process (100000-139999)
These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
100000: ChangeoverState
The asset is in a changeover process between products.
Examples for ChangeoverState
WS_Cur_Prog: Program-Changeover
Tobacco: CHANGE OVER
110000: CleaningState
The asset is currently in a cleaning process.
Examples for CleaningState
WS_Cur_Prog: Program-Cleaning
Tobacco: CLEAN
120000: EmptyingState
The asset is currently being emptied, e.g. to prevent mold in food products over long breaks such as the weekend.
Examples for EmptyingState
Tobacco: EMPTY OUT
130000: SettingUpState
This machine is currently preparing itself for production, e.g. heating up.
Examples for SettingUpState
EUROMAP: PREPARING
5.3.5 - Operator (140000-159999)
These states represent that the asset is stopped because of operator related issues.
140000: OperatorNotAtMachineState
The operator is not at the machine.
150000: OperatorBreakState
The operator is taking a break.
This is different from a planned shift as it could contribute to performance losses.
Examples for OperatorBreakState
WS_Cur_Prog: Program-Break
5.3.6 - Planning (160000-179999)
These states represent that the asset is stopped as it is planned to stop (planned idle time).
160000: NoShiftState
There is no shift planned at that asset.
170000: NoOrderState
There is no order planned at that asset.
5.3.7 - Technical (180000-229999)
These states represent that the asset has a technical issue.
180000: EquipmentFailureState
The asset itself is defective, e.g. a broken engine.
Examples for EquipmentFailureState
WS_Cur_State: Equipment Failure
190000: ExternalFailureState
There is an external failure, e.g. missing compressed air.
Examples for ExternalFailureState
WS_Cur_State: External Failure
200000: ExternalInterferenceState
There is an external interference, e.g. the crane to move the material is currently unavailable.
210000: PreventiveMaintenanceStop
A planned maintenance action.
Examples for PreventiveMaintenanceStop
WS_Cur_Prog: Program-Maintenance
PackML: Maintenance
EUROMAP: MAINTENANCE
Tobacco: MAINTENANCE
220000: TechnicalOtherStop
The asset has a technical issue, but it is not specified further.
Examples for TechnicalOtherStop
WS_Not_Of_Fail_Code
PackML: Held
EUROMAP: MALFUNCTION
Tobacco: MANUAL
Tobacco: SET UP
Tobacco: REMOTE SERVICE
6 - Architecture
A comprehensive overview of the United Manufacturing Hub architecture,
detailing its deployment, management, and data processing capabilities.
The United Manufacturing Hub is a comprehensive Helm Chart for Kubernetes,
integrating a variety of open source software, including notable third-party
applications such as Node-RED and Grafana. Designed for versatility, UMH is
deployable across a wide spectrum of environments, from edge devices to virtual
machines, and even managed Kubernetes services, catering to diverse industrial
needs.
The following diagram depicts the interaction dynamics between UMH’s components
and user types, offering a visual guide to its architecture and operational
mechanisms.
Management Console
The Management Console
of the United Manufacturing Hub is a robust web application designed to configure,
manage, and monitor the various aspects of Data and Device & Container
Infrastructures within UMH. Acting as the central command center, it provides a
comprehensive overview and control over the system’s functionalities, ensuring
efficient operation and maintenance. The console simplifies complex processes,
making it accessible for users to oversee the vast array of services and operations
integral to UMH.
Device & Container Infrastructure
The Device & Container Infrastructure
lays the foundation of the United Manufacturing Hub’s architecture, streamlining
the deployment and setup of essential software and operating systems across devices.
This infrastructure is pivotal in automating the installation process, ensuring
that the essential software components and operating systems are efficiently and
reliably established. It provides the groundwork upon which the Data Infrastructure
is built, embodying a robust and scalable base for the entire architecture.
Data Infrastructure
The Data Infrastructure is the heart of
the United Manufacturing Hub, orchestrating the interconnection of data sources,
storage, monitoring, and analysis solutions. It comprises three key components:
Data Connectivity: Facilitates the integration of diverse data sources into
UMH, enabling uninterrupted data exchange.
Unified Namespace (UNS): Centralizes and standardizes data within UMH into
a cohesive model, by linking each layer of the ISA-95 automation pyramid to the
UNS and assimilating non-traditional data sources.
Historian: Stores data in TimescaleDB, a PostgreSQL-based time-series
database, allowing real-time and historical data analysis through Grafana or
other tools.
The UMH Data Infrastructure leverages Industrial IoT to expand the ISA95 Automation
Pyramid, enabling high-speed data processing using systems like Kafka. It enhances
system availability through Kubernetes and simplifies maintenance with Docker and
Prometheus. Additionally, it facilitates the use of AI, predictive maintenance,
and digital twin technologies.
Expandability
The United Manufacturing Hub is architecturally designed for high expandability,
enabling integration of custom microservices or Docker containers. This adaptability
allows users to establish connections with third-party systems or to implement
specialized data analysis tools. The platform also accommodates any third-party
application available as a Helm Chart, Kubernetes resource, or Docker Compose,
offering vast potential for customization to suit evolving industrial demands.
6.1 - Data Infrastructure
An overview of UMH’s Data Infrastructure, integrating and managing diverse data
sources.
The United Manufacturing Hub’s Data Infrastructure is where all data converges.
It extends the ISA95 Automation Pyramid, the usual model for data flow in factory
settings. This infrastructure links each level of the traditional pyramid to the
Unified Namespace (UNS), incorporating extra data sources that the typical automation
pyramid doesn’t include. The data is then organized, stored, and analyzed to offer
useful information for frontline workers. Afterwards, it can be sent to a data
lake or analytics platform, where business analysts can access it for deeper insights.
It comprises three primary elements:
Data Connectivity:
This component includes an array of tools and services designed
to connect various systems and sensors on the shop floor, facilitating the flow
of data into the Unified Namespace.
Unified Namespace:
Acts as the central hub for all events and messages on the
shop floor, ensuring data consistency and accessibility.
Historian: Responsible
for storing events in a time-series database, it also provides tools for data
visualization, enabling both real-time and historical analytics.
Together, these elements provide a comprehensive framework for collecting,
storing, and analyzing data, enhancing the operational efficiency and
decision-making processes on the shop floor.
6.1.1 - Data Connectivity
Learn about the tools and services in UMH’s Data Connectivity for integrating
shop floor systems.
The Data Connectivity module in the United Manufacturing Hub is designed to enable
seamless integration of various data sources from the manufacturing environment
into the Unified Namespace. Key components include:
Node-RED:
A versatile programming tool that links hardware devices, APIs, and online services.
barcodereader:
Connects to USB barcode readers, pushing data to the message broker.
benthos-umh: A specialized version of benthos featuring an OPC UA plugin for
efficient data extraction.
sensorconnect:
Integrates with IO-Link Masters and their sensors, relaying data to the message broker.
These tools collectively facilitate the extraction and contextualization of data
from diverse sources, adhering to the ISA-95 automation pyramid model, and
enhancing the Management Console’s capability to monitor and manage data flow
within the UMH ecosystem.
6.1.1.1 - Barcodereader
This microservice is still in development and is not considered stable for production use.
Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.
How it works
Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.
What’s next
Read the Barcodereader reference
documentation to learn more about the technical details of the Barcodereader
microservice.
6.1.1.2 - Node Red
Node-RED is a programming tool for wiring together
hardware devices, APIs and online services in new and interesting ways. It
provides a browser-based editor that makes it easy to wire together flows using
the wide range of nodes in the Node-RED library.
How it works
Node-RED is a JavaScript-based tool that can be used to create flows that
interact with the other microservices in the United Manufacturing Hub or
external services.
Read the Node-RED reference
documentation to learn more about the technical details of the Node-RED
microservice.
6.1.1.3 - Sensorconnect
Sensorconnect automatically detects ifm gateways
connected to the network and reads data from the connected IO-Link
sensors.
How it works
Sensorconnect continuously scans the given IP range for gateways, making it
effectively a plug-and-play solution. Once a gateway is found, it automatically
downloads the IODD files for the connected sensors and starts reading the data at
the configured interval. Then it processes the data and sends it to the MQTT or
Kafka broker, to be consumed by other microservices.
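Conceptually, the scan expands the configured IP range into candidate addresses and probes each one for a gateway. A minimal sketch of the range expansion, with the actual probe stubbed out (its protocol is internal to sensorconnect):

```python
import ipaddress

def candidate_hosts(cidr: str) -> list[str]:
    """Expand an IP range in CIDR notation (e.g. "10.0.0.0/29") into the
    host addresses a gateway scan would probe."""
    network = ipaddress.ip_network(cidr, strict=False)
    return [str(host) for host in network.hosts()]

def scan(cidr: str, probe) -> list[str]:
    """Return the addresses for which `probe` reports a gateway.
    `probe` is a stand-in for the real gateway-detection request."""
    return [ip for ip in candidate_hosts(cidr) if probe(ip)]
```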
If you want to learn more about how to use sensors in your assets, check out the
retrofitting section of the UMH Learn
website.
IODD files
The IODD files are used to describe the sensors connected to the gateway. They
contain information about the data type, the unit of measurement, the minimum and
maximum values, etc. The IODD files are downloaded automatically from
IODDFinder once a sensor is found, and are
stored in a Persistent Volume. If downloading from the internet is not possible,
for example in a closed network, you can download the IODD files manually and
store them in the folder specified by the IODD_FILE_PATH environment variable.
If no IODD file is found for a sensor, the data will not be processed, but sent
to the broker as-is.
What’s next
Read the Sensorconnect reference
documentation to learn more about the technical details of the Sensorconnect
microservice.
6.1.2 - Unified Namespace
Discover the Unified Namespace’s role as a central hub for shop floor data in
UMH.
The Unified Namespace (UNS) within the United Manufacturing Hub is a vital module
facilitating the streamlined flow and management of data. It comprises various
microservices:
data-bridge:
Bridges data between MQTT and Kafka and between multiple Kafka instances, ensuring
efficient data transmission.
HiveMQ:
An MQTT broker crucial for receiving data from IoT devices on the shop floor.
Redpanda (Kafka):
Manages large-scale data processing and orchestrates communication between microservices.
Redpanda Console:
Offers a graphical interface for monitoring Kafka topics and messages.
The UNS serves as a pivotal point in the UMH architecture, ensuring data from shop
floor systems and sensors (gathered via the Data Connectivity module) is effectively
processed and relayed to the Historian and external Data Warehouses/Data Lakes
for storage and analysis.
6.1.2.1 - Data Bridge
Data-bridge is a microservice specifically tailored to adhere to the
UNS
data model. It consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
How it works
Data-bridge connects to the source broker, which can be either Kafka or MQTT,
and subscribes to the topics specified in the configuration. It then processes
the messages and publishes them to the destination broker, which can also be
either Kafka or MQTT.
In the case where the destination broker is Kafka, messages from multiple topics
can be merged into a single topic, making use of the message key to identify
the source topic.
For example, subscribing to a topic using a wildcard, such as
umh.v1.acme.anytown..*, and a merge point of 4, will result in
messages from the topics umh.v1.acme.anytown.foo.bar,
umh.v1.acme.anytown.foo.baz, umh.v1.acme.anytown and umh.v1.acme.anytown.frob
being merged into a single topic, umh.v1.acme.anytown, with the message key
being the missing part of the topic name, in this case foo.bar, foo.baz, etc.
Here is a diagram showing the flow of messages:
The value of the message is not changed, only the topic and key are modified.
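The merge-point logic described above can be sketched as a small helper that splits a dot-separated topic at the configured depth (a hypothetical illustration; the real data-bridge is not written in Python):

```python
def merge_topic(topic: str, merge_point: int) -> tuple[str, str]:
    """Split a dot-separated topic at `merge_point` levels: the first part
    becomes the destination topic, the remainder becomes the message key
    (empty when the topic is exactly merge_point levels deep)."""
    parts = topic.split(".")
    return ".".join(parts[:merge_point]), ".".join(parts[merge_point:])
```

With a merge point of 4, `umh.v1.acme.anytown.foo.bar` yields the destination topic `umh.v1.acme.anytown` and the key `foo.bar`, matching the example above.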
Another important feature is that it is possible to configure multiple data
bridges, each with its own source and destination brokers, and each with its
own set of topics to subscribe to and merge point.
The brokers can be local or remote, and, in case of MQTT, they can be secured
using TLS.
What’s next
Read the Data Bridge reference
documentation to learn more about the technical details of the data-bridge microservice.
6.1.2.2 - Kafka Broker
The Kafka broker in the United Manufacturing Hub is Redpanda,
a Kafka-compatible event streaming platform. It’s used to store and process
messages, in order to stream real-time data between the microservices.
How it works
Redpanda is a distributed system that is made up of a cluster of brokers,
designed for maximum performance and reliability. It does not depend on external
systems like ZooKeeper, as it’s shipped as a single binary.
Read the Kafka Broker reference documentation
to learn more about the technical details of the Kafka broker microservice.
6.1.2.3 - Kafka Console
Kafka-console uses Redpanda Console
to help you manage and debug your Kafka workloads effortlessly.
With it, you can explore your Kafka topics, view messages, list the active
consumers, and more.
How it works
You can access the Kafka console via its Service.
It’s automatically connected to the Kafka broker, so you can start using it
right away.
You can view the Kafka broker configuration in the Broker tab, and explore the
topics in the Topics tab.
What’s next
Read the Kafka Console reference documentation
to learn more about the technical details of the Kafka Console microservice.
6.1.2.4 - MQTT Broker
The MQTT broker in the United Manufacturing Hub is HiveMQ
and is customized to fit the needs of the stack. It’s a core component of
the stack and is used to communicate between the different microservices.
How it works
The MQTT broker is responsible for receiving MQTT messages from the
different microservices and forwarding them to the
MQTT Kafka bridge.
What’s next
Read the MQTT Broker reference documentation
to learn more about the technical details of the MQTT Broker microservice.
6.1.3 - Historian
Insight into the Historian’s role in storing and visualizing data within the
UMH ecosystem.
The Historian in the United Manufacturing Hub serves as a comprehensive data
management and visualization system. It includes:
kafka-to-postgresql-v2:
Archives Kafka messages adhering to the Data Model V2 schema into the database.
TimescaleDB:
An open-source SQL database specialized in time-series data storage.
Grafana:
A software tool for data visualization and analytics.
factoryinsight:
An analytics tool designed for data analysis, including calculating operational efficiency metrics like OEE.
Redis:
Utilized as an in-memory data structure store for caching purposes.
This structure ensures that data from the Unified Namespace is systematically
stored, processed, and made visually accessible, providing OT professionals with
real-time insights and analytics on shop floor operations.
6.1.3.1 - Cache
The cache in the United Manufacturing Hub is Redis, a
key-value store that is used as a cache for the other microservices.
How it works
Recently used data is stored in the cache to reduce the load on the database.
All the microservices that need to access the database will first check if the
data is available in the cache. If it is, it will be used, otherwise the
microservice will query the database and store the result in the cache.
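This check-then-populate pattern (often called cache-aside) can be sketched generically; a plain dict stands in for Redis here, and the callable names are hypothetical:

```python
def get_with_cache(key, cache: dict, query_db):
    """Cache-aside read: return the cached value when present, otherwise
    query the database and populate the cache for subsequent reads.
    With redis-py you would use r.get()/r.set() in place of the dict."""
    if key in cache:
        return cache[key]
    value = query_db(key)
    cache[key] = value
    return value
```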
By default, Redis is configured to run in standalone mode, which means that it
will only have one master node.
What’s next
Read the Cache reference documentation
to learn more about the technical details of the cache microservice.
6.1.3.2 - Database
The database microservice is the central component of the United Manufacturing
Hub and is based on TimescaleDB, an open-source relational database built for
handling time-series data. TimescaleDB is designed to provide scalable and
efficient storage, processing, and analysis of time-series data.
You can find more information on the datamodel of the database in the
Data Model section, and read
about the choice to use TimescaleDB in the
blog article.
How it works
When deployed, the database microservice will create two databases, with the
related usernames and passwords:
grafana: This database is used by Grafana to store the dashboards and
other data.
factoryinsight: This database is the main database of the United Manufacturing
Hub. It contains all the data that is collected by the microservices.
Read the Database reference documentation
to learn more about the technical details of the database microservice.
6.1.3.3 - Factoryinsight
Factoryinsight is a microservice that provides a set of REST APIs to access the
data from the database. It is particularly useful to calculate the Key
Performance Indicators (KPIs) of the factories.
How it works
Factoryinsight exposes REST APIs to access the data from the database or calculate
the KPIs. By default, it’s only accessible from the internal network of the
cluster, but it can be configured to be
accessible from the external network.
The APIs require authentication, which can be either Basic Auth or a Bearer
token. Both of these can be found in the Secret factoryinsight-secret.
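As an illustration of the Basic Auth variant, the header value is the base64 encoding of `user:password` (the user name and example credentials below are placeholders, not the actual Secret contents):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the value for the HTTP Authorization header using Basic Auth,
    as expected by the factoryinsight APIs."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"
```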
What’s next
Read the Factoryinsight reference documentation
to learn more about the technical details of the Factoryinsight microservice.
6.1.3.4 - Grafana
The grafana microservice is a web application that provides visualization and
analytics capabilities. Grafana allows you to query, visualize, alert on and
understand your metrics no matter where they are stored.
It has a rich ecosystem of plugins that allow you to extend its functionality
beyond the core features.
How it works
Grafana is a web application that can be accessed through a web browser. It
lets you create dashboards to visualize data from the database.
Thanks to some custom datasource plugins,
Grafana can use the various APIs of the United Manufacturing Hub to query the
database and display useful information.
What’s next
Read the Grafana reference documentation
to learn more about the technical details of the grafana microservice.
6.1.3.5 - Kafka to Postgresql V2
The Kafka to PostgreSQL v2 microservice plays a crucial role in consuming and
translating Kafka messages for storage in a PostgreSQL database. It aligns with
the specifications outlined in the Data Model v2.
How it works
Utilizing Data Model v2, Kafka to PostgreSQL v2 is specifically configured to
process messages from topics beginning with umh.v1.. Each new topic undergoes
validation against Data Model v2 before message consumption begins. This ensures
adherence to the defined data structure and standards.
Message payloads are scrutinized for structural validity prior to database insertion.
Messages with invalid payloads are systematically rejected to maintain data integrity.
The microservice then evaluates the payload to determine the appropriate table
for insertion within the PostgreSQL database. The decision is based on the data
type of the payload field, adhering to the following rules:
Numeric data types are directed to the tag table.
String data types are directed to the tag_string table.
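The table-routing rules above can be sketched as a simple type dispatch (a hypothetical helper; the actual microservice is not written in Python, and how it treats booleans is an assumption of this sketch):

```python
def target_table(value) -> str:
    """Pick the destination table by payload type, per the documented
    rules: numeric values go to `tag`, strings to `tag_string`.
    Note: in Python, bool is a subclass of int, so booleans land in
    `tag` here -- an assumption of this sketch."""
    if isinstance(value, (int, float)):
        return "tag"
    if isinstance(value, str):
        return "tag_string"
    raise ValueError(f"unsupported payload type: {type(value).__name__}")
```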
What’s next
Read the Kafka to Postgresql v2
reference documentation to learn more about the technical details of the
Kafka to Postgresql v2 microservice.
6.1.3.6 - Umh Datasource V2
The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch
resources from a database and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders
for the future implementation of the new data model.
Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.
Next you can specify how to transform the data, depending on what value you selected.
For example, all the custom tags have aggregation options available. If you query a processValue:
Time bucket: lets you group data in a time bucket
Aggregates: common statistical aggregations (maximum, minimum, sum or count)
Handling missing values: lets you choose how missing data should be handled
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-v2-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
6.2 - Device & Container Infrastructure
Understand the automated deployment and setup process in UMH’s Device &
Container Infrastructure.
The Device & Container Infrastructure in the United Manufacturing Hub automates
the deployment and setup of the data infrastructure in various environments. It
is tailored for Edge deployments, particularly in Demilitarized Zones, to minimize
latency on-premise, and also extends into the Cloud to harness its functionalities.
It consists of several interconnected components:
Provisioning Server: Manages the initial bootstrapping of devices,
including iPXE configuration and ignition file distribution.
Flatcar Image Server: A central repository hosting various versions of
Flatcar Container Linux images, ensuring easy access and version control.
Customized iPXE: A specialized bootloader configured to streamline the
initial boot process by fetching UMH-specific settings and configurations.
First and Second Stage Flatcar OS: A two-stage operating system setup where
the first stage is a temporary OS used for installing the second stage, which
is the final operating system equipped with specific configurations and tools.
Installation Script: An automated script hosted at management.umh.app,
responsible for setting up and configuring the Kubernetes environment.
Kubernetes (k3s): A lightweight Kubernetes setup that forms the backbone
of the container orchestration system.
This infrastructure ensures a streamlined, automated installation process, laying
a robust foundation for the United Manufacturing Hub’s operation.
6.3 - Management Console
Delve into the functionalities and components of the UMH’s Management Console,
ensuring efficient system management.
The Management Console is pivotal in configuring, managing, and monitoring the
United Manufacturing Hub. It comprises a web application,
a backend API and the management companion agent, all designed to ensure secure and
efficient operation.
Web Application
The client-side Web Application, available at management.umh.app
enables users to register, add, and manage instances, and monitor the
infrastructure within the United Manufacturing Hub. All communications between
the Web Application and the user’s devices are end-to-end encrypted, ensuring
complete confidentiality from the backend.
Management Companion
Deployed on each UMH instance, the Management Companion acts as an agent responsible
for decrypting messages coming from the user via the Backend and executing
requested actions. Responses are end-to-end encrypted as well, maintaining a
secure and opaque channel to the Backend.
Management Updater
The Updater is a custom Job run by the Management Companion, responsible for
updating the Management Companion itself. Its purpose is to automate the process
of upgrading the Management Companion to the latest version, reducing the
administrative overhead of managing UMH instances.
Backend
The Backend is the public API for the Management Console. It functions as a bridge
between the Web Application and the Management Companion. Its primary role is to
verify user permissions for accessing UMH instances. Importantly, the backend
does not have access to the contents of the messages exchanged between the Web
Application and the Management Companion, ensuring that communication remains
opaque and secure.
6.4 - Legacy
This section gives an overview of the legacy microservices that can be found
in older versions of the United Manufacturing Hub.
This section provides a comprehensive overview of the legacy microservices within
the United Manufacturing Hub. These microservices are currently in a transitional
phase, being maintained and deployed alongside newer versions of UMH as we gradually
shift from Data Model v1 to v2. While these legacy components are set to be deprecated
in the future, they continue to play a crucial role in ensuring smooth operations
and compatibility during this transition period.
6.4.1 - Factoryinput
This microservice is still in development and is not considered stable for production use
Factoryinput provides REST endpoints for MQTT messages via HTTP requests.
This microservice is typically accessed via grafana-proxy.
How it works
The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.
The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST
request method. The customer, location, asset and value are all strings and are
used to build the MQTT topic. The body of the HTTP request is used as the MQTT
payload.
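A sketch of how the four path parameters could map to a topic, assuming the legacy ia/ topic prefix used by the v1 data model (the exact prefix is an assumption of this sketch):

```python
def mqtt_topic(customer: str, location: str, asset: str, value: str) -> str:
    """Assemble an MQTT topic from the four path parameters of
    /api/v1/{customer}/{location}/{asset}/{value}, assuming the
    legacy ia/ prefix."""
    return f"ia/{customer}/{location}/{asset}/{value}"
```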
What’s next
Read the Factoryinput reference
documentation to learn more about the technical details of the Factoryinput
microservice.
6.4.2 - Grafana Proxy
This microservice is still in development and is not considered stable for production use
How it works
The grafana-proxy microservice serves an HTTP REST endpoint located at
/api/v1/{service}/{data}. The service parameter specifies the backend
service to which the request should be proxied, like factoryinput or
factoryinsight. The data parameter specifies the API endpoint to forward to
the backend service. The body of the HTTP request is used as the payload for
the proxied request.
What’s next
Read the Grafana Proxy reference
documentation to learn more about the technical details of the Grafana Proxy
microservice.
6.4.3 - Kafka Bridge
Kafka-bridge is a microservice that connects two Kafka brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice has two ways of operation:
High Integrity: This mode is used for topics that are critical for the
user. It is guaranteed that no messages are lost. This is achieved by
committing the message only after it has been successfully forwarded to the
remote broker. Usually all the topics are forwarded in this mode, except for
processValue, processValueString and raw messages.
High Throughput: This mode is used for topics that are not critical for
the user. They are forwarded as fast as possible, but messages may be
lost, for example if the remote broker struggles to keep up. Usually
only the processValue, processValueString and raw messages are forwarded in
this mode.
What’s next
Read the Kafka Bridge reference documentation
to learn more about the technical details of the Kafka Bridge microservice.
6.4.4 - Kafka State Detector
This microservice is still in development and is not considered stable for production use
How it works
What’s next
6.4.5 - Kafka to Postgresql
Kafka-to-postgresql is a microservice responsible for consuming kafka messages
and inserting the payload into a Postgresql database. Take a look at the
Datamodel to see how the data is structured.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.
How it works
By default, kafka-to-postgresql sets up two Kafka consumers, one for the
High Integrity topics and one for the
High Throughput topics.
The graphic below shows the program flow of the microservice.
High integrity
The High integrity topics are forwarded to the database in a synchronous way.
This means that the microservice will wait for the database to respond with a
non-error message before committing the message to the Kafka broker.
This way, the message is guaranteed to be inserted into the database, even though
it might take a while.
Most of the topics are forwarded in this mode.
The picture below shows the program flow of the high integrity mode.
High throughput
The High throughput topics are forwarded to the database in an asynchronous way.
This means that the microservice will not wait for the database to respond with
a non-error message before committing the message to the Kafka broker.
This way, the message is not guaranteed to be inserted into the database, but
the microservice will try to insert the message into the database as soon as
possible. This mode is used for the topics that are expected to have a high
throughput.
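The difference between the two modes comes down to when the Kafka offset is committed relative to the database write. A minimal sketch with injected callables (hypothetical names; the real microservice is not written in Python):

```python
def forward_high_integrity(message, insert_into_db, commit_offset):
    """High-integrity path: commit the offset only after the database
    insert succeeded. If the insert raises, the offset is never
    committed, so the message is redelivered rather than lost."""
    insert_into_db(message)
    commit_offset(message)

def forward_high_throughput(message, queue_for_db, commit_offset):
    """High-throughput path: commit immediately and hand the message to
    an asynchronous writer; messages may be lost if the database
    falls behind."""
    commit_offset(message)
    queue_for_db(message)
```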
Read the Kafka to Postgresql reference documentation
to learn more about the technical details of the Kafka to Postgresql microservice.
6.4.6 - MQTT Bridge
MQTT-bridge is a microservice that connects two MQTT brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice subscribes to topics on the local broker and publishes the
messages to the remote broker, while also subscribing to topics on the remote
broker and publishing the messages to the local broker.
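The two-way forwarding can be sketched with in-memory stand-ins (hypothetical class and function names; a real bridge would use an MQTT client library):

```python
class Broker:
    """Minimal in-memory stand-in for an MQTT broker."""
    def __init__(self):
        self.published = []
    def publish(self, topic, payload):
        self.published.append((topic, payload))

def bridge(local: Broker, remote: Broker, topic, payload, from_local: bool):
    """Forward one message in the appropriate direction: messages arriving
    on the local broker are republished to the remote one, and vice versa."""
    (remote if from_local else local).publish(topic, payload)
```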
What’s next
Read the MQTT Bridge reference documentation
to learn more about the technical details of the MQTT Bridge microservice.
6.4.7 - MQTT Kafka Bridge
Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers
and Kafka brokers, transferring messages from one to the other and vice versa.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists.
This will happen automatically from version 0.9.12.
Since version 0.9.10, it allows all raw messages, even if their content is not
in a valid JSON format.
How it works
Mqtt-kafka-bridge consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
What’s next
Read the MQTT Kafka Bridge
reference documentation to learn more about the technical details of the
MQTT Kafka Bridge microservice.
6.4.8 - MQTT Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
The IoTSensors MQTT Simulator is a microservice that simulates sensors sending data to the
MQTT broker. You can read the full documentation on the GitHub repository.
The microservice publishes messages on the topic ia/raw/development/ioTSensors/,
creating a subtopic for each simulation. The subtopics are the names of the
simulations, which are Temperature, Humidity, and Pressure.
The values are calculated using a normal distribution with a mean and standard
deviation that can be configured.
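The value generation described above can be sketched by drawing from a normal distribution (the function name and seeding are illustrative; only the topic prefix comes from the documentation):

```python
import random

def simulate(name: str, mean: float, stddev: float, n: int, seed: int = 42):
    """Generate n simulated readings for one simulation (e.g. Temperature)
    by drawing from a normal distribution with the configured mean and
    standard deviation, paired with the documented topic prefix."""
    rng = random.Random(seed)
    topic = f"ia/raw/development/ioTSensors/{name}"
    return [(topic, rng.gauss(mean, stddev)) for _ in range(n)]
```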
What’s next
Read the IoTSensors MQTT Simulator reference
documentation to learn more about the technical details of the IoTSensors MQTT Simulator
microservice.
6.4.9 - MQTT to Postgresql
This microservice is deprecated and should not be used anymore in production.
Please use kafka-to-postgresql instead.
How it works
The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves
the values of the messages on the topic ia/# in the database.
What’s next
Read the MQTT to Postgresql reference
documentation to learn more about the technical details of the MQTT to Postgresql
microservice.
6.4.10 - OPCUA Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
How it works
The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read
the full documentation on the
GitHub repository.
You can then connect to the simulated OPCUA server via Node-RED and read the
values of the simulated devices. Learn more about how to connect to the OPCUA
simulator to Node-RED in our guide.
What’s next
Read the OPCUA Simulator reference
documentation to learn more about the technical details of the OPCUA Simulator
microservice.
6.4.11 - PackML Simulator
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.
PackML MQTT Simulator is a virtual line that interfaces using PackML implemented
over MQTT. It implements the PackML state model and communicates
over MQTT topics as defined by environment variables. The simulator can run
with either a basic MQTT topic structure or Sparkplug B.
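For orientation, a small subset of the PackML state model (ISA-TR88) can be sketched as a transition table; this illustrates the model the simulator implements, not its actual code, and covers only a handful of the standard's states:

```python
# Subset of PackML state transitions: commands move the machine between
# states; "SC" denotes the automatic state-complete transition.
TRANSITIONS = {
    ("Idle", "Start"): "Starting",
    ("Starting", "SC"): "Execute",
    ("Execute", "SC"): "Completing",
    ("Completing", "SC"): "Complete",
    ("Complete", "Reset"): "Resetting",
    ("Resetting", "SC"): "Idle",
    ("Execute", "Stop"): "Stopping",
    ("Stopping", "SC"): "Stopped",
}

def step(state: str, command: str) -> str:
    """Apply a command; transitions not in the table leave the state unchanged."""
    return TRANSITIONS.get((state, command), state)
```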
Read the PackML Simulator reference
documentation to learn more about the technical details of the PackML Simulator
microservice.
6.4.12 - Tulip Connector
This microservice is still in development and is not considered stable for production use.
The tulip-connector microservice enables communication with the United
Manufacturing Hub by exposing internal APIs, like
factoryinsight, to the
internet. With this REST endpoint, users can access data stored in the UMH and
seamlessly integrate Tulip with a Unified Namespace and on-premise Historian.
Furthermore, the tulip-connector can be customized to meet specific customer
requirements, including integration with an on-premise MES system.
How it works
The tulip-connector acts as a proxy between the internet and the UMH. It
exposes an endpoint to forward requests to the UMH and returns the response.
What’s next
Read the Tulip Connector reference
documentation to learn more about the technical details of the Tulip Connector
microservice.
6.4.13 - Grafana Plugins
This section contains the overview of the custom Grafana plugins that can be
used to access the United Manufacturing Hub.
6.4.13.1 - Umh Datasource
This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.
We are no longer maintaining this microservice. Instead, use our new microservice datasource-v2 for data extraction from factoryinsight.
The umh datasource is a Grafana 8.X-compatible plugin that allows you to fetch resources from a database
and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select your query parameters Location, Asset and Value to build your query.
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
6.4.13.2 - Factoryinput Panel
This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.
This plugin is still in development and is not considered stable for production use
Below you will find a schematic of this flow through our stack.
7 - Production Guide
This section contains information about how to use the stack in a production
environment.
7.1 - Installation
This section contains guides on how to install the United Manufacturing Hub.
Learn how to install the United Manufacturing Hub using completely Free and Open
Source Software.
7.1.1 - Flatcar Installation
This page describes how to deploy the United Manufacturing Hub on Flatcar
Linux.
Here is a step-by-step guide on how to deploy the United Manufacturing Hub on
Flatcar Linux, a Linux distribution designed for
container workloads with high security and low maintenance. This will leverage
the UMH Device and Container Infrastructure.
The system can be installed either bare metal or in a virtual machine.
Before you begin
Ensure your system meets these minimum requirements:
4-core CPU
8 GB system RAM
32 GB available disk space
Internet access
You will also need the latest version of the iPXE boot image, suitable for your
system:
ipxe-x86_64-efi:
For modern systems, recommended for virtual machines.
For virtual machines, ensure UEFI boot is enabled when creating the VM.
Lastly, ensure you are on the same network as the device for SSH access post-installation.
System Preparation and Booting from iPXE
Identify the drive for Flatcar Linux installation. For virtual machines, this is
typically sda. For bare metal, the drive depends on your physical storage. The
troubleshooting section can help identify the correct drive.
Boot your device from the iPXE image. Consult your device or hypervisor
documentation for booting instructions.
At the first prompt, read and accept the license to proceed.
Next, configure your network settings. Select DHCP if uncertain.
The connection will be tested next. If it fails, revisit the network settings.
Ensure your device has internet access and no firewalls are blocking the connection.
Then, select the drive for Flatcar Linux installation.
A summary of the installation will appear. Check that everything is correct and
confirm to start the process.
Shortly after, you’ll see a green command line core@flatcar-0-install. Remove
the USB stick or the CD drive from the VM. The system will continue processing.
The installation will complete after a few minutes, and the system will reboot.
When you see the green core@flatcar-1-umh login prompt, the installation is
complete, and the device’s IP address will be displayed.
Installation time varies based on network speed and system performance.
Connect to the Device
With the system installed, access it via SSH.
For Windows 11 users, the default
Windows Terminal
is recommended. For other OS users, try MobaXTerm.
To do so, open you terminal of choice. We recommend the default
Windows Terminal,
or MobaXTerm if you are not on Windows 11.
Connect to the device using this command, substituting <ip-address> with your
device’s IP address:
ssh core@<ip-address>
When prompted, enter the default password for the core user: umh.
Troubleshooting
The Installation Stops at the First Green Login Prompt
If the installation halts at the first green login prompt, check the installation
status with:
systemctl status installer
A typical response for an ongoing installation will look like this:
● installer.service - Flatcar Linux Installer
Loaded: loaded (/usr/lib/systemd/system/installer.service; static; vendor preset: enabled)
Active: active (running) since Wed 2021-05-12 14:00:00 UTC; 1min 30s ago
If the status differs, the installation may have failed. Review the logs to
identify the issue.
Unsure Which Drive to Select
To determine the correct drive, refer to your device’s manual:
SATA drives (HDD or SSD): Typically labeled as sda.
NVMe drives: Usually labeled as nvme0n1.
For further verification, boot any Linux distribution on your device and execute:
lsblk
The output, resembling the following, will help identify the drive:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 512M 0 part /boot
└─sda2 8:2 0 223.1G 0 part /
sdb 8:16 0 31.8G 0 disk
└─sdb1 8:17 0 31.8G 0 part /mnt/usb
In most cases, the correct drive is the first listed or the one not matching the
USB stick size.
No Resources in the Cluster
If you can access the cluster but see no resources, SSH into the edge device and
check the cluster status:
systemctl status k3s
If the status is not active (running), the cluster isn’t operational. Restart it with:
sudo systemctl restart k3s
If the cluster is active or restarting doesn’t resolve the issue, inspect the
installation logs:
systemctl status umh-install
systemctl status helm-install
Persistent errors may necessitate a system reinstallation.
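Beyond the service status, you can read the full logs of these units with journalctl (a sketch, assuming journald is present as on Flatcar):

```shell
# Full logs of the cluster and install units, newest entries last
journalctl -u k3s --no-pager -e
journalctl -u umh-install --no-pager -e
journalctl -u helm-install --no-pager -e
```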
I can’t SSH into the virtual machine
Ensure that your computer is on the same network as the virtual machine, with no
firewalls or VPNs blocking the connection.
What’s next
You can follow the Getting Started guide
to get familiar with the UMH stack.
If you already know your way around the United Manufacturing Hub, you can
follow the Administration guides to
configure the stack for production.
7.2 - Upgrading
This section contains all upgrade guides, from the Companion of the Management Console to the UMH stack.
7.2.1 - Upgrade to v0.15.0
This page describes how to upgrade the United Manufacturing Hub from version 0.14.0 to 0.15.0.
Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
This page describes how to upgrade the United Manufacturing Hub from version 0.13.6 to 0.14.0.
Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
This page describes how to upgrade the United Manufacturing Hub from version 0.13.6 to 0.13.7.
Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
This page describes how to upgrade the United Manufacturing Hub to version 0.13.6.
Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
This page describes how to upgrade the United Manufacturing Hub to version
0.10.6. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
All the following commands are to be run from the UMH instance’s shell.
Update Helm Repo
Fetch the latest Helm charts from the UMH repository:
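A typical invocation might look like the following (a sketch; the repository alias and kubeconfig path are assumptions based on the default k3s install):

```shell
# Refresh the locally cached chart index for the UMH repository
sudo $(which helm) repo update --kubeconfig /etc/rancher/k3s/k3s.yaml
```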
Due to a limitation of Helm, we cannot automatically set grafana.env.GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=umh-datasource,umh-v2-datasource, nor automatically overwrite grafana.extraInitContainers[0].image=management.umh.app/oci/united-manufacturing-hub/grafana-umh.
You can either ignore this (if your network is not restricted to a single domain) or set both values manually in the Grafana deployment.
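If you choose to set the environment variable manually, one way is kubectl set env against the Grafana Deployment (a sketch; the Deployment name and namespace are assumptions based on a default UMH install):

```shell
# Inject the unsigned-plugins allowlist into the Grafana Deployment
sudo $(which kubectl) set env deployment/united-manufacturing-hub-grafana \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  -n united-manufacturing-hub \
  GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=umh-datasource,umh-v2-datasource
```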
Host system
Open the /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl using vi as root and add the following lines:
version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/rancher/k3s/agent/containerd"

[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  enable_unprivileged_ports = true
  enable_unprivileged_icmp = true
  sandbox_image = "management.umh.app/v2/rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  disable_snapshot_annotations = true

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/ab2055bc72380bad965b219e8688ac02b2e1b665cad6bdde1f8f087637aa81df/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]

# Mirror configuration for Docker Hub with fallback
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://management.umh.app/oci", "https://registry-1.docker.io"]

# Mirror configuration for GitHub Container Registry with fallback
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
  endpoint = ["https://management.umh.app/oci", "https://ghcr.io"]

# Mirror configuration for Quay with fallback
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
  endpoint = ["https://management.umh.app/oci", "https://quay.io"]

# Catch-all configuration for any other registries
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"]
  endpoint = ["https://management.umh.app/oci"]
Open /etc/flatcar/update.conf using vi as root and add the following lines:
This page describes how to perform the upgrades that are available for the Management Console.
Easily upgrade your UMH instance with the Management Console. This page offers clear, step-by-step instructions
for a smooth upgrade process.
Before you begin
Before proceeding with the upgrade of the Companion, ensure that you have the following:
A functioning UMH instance, verified as “online” and in good health.
A reliable internet connection.
Familiarity with the changelog of the new version you are upgrading to, especially to identify any breaking changes
or required manual interventions.
Management Companion
Upgrade your UMH instance seamlessly using the Management Console. Follow these steps:
Identify Outdated Instance
From the Overview tab, check for an upgrade icon next to your instance’s name, signaling an outdated Companion version.
Additionally, locate the Upgrade Companion button at the bottom of the tab.
Start the Upgrade
When you’re prepared to upgrade your UMH instance, start by pressing the Upgrade Companion button. This will open a modal,
initially displaying a changelog with a quick overview of the latest changes. You can expand the changelog for a detailed view
from your current version up to the latest one. Additionally, it may highlight any warnings requiring manual intervention.
Navigate through the changelog, and when comfortable, proceed by clicking the Next button. This step grants you access to
crucial information about recommended actions and precautions during the upgrade process.
With the necessary insights, take the next step by clicking the Upgrade button. The system will guide you through the upgrade
process, displaying real-time progress updates, including a progress bar and logs.
Upon successful completion, a confirmation message will appear. Simply click the Let’s Go button to return to the dashboard,
where you can seamlessly continue using your UMH instance with the latest enhancements.
United Manufacturing Hub
As of now, the upgrade of the UMH is not yet included in the Management Console, meaning that it has to be performed
manually. However, it is planned to be included in the future. Until then, you can follow the instructions in the
What’s New page.
Troubleshooting
I encountered an issue during the upgrade process. What should I do?
If you encounter issues during the upgrade process, consider the following steps:
Retry the Process: Sometimes, a transient issue may cause a hiccup. Retry the upgrade process to ensure it’s not a
temporary glitch.
Check Logs: Review the logs displayed during the upgrade process for any error messages or indications of what might
be causing the problem. This information can offer insights into potential issues.
If the problem persists after retrying and checking the logs, and you’ve confirmed that all prerequisites are met, please
reach out to our support team for assistance.
I installed the Management Companion before the 0.1.0 release. How do I upgrade it?
If you installed the Management Companion before the 0.1.0 release, you will need to reinstall it. This is because
we made some changes that are not compatible with the previous version.
Before reinstalling the Management Companion, you have to back up your configuration, so that you can restore
your connections after the upgrade. To do so, follow these steps:
Access your UMH instance via SSH.
Run the following command to backup your configuration:
sudo $(which kubectl) get configmap/mgmtcompanion-config --kubeconfig /etc/rancher/k3s/k3s.yaml -n mgmtcompanion -o=jsonpath='{.data}' | sed -e 's/^/{"data":/' | sed -e 's/$/}/'> mgmtcompanion-config.bak.json
This will create a file called mgmtcompanion-config.bak.json in your current directory.
For good measure, copy the file to your local machine:
Replace <user> with your username, and <ip> with the IP address of your UMH instance. You will be prompted
for your password.
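The copy itself can be done with scp, for example (a sketch; adjust the remote path if you ran the backup command from a different directory):

```shell
# Run this from your local machine, not from the instance
scp <user>@<ip>:mgmtcompanion-config.bak.json .
```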
Now you can reinstall the Management Companion. Follow the instructions in the Installation
guide. Your data will be preserved, and you will be able to restore your connections.
After the installation is complete, you can restore your connections by running
the following command:
The old Data Model will continue to work, and all the data will be still available.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
Upgrade Your Companion to the Latest Version
If you haven’t already, upgrade your Companion to the latest version. You can
easily do this from the Management Console by selecting your Instance and
clicking on the “Upgrade” button.
Upgrade the Helm Chart
The new Data Model was introduced in the 0.10 release of the Helm Chart. To move
to it, you first need to update the Helm Chart to the latest 0.9 release and then
upgrade to the latest 0.10 release.
There is no automatic way (yet!) to upgrade the Helm Chart, so you need to follow
the manual steps below.
First, after accessing your instance, find the Helm Chart version you are currently
using by running the following command:
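One way to list the installed release together with its chart version is helm list (a sketch; the namespace and kubeconfig path reflect a default k3s install):

```shell
# Shows the united-manufacturing-hub release with its chart version
sudo $(which helm) list -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
```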
Then, head to the upgrading archive
and follow the instructions to upgrade from your current version to the latest
version, one version at a time.
7.2.8 - Archive
This section is meant to archive the upgrading guides for the different versions of
the United Manufacturing Hub.
The United Manufacturing Hub is a continuously evolving product. This means that
new features and bug fixes are added to the product on a regular basis. This
section contains the upgrading guides for the different versions the United
Manufacturing Hub.
The upgrading process is done by upgrading the Helm chart.
7.2.8.1 - Upgrade to v0.9.34
This page describes how to upgrade the United Manufacturing Hub to version
0.9.34. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
All the following commands are to be run from the UMH instance’s shell.
Update Helm Repo
Fetch the latest Helm charts from the UMH repository:
This page describes how to upgrade the United Manufacturing Hub to version
0.9.15. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-hivemqce
united-manufacturing-hub-kafka
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
united-manufacturing-hub-mqttbridge
Open the Network tab.
From the Services section, delete the following services:
united-manufacturing-hub-kafka
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
If you want to activate the new data bridge, you need to add and edit the following section:
_000_commonConfig:
  ...
  datamodel_v2:
    enabled: true
    bridges:
      - mode: mqtt-kafka
        brokerA: united-manufacturing-hub-mqtt:1883 # The flow is always from A->B; for omni-directional flow, set up a 2nd bridge with reversed broker setup
        brokerB: united-manufacturing-hub-kafka:9092
        topic: umh.v1..* # accepts MQTT or Kafka topic format. After the topic separator, you can use # for the MQTT wildcard, or .* for the Kafka wildcard
        topicMergePoint: 5 # This is a new feature of the data model, which splits topics into topic and key (only in Kafka), preventing having lots of topics
        partitions: 6 # optional: number of partitions for the new Kafka topic. default: 6
        replicationFactor: 1 # optional: replication factor for the new Kafka topic. default: 1
  ...
You can also enable the new container registry by changing the values in the
image or image.repository fields from unitedmanufacturinghub/<image-name>
to ghcr.io/united-manufacturing-hub/<image-name>.
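As an illustration, a values fragment switching one image to the new registry might look like this (a sketch; the microservice name and exact keys are assumptions, check your own values file):

```yaml
factoryinsight:
  image:
    # was: unitedmanufacturinghub/factoryinsight
    repository: ghcr.io/united-manufacturing-hub/factoryinsight
```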
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.3 - Upgrade to v0.9.14
This page describes how to upgrade the United Manufacturing Hub to version
0.9.14. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-hivemqce
united-manufacturing-hub-kafka
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
united-manufacturing-hub-mqttbridge
Open the Network tab.
From the Services section, delete the following services:
united-manufacturing-hub-kafka
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed. For example,
if you want to apply the new tweaks to the resources in order to avoid the
Out Of Memory crash of the MQTT Broker, you can change the following values:
You can also enable the new container registry by changing the values in the
image or image.repository fields from unitedmanufacturinghub/<image-name>
to ghcr.io/united-manufacturing-hub/<image-name>.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.4 - Upgrade to v0.9.13
This page describes how to upgrade the United Manufacturing Hub to version
0.9.13. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.5 - Upgrade to v0.9.12
This page describes how to upgrade the United Manufacturing Hub to version
0.9.12. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
This step is only needed if you enabled RBAC for the MQTT Broker and changed the
default password. If you did not change the default password, you can skip this
step.
Navigate to Config > ConfigMaps.
Select the united-manufacturing-hub-hivemqce-extension
ConfigMap.
Copy the content of credentials.xml and save it in a safe place.
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Remove MQTT Broker extension PVC
In this version we reduced the size of the MQTT Broker extension PVC. To do so,
we need to delete the old PVC and create a new one. This process will set the
credentials of the MQTT Broker to the default ones. If you changed the default
password, you can restore them after the upgrade.
Navigate to Storage > Persistent Volume Claims.
Select the united-manufacturing-hub-hivemqce-claim-extensions PVC and
click Delete.
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
There are some incompatible changes in this version. To avoid errors, you
need to change the following values:
console:
  console:
    config:
      kafka:
        tls:
          passphrase: "" # <- remove this line
console.extraContainers: remove the property and its content.
console:
  extraContainers: {} # <- remove this line
console.extraEnv: remove the property and its content.
console:
  extraEnv: "" # <- remove this line
console.extraEnvFrom: remove the property and its content.
console:
  extraEnvFrom: "" # <- remove this line
console.extraVolumeMounts: remove the |- characters right after the
property name. It should look like this:
console:
  extraVolumeMounts: # <- remove the `|-` characters in this line
    - name: united-manufacturing-hub-kowl-certificates
      mountPath: /SSL_certs/kafka
      readOnly: true
console.extraVolumes: remove the |- characters right after the
property name. It should look like this:
console:
  extraVolumes: # <- remove the `|-` characters in this line
    - name: united-manufacturing-hub-kowl-certificates
      secret:
        secretName: united-manufacturing-hub-kowl-secrets
Change the console.service property to the following:
redis.sentinel: remove the property and its content.
redis:
  sentinel: {} # <- remove all the content of this section
Remove the property redis.master.command:
redis:
  master:
    command: /run.sh # <- remove this line
timescaledb-single.fullWalPrevention: remove the property and its content.
timescaledb-single:
  fullWalPrevention: # <- remove this line
    checkFrequency: 30 # <- remove this line
    enabled: false # <- remove this line
    thresholds: # <- remove this line
      readOnlyFreeMB: 64 # <- remove this line
      readOnlyFreePercent: 5 # <- remove this line
      readWriteFreeMB: 128 # <- remove this line
      readWriteFreePercent: 8 # <- remove this line
timescaledb-single.loadBalancer: remove the property and its content.
timescaledb-single:
  loadBalancer: # <- remove this line
    annotations: # <- remove this line
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
    enabled: true # <- remove this line
    port: 5432 # <- remove this line
timescaledb-single.replicaLoadBalancer: remove the property and its content.
timescaledb-single:
  replicaLoadBalancer:
    annotations: # <- remove this line
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
    enabled: false # <- remove this line
    port: 5432 # <- remove this line
timescaledb-single.secretNames: remove the property and its content.
timescaledb-single:
  secretNames: {} # <- remove this line
timescaledb-single.unsafe: remove the property and its content.
timescaledb-single:
  unsafe: false # <- remove this line
Change the value of the timescaledb-single.service.primary.type property
to LoadBalancer:
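Reconstructed from the instruction above, the resulting values fragment would look like this:

```yaml
timescaledb-single:
  service:
    primary:
      type: LoadBalancer
```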
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.6 - Upgrade to v0.9.11
This page describes how to upgrade the United Manufacturing Hub to version
0.9.11. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.7 - Upgrade to v0.9.10
This page describes how to upgrade the United Manufacturing Hub to version
0.9.10. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
In this release, the Grafana version has been updated from 8.5.9 to 9.3.1.
Check the release notes for
further information about the changes.
Additionally, the way default plugins are installed has changed. Unfortunately,
it is necessary to manually reinstall all the plugins that were previously installed.
If you didn't install any plugin other than the default ones, you can skip this
section.
Follow these steps to see the list of plugins installed in your cluster:
Open the browser and go to the Grafana dashboard.
Navigate to the Configuration > Plugins tab.
Select the Installed filter.
Write down all the plugins that you manually installed. You can recognize
them by not having the Core tag.
The following ones are installed by default, therefore you can skip them:
ACE.SVG by Andrew Rodgers
Button Panel by UMH Systems Gmbh
Button Panel by CloudSpout LLC
Discrete by Natel Energy
Dynamic Text by Marcus Olsson
FlowCharting by agent
Pareto Chart by isaozler
Pie Chart (old) by Grafana Labs
Timepicker Buttons Panel by williamvenner
UMH Datasource by UMH Systems Gmbh
Untimely by factry
Worldmap Panel by Grafana Labs
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
Afterwards, you can reinstall the additional Grafana plugins.
Replace VerneMQ with HiveMQ
In this upgrade we switched from using VerneMQ to HiveMQ as our MQTT Broker
(you can read the
blog article
about it).
While this process is fully backwards compatible, we suggest updating Node-RED
flows and any other additional services that use MQTT to use the new broker
service called united-manufacturing-hub-mqtt. The old
united-manufacturing-hub-vernemq is still functional and,
despite the name, also points to HiveMQ, but it will be removed in a future upgrade.
Please double-check that all of your services can connect to the new MQTT broker.
They may need to be restarted so that they can resolve the DNS name and get the
new IP. Also, with tools like ChirpStack, you may need to specify the client-id,
as the automatically generated ID worked with VerneMQ but is now declined by
HiveMQ.
Troubleshooting
Some microservices can’t connect to the new MQTT broker
If you are using the united-manufacturing-hub-mqtt service,
but some microservice can’t connect to it, restarting the microservice might
solve the issue. To do so, you can delete the Pod of the microservice and let
Kubernetes recreate it.
ChirpStack can’t connect to the new MQTT broker
ChirpStack uses a generated client-id to connect to the MQTT broker. This
client-id is not accepted by HiveMQ. To solve this issue, you can set the
client_id field in the integration.mqtt section of the ChirpStack configuration
file to a fixed value:
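For example, the relevant fragment of the configuration file could look like this (a sketch; the client_id value itself is arbitrary, pick any string that is unique among your MQTT clients):

```toml
[integration.mqtt]
client_id = "chirpstack"
```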
This page describes how to upgrade the United Manufacturing Hub to version
0.9.9. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
In the grafana section, find the extraInitContainers field and change the
value of the image field to unitedmanufacturinghub/grafana-plugin-extractor:0.1.4.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.9 - Upgrade to v0.9.8
This page describes how to upgrade the United Manufacturing Hub to version
0.9.8. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.10 - Upgrade to v0.9.7
This page describes how to upgrade the United Manufacturing Hub to version
0.9.7. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
In the timescaledb-single section, make sure that the image.tag field
is set to pg13.8-ts2.8.0-p1.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
Change Factoryinsight API version
The Factoryinsight API version has changed from v1 to v2. To make sure that
you are using the new version, click on any Factoryinsight Pod and check that the
VERSION environment variable is set to 2.
If it’s not, follow these steps:
Navigate to the Workloads > Deployments tab.
Select the united-manufacturing-hub-factoryinsight-deployment deployment.
Click the Edit button to open the deployment’s configuration.
Find the spec.template.spec.containers[0].env field.
Set the value field of the VERSION variable to 2.
7.2.8.11 - Upgrade to v0.9.6
This page describes how to upgrade the United Manufacturing Hub to version
0.9.6. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
This command could take a while to complete, especially on larger tables.
Type exit to close the shell.
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
7.2.8.12 - Upgrade to v0.9.5
This page describes how to upgrade the United Manufacturing Hub to version
0.9.5. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Now you can close the shell by typing exit and continue with the upgrade process.
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
end_time_stamp has been renamed to timestamp_ms_end
deleteShiftByAssetIdAndBeginTimestamp and deleteShiftById have been removed.
Use the deleteShift
message instead.
7.2.8.13 - Upgrade to v0.9.4
This page describes how to upgrade the United Manufacturing Hub to version
0.9.4. Before upgrading, remember to back up the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
If you have enabled Barcodereader,
find the barcodereader section and set the
following values, adding the missing ones and updating the already existing
ones:
enabled: false
image:
  pullPolicy: IfNotPresent
resources:
  requests:
    cpu: "2m"
    memory: "30Mi"
  limits:
    cpu: "10m"
    memory: "60Mi"
scanOnly: false # Debug mode, will not send data to kafka
Click Upgrade.
The upgrade process can take a few minutes. The process is complete when the
Status field of the release is Deployed.
7.3 - Administration
This section describes how to manage and configure the United Manufacturing Hub
cluster.
In this section, you will find information about how to manage and configure the
United Manufacturing Hub cluster, from customizing the cluster to access the
different services.
7.3.1 - Access the Database
This page describes how to access the United Manufacturing Hub database to
perform SQL operations using a database client or the CLI.
There are multiple ways to access the database. If you want to just visualize data,
then using Grafana or a database client is the easiest way. If you need to also
perform SQL commands, then using a database client or the CLI are the best options.
Generally, using a database client gives you the most flexibility, since you can
both visualize the data and manipulate the database. However, it requires you to
install a database client on your machine.
Using the CLI gives you more control over the database, but it requires you to
have a good understanding of SQL.
Grafana comes with a pre-configured PostgreSQL datasource, so you can use it to
visualize the data.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
Get the database credentials
If you are not using the CLI, you need to know the database credentials. You can
find them in the timescale-post-init-pw Secret. Run the
following command to get the credentials:
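The command itself does not appear above; a minimal sketch of what it might look like on a k3s instance (the Secret name and namespace come from this guide, while the go-template output format is an assumption):

```shell
# Decode every key in the timescale-post-init-pw Secret
# (namespace assumed to be united-manufacturing-hub, as elsewhere in this guide)
sudo $(which kubectl) get secret timescale-post-init-pw \
  -n united-manufacturing-hub \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  -o go-template='{{range $k, $v := .data}}{{$k}}: {{$v | base64decode}}{{"\n"}}{{end}}'
```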
For the sake of this tutorial, pgAdmin will be used as an example, but other clients
have similar functionality. Refer to the specific client documentation for more
information.
Using pgAdmin
You can use pgAdmin to access the database. To do so,
you need to install the pgAdmin client on your machine. For more information, see
the pgAdmin documentation.
Once you have installed the client, you can add a new server from the main window.
In the General tab, give the server a meaningful name. In the Connection
tab, enter the database credentials:
The Host name/address is the IP address of your instance.
The Port is 5432.
The Maintenance database is postgres.
The Username and Password are the ones you found in the Secret.
Click Save to save the server.
You can now connect to the database by double-clicking the server.
Use the side menu to navigate through the server. The tables are listed under
the Schemas > public > Tables section of the factoryinsight database.
Refer to the pgAdmin documentation
for more information on how to use the client to perform database operations.
Access the database using the command line interface
You can access the database from the command line using the psql command
directly from the united-manufacturing-hub-timescaledb-0 Pod.
You will not need credentials to access the database from the Pod’s CLI.
The following steps need to be performed from the machine where the cluster is
running, either by logging into it or by using a remote shell.
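The exec command is not printed in the text; a sketch, assuming the default Pod name mentioned above and the postgres superuser:

```shell
# Open a psql session inside the TimescaleDB Pod; no extra credentials
# are needed when connecting from within the Pod
sudo $(which kubectl) exec -it united-manufacturing-hub-timescaledb-0 \
  -n united-manufacturing-hub \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  -- psql -U postgres
```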
This page describes how to access services from within the cluster.
All the services deployed in the cluster are visible to each other. That makes it
easy to connect them together.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
Connect to a service from another service
To connect to a service from another service, you can use the service name as the
host name.
To get a list of available services and related ports you can run the following
command from the instance:
sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
All of them are available from within the cluster. The ones of type LoadBalancer
are also available from outside the cluster using the node IP and the port listed
in the Ports column.
Use the port on the left side of the colon (:) to connect to the service from
outside the cluster. For example, the database is available on port 5432.
Example
The most common use case is to connect to the MQTT Broker from Node-RED.
To do that, when you create the MQTT node, you can use the service name
united-manufacturing-hub-mqtt as the host name and one of the ports
listed in the Ports column.
The MQTT service name has changed since version 0.9.10. If you are using an older
version, use united-manufacturing-hub-vernemq instead of
united-manufacturing-hub-mqtt.
This page describes how to access services from outside the cluster.
Some of the microservices in the United Manufacturing Hub are exposed outside
the cluster with a LoadBalancer service. A LoadBalancer is a service
that exposes a set of Pods on the same network as the cluster, but
not necessarily to the entire internet. The LoadBalancer service
provides a single IP address that can be used to access the Pods.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
Accessing the services
To get a list of available services and related ports you can run the following
command from the instance:
sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
All of them are available from within the cluster. The ones of type LoadBalancer
are also available from outside the cluster using the node IP and the port listed
in the Ports column.
Use the port on the left side of the colon (:) to connect to the service from
outside the cluster. For example, the database is available on port 5432.
Services with LoadBalancer by default
The following services are exposed outside the cluster with a LoadBalancer
service by default:
To access Node-RED, you need to use the /nodered path, for example
http://192.168.1.100:1880/nodered.
Services with NodePort by default
The Kafka Broker uses the service
type NodePort by default.
Follow these steps to access the Kafka Broker outside the cluster:
Access your instance via SSH
Execute this command to check the host port of the Kafka Broker:
sudo $(which kubectl) get svc united-manufacturing-hub-kafka-external -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
In the PORT(S) column, you should be able to see the port with 9094:<host-port>/TCP.
To access the Kafka Broker, use <instance-ip-address>:<host-port>.
Services with ClusterIP
Some of the microservices in the United Manufacturing Hub are exposed via
a ClusterIP service. That means that they are only accessible from within the
cluster itself. There are two options for enabling access to them from outside the cluster:
Creating a LoadBalancer service:
A LoadBalancer is a service that exposes a set of Pods on the same network
as the cluster, but not necessarily to the entire internet.
Port forwarding:
You can just forward the port of a service to your local machine.
Port forwarding can be unstable, especially if the connection to the cluster is
slow. If you are experiencing issues, try to create a LoadBalancer service
instead.
Create a LoadBalancer service
Follow these steps to enable the LoadBalancer service for the corresponding microservice:
Execute the following command to list the services and note the name of
the one you want to access.
sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
Start editing the service configuration by running this command:
Where <local-port> is the port on the host that you want to use,
and <remote-port> is the service port that you noted before.
Usually, it’s good practice to pick a high number (greater than 30000)
for the host port, in order to avoid conflicts.
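The port-forward command itself is missing from the text; a sketch under the same k3s conventions used elsewhere on this page, with the service name as a placeholder:

```shell
# Forward <local-port> on the host to <remote-port> of the service;
# --address 0.0.0.0 makes the forward reachable via the node's IP,
# not just from localhost
sudo $(which kubectl) port-forward service/<service-name> <local-port>:<remote-port> \
  -n united-manufacturing-hub \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  --address 0.0.0.0
```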
You should be able to see logs like:
Forwarding from 127.0.0.1:31922 -> 9121
Forwarding from [::1]:31922 -> 9121
Handling connection for 31922
You can now access the service using the IP address of the node and
the port you choose.
Security considerations
MQTT broker
There are some security considerations to keep in mind when exposing the MQTT
broker.
By default, the MQTT broker is configured to allow anonymous connections. This
means that anyone can connect to the broker without providing any credentials.
This is not recommended for production environments.
To secure the MQTT broker, you can configure it to require authentication. For
that, you can either enable RBAC
or set up HiveMQ PKI (recommended
for production environments).
Troubleshooting
LoadBalancer service stuck in Pending state
If the LoadBalancer service is stuck in the Pending state, it probably means
that the host port is already in use. To fix this, edit the service and change
the section spec.ports.port to a different port number.
This page describes how to install custom drivers in Node-RED.
Node-RED runs on Alpine Linux as a non-root user, which means you can't
install packages with apk. This tutorial shows you how to install packages
with proper security measures.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
This page describes how to execute Kafka shell scripts.
When working with Kafka, you may need to execute shell scripts to perform
administrative tasks. This page describes how to execute Kafka shell scripts.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
This page describes how to reduce the size of the United Manufacturing Hub database.
Over time, time-series data can consume a large amount of disk space. To reduce
the amount of disk space used by time-series data, there are three options:
Enable data compression. This reduces the required disk space by applying
mathematical compression to the data. This compression is lossless, so the data
is not changed in any way. However, it will take more time to compress and
decompress the data. For more information, see how
TimescaleDB compression works.
Enable data retention. This deletes old data that is no longer needed, by
setting policies that automatically delete data older than a specified time. This
can be beneficial for managing the size of the database, as well as adhering to
data retention regulations. However, by definition, data loss will occur. For
more information, see how
TimescaleDB data retention works.
Downsampling. This is a method of reducing the amount of data stored by
aggregating data points over a period of time. For example, you can aggregate
data points over a 30-minute period, instead of storing each data point. If exact
data is not required, downsampling can be useful to reduce database size.
However, data may be less accurate.
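As an illustration of the downsampling option, a TimescaleDB continuous aggregate can bucket raw points into 30-minute averages. This is only a sketch: the view name and the column names (timestamp, asset_id, name, value) are assumptions about the tag table's schema.

```sql
-- Hypothetical 30-minute downsampled view of the "tag" hypertable
CREATE MATERIALIZED VIEW tag_30min
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('30 minutes', timestamp) AS bucket,
    asset_id,
    name,
    avg(value) AS avg_value
FROM tag
GROUP BY bucket, asset_id, name;
```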
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
You can find sample SQL commands to enable data compression here.
The first step is to turn on data compression on the target table, and set the compression options. Refer to the TimescaleDB documentation for a full list of options.
-- Set "asset_id" as the key for the compressed segments and order the table by "valuename".
ALTER TABLE processvaluetable SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id', timescaledb.compress_orderby = 'valuename');
-- Set "asset_id" as the key for the compressed segments and order the table by "name".
ALTER TABLE tag SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id', timescaledb.compress_orderby = 'name');
Then, you have to create the compression policy. The interval determines the age that the chunks of data need to reach before being compressed. Read the official documentation for more information.
-- Set a compression policy on the "processvaluetable" table, which will compress data older than 7 days.
SELECT add_compression_policy('processvaluetable', INTERVAL '7 days');
-- Set a compression policy on the "tag" table, which will compress data older than 2 weeks.
SELECT add_compression_policy('tag', INTERVAL '2 weeks');
Enable data retention
You can find sample SQL commands to enable data retention here.
Sample command for factoryinsight and umh_v2 databases:
Enabling data retention only requires adding a policy with the desired
retention interval. Refer to the official documentation
for more detailed information about these queries.
-- Set a retention policy on the "processvaluetable" table, which will delete data older than 7 days.
SELECT add_retention_policy('processvaluetable', INTERVAL '7 days');
-- Set a retention policy on the "tag" table, which will delete data older than 3 months.
SELECT add_retention_policy('tag', INTERVAL '3 months');
This page describes how to reduce the number of Kafka topics in order to
lower the overhead by using the merge point feature.
Kafka excels at processing a high volume of messages but can encounter difficulties
with excessive topics, which may lead to insufficient memory. The optimal Kafka
setup involves minimal topics, utilizing the
event key for logical
data segregation.
On the contrary, MQTT shines when handling a large number of topics with a small
number of messages. But when bridging MQTT to Kafka, the number of topics can
become overwhelming.
Specifically, with the default configuration, Kafka is able to handle around
100-150 topics. This is because there is a limit of 1000 partitions per broker,
and each topic has 6 partitions by default.
So, if you are experiencing memory issues with Kafka, you may want to consider
combining multiple topics into a single topic with different keys. The diagram
below illustrates how this principle simplifies topic management.
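To see where the 100-150 figure comes from: with a 1000-partition-per-broker limit and 6 partitions per topic, the theoretical ceiling is about 166 topics, and replication plus internal topics reduce that in practice. A quick check:

```shell
# Theoretical topic ceiling: partition limit per broker / partitions per topic
echo $(( 1000 / 6 ))
```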
Before you begin
This tutorial is for advanced users. Contact us if you need assistance.
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
There are two configurations for the topic merge point: one in the Companion
configuration for Benthos data sources and another in the Helm chart for data bridges.
Data Sources
To adjust the topic merge point for data sources, modify the
mgmtcompanion-config ConfigMap. This
can be done with the following command:
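The edit command is not shown in the text; a sketch, where the mgmtcompanion namespace is an assumption:

```shell
# Open the Management Companion configuration in the default editor
sudo $(which kubectl) edit configmap mgmtcompanion-config \
  -n mgmtcompanion \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
```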
This command opens the current configuration in the default editor, allowing you
to set the umh_merge_point to your preferred value:
data:
  umh_merge_point: <numeric-value>
Ensure the value is at least 3 and update the lastUpdated field to the current
Unix timestamp to trigger the automatic refresh of existing data sources.
Data Bridge
For data bridges, the merge point is defined individually in the Helm chart values
for each bridge. Update the Helm chart installation with the new topicMergePoint
value for each bridge. See the Helm chart documentation
for more details.
Setting the topicMergePoint to -1 disables the merge feature.
7.3.9 - Delete Assets from the Database
This task shows you how to delete assets from the database.
This is useful if you have created assets by mistake, or to delete
the ones that are no longer needed.
This task deletes data from the database. Make sure you
have a backup of the database before you proceed.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
This command will open a psql shell connected to the default postgres database.
Connect to the factoryinsight database:
\c factoryinsight
Choose the assets to delete
You have multiple choices to delete assets, like deleting a single asset, or
deleting all assets in a location, or deleting all assets with a specific name.
To do so, you can customize the SQL command using different filters. Specifically,
a combination of the following filters:
assetid
location
customer
To filter an SQL command, you can use the WHERE clause. For example, using all
of the filters:
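The example itself is missing above; a sketch, where the table name assettable and the placeholder values are assumptions about the factoryinsight schema:

```sql
-- Hypothetical deletion of a single asset matching all three filters;
-- rows in other tables that reference the asset may need to be removed first
DELETE FROM assettable
WHERE assetid = 'my-asset'
  AND location = 'my-location'
  AND customer = 'my-customer';
```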
This command will open a psql shell connected to the default postgres database.
Connect to the umh_v2 database:
\c umh_v2
Choose the assets to delete
You have multiple choices to delete assets, like deleting a single asset, or
deleting all assets in a location, or deleting all assets with a specific name.
To do so, you can customize the SQL command using different filters. Specifically,
a combination of the following filters:
enterprise
site
area
line
workcell
origin_id
To filter an SQL command, you can use the WHERE clause. For example, you can filter
by enterprise, site, and area:
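The example itself is missing above; a sketch, where the asset table name and column names are assumptions about the umh_v2 schema:

```sql
-- Hypothetical deletion from the umh_v2 asset table, filtering by
-- enterprise, site, and area
DELETE FROM asset
WHERE enterprise = 'my-enterprise'
  AND site = 'my-site'
  AND area = 'my-area';
```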
This page shows how to explore cached data in the United Manufacturing Hub.
When working with the United Manufacturing Hub, you might want to visualize
information about the cached data. This page shows how you can access the cache
and explore the data.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by following the Getting Started
guide.
You also need to access the system where the cluster is running, either by
logging into it or by using a remote shell.
Open a shell in the cache Pod
Get access to the instance’s shell and execute the following commands.
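The commands are not shown above; assuming the cache is the bundled Redis and the Pod follows the naming convention used elsewhere in this guide, opening an interactive session might look like:

```shell
# Open a redis-cli session in the cache Pod
# (Pod name is an assumption; authentication may also be required)
sudo $(which kubectl) exec -it united-manufacturing-hub-redis-master-0 \
  -n united-manufacturing-hub \
  --kubeconfig /etc/rancher/k3s/k3s.yaml \
  -- redis-cli
```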
Download the backup scripts and extract the content in a folder of your choice.
For this task, you need to have PostgreSQL
installed on your machine.
You also need to have enough space on your machine to store the backup. To check
the size of the database, ssh into the system and follow the steps below:
To run the backup script, you’ll first need to obtain a copy of the Kubernetes
configuration file from your instance. This is essential for providing the
script with access to the instance.
In the shell of your instance, execute the following command to display the
Kubernetes configuration:
sudo cat /etc/rancher/k3s/k3s.yaml
Make sure to copy the entire output to your clipboard.
This tutorial is based on the assumption that your kubeconfig file is located
at /etc/rancher/k3s/k3s.yaml. Depending on your setup, the actual file location
might be different.
Open a text editor, like Notepad, on your local machine and paste the copied content.
In the pasted content, find the server field. It usually defaults to https://127.0.0.1:6443.
Replace this with your instance’s IP address:
server: https://<INSTANCE_IP>:6443
Save the file as k3s.yaml inside the backup folder you downloaded earlier.
Backup using the script
The backup script is located inside the folder you downloaded earlier.
You can find a list of all available parameters down below.
If OutputPath is not set, the backup will be stored in the current folder.
This script might take a while to finish, depending on the size of your database
and your connection speed.
If the connection is interrupted, there is currently no option to resume the process, therefore you will need to start again.
Here is a list of all available parameters:
Available parameters
| Parameter | Description | Required | Default value |
| --- | --- | --- | --- |
| GrafanaToken | Grafana API key | Yes | |
| IP | IP of the cluster to backup | Yes | |
| KubeconfigPath | Path to the kubeconfig file | Yes | |
| DatabaseDatabase | Name of the database to backup | No | factoryinsight |
| DatabasePassword | Password of the database user | No | changeme |
| DatabasePort | Port of the database | No | 5432 |
| DatabaseUser | Database user | No | factoryinsight |
| DaysPerJob | Number of days worth of data to backup in each parallel job | No | 31 |
| EnableGpgEncryption | Set to true if you want to encrypt the backup | No | false |
| EnableGpgSigning | Set to true if you want to sign the backup | No | false |
| GpgEncryptionKeyId | ID of the GPG key used for encryption | No | |
| GpgSigningKeyId | ID of the GPG key used for signing | No | |
| GrafanaPort | External port of the Grafana service | No | 8080 |
| OutputPath | Path to the folder where the backup will be stored | No | Current folder |
| ParallelJobs | Number of parallel job backups to run | No | 4 |
| SkipDiskSpaceCheck | Skip checking available disk space | No | false |
| SkipGpgQuestions | Set to true if you want to sign or encrypt the backup | No | false |
Restore
Each component of the United Manufacturing Hub can be restored separately, in
order to allow for more flexibility and to reduce the damage in case of a
failure.
Copy kubeconfig file
To run the restore scripts, you’ll first need to obtain a copy of the Kubernetes
configuration file from your instance. This is essential for providing the
script with access to the instance.
In the shell of your instance, execute the following command to display the
Kubernetes configuration:
sudo cat /etc/rancher/k3s/k3s.yaml
Make sure to copy the entire output to your clipboard.
This tutorial is based on the assumption that your kubeconfig file is located
at /etc/rancher/k3s/k3s.yaml. Depending on your setup, the actual file location
might be different.
Open a text editor, like Notepad, on your local machine and paste the copied content.
In the pasted content, find the server field. It usually defaults to https://127.0.0.1:6443.
Replace this with your instance’s IP address:
server: https://<INSTANCE_IP>:6443
Save the file as k3s.yaml inside the backup folder you downloaded earlier.
Cluster configuration
To restore the Kubernetes cluster, execute the .\restore-helm.ps1 script with
the following parameters:
Execute the .\restore-timescale.ps1 and .\restore-timescale-v2.ps1 script with the
following parameters to restore factoryinsight and umh_v2 databases:
Unable to connect to the server: x509: certificate signed …
This issue may occur when the device’s IP address changes from DHCP to static
after installation. A quick solution is to skip TLS validation. If you want
to enable the insecure-skip-tls-verify option, run the following command on
the instance’s shell before copying kubeconfig on the server:
This page describes how to backup and restore the database.
Before you begin
For this task, you need to have PostgreSQL
installed on your machine. Make sure that its version is compatible with the version
installed on the UMH.
Also, enough free space is required on your machine to store the backup. To check
the size of the database, ssh into the system and follow the steps below:
If you want to back up the Grafana or umh_v2 database, you can follow the same steps
as above, but you need to replace any occurrence of factoryinsight with grafana.
In addition, you need to write down the credentials in the
grafana-secret Secret, as they are necessary
to access the dashboard after restoring the database.
The default username for the umh_v2 database is kafkatopostgresqlv2, and the password is
changemetoo.
Restoring the database
For this section, we assume that you are restoring the data to a fresh United
Manufacturing Hub installation with an empty database.
Temporarily disable kafkatopostgresql, kafkatopostgresqlv2, and factoryinsight
Since the kafkatopostgresql, kafkatopostgresqlv2, and factoryinsight microservices
might write data into the database while it is being restored, they should be
disabled. Connect to your server via SSH and run the following command:
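The command is not printed above; a sketch using kubectl scale, where the Deployment names are assumptions based on the naming convention elsewhere in this guide:

```shell
# Temporarily stop the services that write to the database
sudo $(which kubectl) scale deployment \
  united-manufacturing-hub-kafkatopostgresql \
  united-manufacturing-hub-kafkatopostgresqlv2 \
  united-manufacturing-hub-factoryinsight-deployment \
  --replicas=0 \
  -n united-manufacturing-hub \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
```

Scaling back to the original replica count after the restore re-enables the services.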
This section shows an example for restoring factoryinsight. If you want to restore
grafana, you need to replace any occurrence of factoryinsight with grafana.
For umh_v2, you should use kafkatopostgresqlv2 for the user name and
changemetoo for the password.
Make sure that your device is connected to the server via SSH and run the following command:
This page describes how to import and export Node-RED flows.
Export Node-RED Flows
To export Node-RED flows, please follow the steps below:
Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your
browser. Replace <CLUSTER-IP> with the IP address of your cluster, or
localhost if you are running the cluster locally.
From the top-right menu, select Export.
From the Export dialog, select which nodes or flows you want to export.
Click Download to download the exported flows, or Copy to clipboard to
copy the exported flows to the clipboard.
The credentials of the connector nodes are not exported. You will need to
re-enter them after importing the flows.
Import Node-RED Flows
To import Node-RED flows, please follow the steps below:
Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your
browser. Replace <CLUSTER-IP> with the IP address of your cluster, or
localhost if you are running the cluster locally.
From the top-right menu, select Import.
From the Import dialog, select the file containing the exported flows, or
paste the exported flows from the clipboard.
Click Import to import the flows.
7.5 - Security
This section contains information about how to secure the United Manufacturing
Hub.
7.5.1 - Enable RBAC for the MQTT Broker
This page describes how to enable Role-Based Access Control (RBAC) for the
MQTT broker.
Enable RBAC
Enable RBAC by upgrading the value in the Helm chart.
Replace <version> with the version of the HiveMQ CE extension. If you are
not sure which version is installed, you can press Tab after typing
java -jar hivemq-file-rbac-extension- to autocomplete the version.
Replace <password> with your desired password. Do not use any whitespaces.
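A hypothetical invocation of the hash generator looks like the following; the -p flag is an assumption, so check the README of your extension version for the exact syntax:

```shell
# Sketch: generate a salted password hash with the HiveMQ file RBAC extension.
# The jar location and the -p flag are assumptions; verify against the
# extension's documentation for your installed version.
java -jar hivemq-file-rbac-extension-<version>.jar -p <password>
```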
Copy the output of the command. It should look similar to this:
This command will open the default text editor with the ConfigMap contents.
Change the value between the <password> tags to the password hash
generated in step 4.
You can use a different password for each different microservice. Just
remember that you will need to update the configuration in each one
to use the new password.
Save the changes.
Recreate the Pod:
sudo $(which kubectl) delete pod united-manufacturing-hub-hivemqce-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
This page describes how to set up firewall rules for the UMH instances.
Some enterprise networks operate in a whitelist manner, where all outgoing and incoming communication
is blocked by default. However, the installation and maintenance of UMH requires internet access for
tasks such as downloading the operating system, Docker containers, monitoring via the Management Console,
and loading third-party plugins. As dependencies are hosted on various servers and may change based on
vendors’ decisions, we’ve simplified the user experience by consolidating all mandatory services under a
single domain. Nevertheless, if you wish to install third-party components like Node-RED or Grafana plugins,
you’ll need to whitelist additional domains.
Before you begin
The only prerequisite is having a firewall that allows modification of rules. If you’re unsure about this,
consider contacting your network administrator.
Firewall Configuration
Once you’re ready and ensured that you have the necessary permissions to configure the firewall, follow these steps:
Whitelist management.umh.app
This mandatory step requires whitelisting management.umh.app on TCP port 443 (HTTPS traffic). Not doing so will
disrupt UMH functionality; installations, updates, and monitoring won’t work as expected.
Optional: Whitelist domains for common 3rd party plugins
Include these common external domains and ports in your firewall rules to allow installing Node-RED and Grafana plugins:
registry.npmjs.org (required for installing Node-RED plugins)
storage.googleapis.com (required for installing Grafana plugins)
grafana.com (required for displaying Grafana plugins)
catalogue.nodered.org (required for displaying Node-RED plugins; only relevant for the client using Node-RED, not
the server it’s installed on).
Depending on your setup, additional domains may need to be whitelisted.
DNS Configuration (Optional)
By default, we use the DNS servers configured via DHCP. If you use a static IP or want to use a different DNS server,
contact us for a custom configuration file.
Bring your own containers
Our system first tries to fetch all containers from our own registry (management.umh.app).
If this fails, it falls back to the upstream registries: docker.io images from https://registry-1.docker.io, ghcr.io images from https://ghcr.io, and quay.io images from https://quay.io (any other images are fetched from management.umh.app).
If you need to use a different registry, edit /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl to set your own mirror configuration.
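A hypothetical mirror entry in that template could look like the following; the plugin section path and the mirror URL are illustrative assumptions, and the exact template structure depends on the containerd version shipped with your k3s release:

```toml
# Sketch: route docker.io pulls through a private mirror (URL is a placeholder).
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://registry.example.com"]
```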
Troubleshooting
I’m having connectivity problems. What should I do?
First of all, double-check that your firewall rules are configured as described in this page, especially the step
involving our domain. As a quick test, you can use the following command from a different machine within the same
network to check if the rules are working:
curl -vvv https://management.umh.app
7.5.3 - Setup PKI for the MQTT Broker
This page describes how to set up the Public Key Infrastructure (PKI) for the
MQTT broker.
If you want to use MQTT over TLS (MQTTS) or Secure WebSocket (WSS), you need
to set up a Public Key Infrastructure (PKI).
The Public Key Infrastructure for HiveMQ consists of two Java Key Stores (JKS):
Keystore: The Keystore contains the HiveMQ certificate and private keys.
This store must be kept confidential, since anyone with access to it could generate
valid client certificates and read or send messages in your MQTT infrastructure.
Truststore: The Truststore contains all the clients' public certificates.
HiveMQ uses it to verify the authenticity of the connections.
Before you begin
You need to have the following tools installed:
OpenSSL. If you are using Windows, you can install it with
Chocolatey.
<password>: The password for the keystore. You can use any password you want.
<days>: The number of days the certificate should be valid.
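Based on the placeholders above, the keystore generation can be sketched with the JDK's keytool; the alias, key size, and distinguished name are assumptions, so the original command may differ:

```shell
# Sketch: create hivemq.jks with a self-signed RSA key pair.
# Alias, key size, and DN are assumptions; substitute your own
# <password> and <days> values.
keytool -genkey -keyalg RSA -keysize 4096 -alias hivemq \
  -keystore hivemq.jks -storepass <password> -validity <days> \
  -dname "CN=hivemq"
```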
The command runs for a few minutes and generates a file named hivemq.jks in
the current directory, which contains the HiveMQ certificate and private key.
If you want to explore the contents of the keystore, you can use
Keystore Explorer.
Generate client certificates
Open a terminal and create a directory for the client certificates:
mkdir pki
Follow these steps for each client you want to generate a certificate for.
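The per-client steps can be sketched with OpenSSL as follows; the client name nodered and the validity period are assumptions, and the file naming convention matches the <servicename>-cert.pem / <servicename>-key.pem pattern used in the configuration later on:

```shell
# Sketch: create a self-signed certificate and key for a client named "nodered".
# Replace "nodered" with your client's name; -nodes leaves the key unencrypted.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout nodered-key.pem -out nodered-cert.pem -subj "/CN=nodered"
```

Each client certificate then also needs to be imported into the HiveMQ truststore, for example with keytool -importcert.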
You could also do it manually with the following command:
openssl base64 -A -in <filename> -out <filename>.b64
Now you can import the PKI into the United Manufacturing Hub. To do so, create
a file named pki.yaml with the following content:
_000_commonConfig:
  infrastructure:
    mqtt:
      tls:
        keystoreBase64: <content of hivemq.jks.b64>
        keystorePassword: <password>
        truststoreBase64: <content of hivemq-trust-store.jks.b64>
        truststorePassword: <password>
        <servicename>.cert: <content of <servicename>-cert.pem.b64>
        <servicename>.key: <content of <servicename>-key.pem.b64>
Now, copy it to your instance with the following command:
scp pki.yaml <username>@<ip-address>:/tmp
After that, access the instance with SSH and run the following command:
This section contains information about the new features and changes in the
United Manufacturing Hub.
All new features, changes, deprecations, and breaking changes in the United Manufacturing Hub are now documented in our Discord channel.
Join our community to stay up to date with the latest developments!
8.1 - Archive
This section is meant to archive the “What’s new” pages only related to the
United Manufacturing Hub’s Helm chart.
8.1.1 - What's New in Version 0.9.15
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.15.
Welcome to United Manufacturing Hub version 0.9.15! In this release we added
support for the UNS
data model, by introducing a new microservice, Data Bridge.
For a complete list of changes, refer to the
release notes.
Data Bridge
Data-bridge is a microservice
specifically tailored to adhere to the
UNS
data model. It consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
It can consume from and publish to both Kafka and MQTT brokers, whether they are
local or remote.
Its main purpose is to merge messages from multiple topics into a single topic,
using the message key to identify the source topic.
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.14.
Welcome to United Manufacturing Hub version 0.9.14! In this release we changed
the Kafka broker from Apache Kafka to RedPanda, which is a Kafka-compatible
event streaming platform. We also started migrating our microservices to a
different Kafka library, which will allow full ARM support in the future.
Finally, we tweaked the overall resource usage of the United Manufacturing Hub
to improve performance and efficiency, along with some bug fixes.
For a complete list of changes, refer to the
release notes.
RedPanda
RedPanda is a Kafka-compatible event streaming
platform. It is built with modern hardware in mind and utilizes multi-core CPUs
efficiently, which can result in better performance compared to Kafka. RedPanda
also offers lower latency and higher throughput, making it a better fit for
real-time use cases in IIoT applications. Additionally, RedPanda has a simpler
setup and management process compared to Kafka, which can save time and
resources for development teams. Finally, RedPanda is fully compatible with
Kafka’s API, allowing for a seamless transition for existing Kafka users.
Overall, RedPanda can provide improved performance and efficiency for IIoT
applications that require real-time data processing and management with a lower
setup and management cost.
Sarama Kafka Library
We started migrating our microservices to use the
Sarama Kafka library. This library is
written in Go and is fully compatible with RedPanda. This change will allow us
to support ARM-based devices in the future, which will be useful for edge
computing use cases. An added bonus is that Sarama is faster and requires less
memory than the previous library.
For now, we have only migrated the following microservices:
barcodereader
kafka-init (used as an init container for components that communicate with
Kafka)
mqtt-kafka-bridge
Resources tweaking
With this release we tweaked the resource requests of each default component
of the United Manufacturing Hub to respect the minimum requirements of 4 cores
and 8GB of RAM. This allowed us to increase the memory allocated for the MQTT
broker, resulting in solving the common Out Of Memory issue that caused the
broker to restart.
Be sure to follow the upgrade guide
to adjust your resources accordingly.
The following table shows the new resource requests and limits when deploying
the United Manufacturing Hub with the default configuration or with all the
components enabled. CPU values are expressed in millicores and memory values
are expressed in mebibytes.
Resource                   Requests        Limits
CPU (default values)       1080m (27%)     1890m (47%)
Memory (default values)    1650Mi (21%)    2770Mi (35%)
CPU (all components)       2002m (50%)     2730m (68%)
Memory (all components)    2873Mi (36%)    3578Mi (45%)
The requested resources are the ones immediately allocated to the container
when it starts, and the limits are the maximum amount of resources that the
container can (but is not forced to) use. For more information about Kubernetes
resources, refer to the
official documentation.
Container registry
We moved our container registry from Docker Hub to GitHub Container Registry.
This change won’t affect the way you deploy the United Manufacturing Hub, but
it will allow us to better manage our container images and provide a better
experience for our developers. For the time being, we will continue to publish
our images to Docker Hub, but we will eventually deprecate the old registry.
Others
Implemented a new test build to detect race conditions in the codebase. This
will help us to improve the stability of the United Manufacturing Hub.
All our custom images now run as non-root by default, except for the ones that
require root privileges.
The custom microservices now allow changing the type of Service used to
expose them by setting the serviceType field.
Added an SQL trigger function that deletes duplicate records from the
statetable table after insertion.
Enhanced the environment variables validation in the codebase.
Added the possibility to set the aggregation interval when calculating the
throughput of an asset.
Various dependencies have been updated to their latest versions.
8.1.3 - What's New in Version 0.9.13
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.13.
Welcome to United Manufacturing Hub version 0.9.13! This is a minor release that
only updates the new metrics feature.
For a complete list of changes, refer to the
release notes.
8.1.4 - What's New in Version 0.9.12
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.12.
Welcome to United Manufacturing Hub version 0.9.12! Read on to learn about
the new features of the UMH Datasource V2 plugin for Grafana, Redis running
in standalone mode, and more.
For a complete list of changes, refer to the
release notes.
Grafana
New Grafana version
Grafana has been upgraded to version 9.4.3. This introduces new search and
navigation features, a redesigned details section of the logs, and a new
data source connection page.
We have upgraded Node-RED to version 3.0.2.
Check out the Node-RED release notes for more information.
UMH Datasource V2 plugin
The latest update to the datasource has incorporated typesafe JSON parsing,
significantly enhancing the overall performance and dependability of the plugin.
This implementation ensures that the parsing process strictly adheres to predefined
data types, eliminating the possibility of unexpected errors or data corruption
that can occur with loosely-typed JSON parsing.
Redis in standalone mode
Redis, the service used for caching, is now deployed in standalone mode. This
change introduces these benefits:
Simplicity: Running Redis in standalone mode is simpler than using a
master-replica topology with Sentinel. With standalone mode, there is only one
Redis instance to manage, whereas with master-replica, you need to manage
multiple Redis instances and the Sentinel process. This simplicity can reduce
complexity and make it easier to manage Redis instances.
Lower Overhead: Standalone mode has lower overhead than using a master-replica
topology with Sentinel. In a master-replica topology, there is a communication
overhead between the master and the replicas, and Sentinel adds additional
overhead for monitoring and failover management. In contrast, standalone mode
does not have this overhead.
Better Performance: Since standalone mode does not have the overhead of
master-replica topology with Sentinel, it can provide better performance.
Standalone mode provides faster response times and can handle more requests
per second than a master-replica topology with Sentinel.
That being said, it’s important to note that a master-replica topology with
Sentinel provides higher availability and failover capabilities than standalone
mode.
All basic services are now exposed by a LoadBalancer Service
The MQTT Broker, Kafka Broker, and Kafka Console are now exposed by a
LoadBalancer Service, along with the Database, Grafana and Node-RED. This
change makes it easier to access these services from outside the cluster, as
they are now accessible via the IP address of the cluster.
When installing the United Manufacturing Hub locally, the cluster ports are
automatically mapped to the host ports. This means that you can access the
services from your browser by using localhost and the port number.
Read more about connecting to the services from outside the cluster in the
related documentation.
Metrics
We introduced an optional microservice that can be used to collect metrics
about the system, like OS, CPU, memory, hostname and average load. These metrics
are then sent to our server for analysis, and are completely anonymous. This
microservice is enabled by default, but can be disabled by setting the
_000_commonConfig.metrics.enabled value to false in the values.yaml file.
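For example, disabling it in a values override file would look like this:

```yaml
# Disable the anonymous metrics microservice.
_000_commonConfig:
  metrics:
    enabled: false
```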