The United Manufacturing Hub is an Open-Source Helm Chart for Kubernetes, which combines state-of-the-art IT / OT tools & technologies and brings them into the hands of the engineer.
Bringing the world's best IT and OT tools into the hands of the engineer
Why start from scratch when you can leverage a proven open-source blueprint? Kafka, MQTT, Node-RED, TimescaleDB, and Grafana at the press of a button - tailored for manufacturing and ready to go.
What can you do with it?
Everything That You Need To Do To Generate Value On The Shopfloor
Exchange and store data using HiveMQ for IoT devices, Apache Kafka as an enterprise message broker, and TimescaleDB as a reliable relational and time-series storage solution
Visualize data using Grafana and factoryinsight to build powerful shopfloor dashboards
Prevent Vendor Lock-In and Customize to Your Needs
The only requirement is Kubernetes, which is available in various flavors, including k3s, bare-metal k8s, and Kubernetes-as-a-service offerings like AWS EKS or Azure AKS
Swap components with other options at any time. Not a fan of Node-RED? Replace it with Kepware. Prefer a different MQTT broker? Use it!
Leverage existing systems and add only what you need.
Get Started Immediately
Download & install now, so you can show results instead of drawing nice boxes in PowerPoint
Connect with Like-Minded People
Tap into our community of experts and ask anything. No need to depend on external consultants or system integrators.
Leverage community content, from tutorials and Node-RED flows to Grafana dashboards. Although not all content is enterprise-supported, starting with a working solution saves you time and resources.
Get honest answers in a world where many companies spend millions on advertising.
How does it work?
Only requirement: a Kubernetes cluster (and we'll even help you with that!). You only need to install the United Manufacturing Hub Helm Chart on that cluster and configure it.
The United Manufacturing Hub will then generate all the required files for Kubernetes, including auto-generated secrets, various microservices like bridges between MQTT / Kafka, datamodels and configurations. From there on, Kubernetes will take care of all the container management.
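For reference, the installation itself boils down to a few Helm commands (a sketch; see the installation guides for the exact, up-to-date commands and repository URL):

```bash
# Add the UMH Helm repository and install the chart into its own namespace
helm repo add united-manufacturing-hub https://repo.umh.app
helm repo update
helm install united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub \
  --namespace united-manufacturing-hub --create-namespace
```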
Yes - the United Manufacturing Hub specifically targets people and companies that do not have the budget and/or knowledge to develop everything from scratch on their own.
With our extensive documentation, guides and knowledge sections you can learn everything that you need.
The United Manufacturing Hub abstracts these tools and technologies so that you can leverage all advantages, but still focus on what really matters: digitizing your production.
With our commercial Management Console you can manage your entire IT / OT infrastructure and work with Grafana / Node-RED without the need to ever touch or understand Kubernetes, Docker, Firewalls, Networking or similar.
Additionally, you can get support licenses providing unlimited support during development and maintenance of the system. Take a look at our website if you want to get more information on this.
Because very often these solutions do not target the actual pains of an engineer: implementation and maintenance. Companies then struggle to roll out IIoT because projects take much longer and cost far more than originally proposed.
In the United Manufacturing Hub, implementation and maintenance of the system are the first priority. We've had these pains too often ourselves and therefore incorporated and developed tools & technologies to avoid them.
For example, with sensorconnect we can retrofit production machines where it is currently impossible to extract data. With our modular architecture, we can fit the security needs of all IT departments -
from integration into a demilitarized zone to on-premise and private cloud. With Apache Kafka, we solve the pain of corrupted or missing messages when scaling out the system.
How to proceed?
1 - Get Started!
You want to get started right away? Go ahead and jump into the action!
We are glad that you want to start setting up right away! This guide is divided into 5 steps: Installation, Managing the System,
Data Acquisition & Manipulation, Data Visualization, and Moving to Production.
Contact Us!
Do you still have questions on how to get started? Message us on our Discord Server or submit
a support ticket through the question mark in the lower right corner of the website.
1.1 - 1. Installation
Installing the United Manufacturing Hub using the Management Console
The United Manufacturing Hub can be installed locally or on an edge device, depending on your needs. For simple tinkering and development, we recommend installing it locally using our Management Console.
If you prefer an open-source approach, we also provide instructions for using k3d.
Local installation using the Management Console (recommended, Windows only)
We’ve put together a comprehensive guide on how to install the UMH locally on your computer using our Management Console. The Management Console is a desktop application that allows you to set up, configure, and maintain your IT / OT infrastructure - regardless of whether it is deployed as a test instance on the same device as the Management Console, on an edge device, on an on-premise server, or in the cloud.
To access the documentation, simply click on the button below.
Please note that the Management Console is currently available for Windows only and only supports setting up test instances on the same device. If you are using Linux or macOS, please refer to the production guides for OS-specific installation tutorials.
What’s next?
Once you’ve completed the installation process, we’ll guide you through accessing the microservices using UMHLens. To learn more click here.
1.2 - 2. Managing the System
Basics of UMHLens and importing Node-RED and Grafana flows
In this chapter, we’ll guide you through connecting to your Kubernetes cluster using UMHLens. Then, we’ll walk you through importing a Node-RED and Grafana flow to help you visualize how data flows through the stack. Check out the image below for a sneak peek.
If you installed the UMH using the Management Console, you should see a cluster named “k3d-united-manufacturing-hub”
under Browse. Click on it to connect.
You can check the status of all pods by navigating to Workloads -> Pods and selecting
united-manufacturing-hub as the namespace on the top right. Depending on your system, it may take a while for all pods to start.
To access the web interfaces of the microservices, e.g. node-red or grafana, navigate to Network -> Services on
the left-hand side. Again make sure to change the namespace to united-manufacturing-hub at the top right.
Click on the appropriate service you wish to connect to, scroll down to Connection and forward the port.
2. Import flows to Node-RED
Access the Node-RED Web UI. To do this, click on the service and forward the port as shown above. When the UI opens
in the browser, add nodered to the URL as shown in the figure below to avoid the cannot get error.
Once you are in the web interface, click on the three lines in the upper right corner and select Import.
Now copy this json file and paste it into the import field. Then press Import.
To activate the imported flow, simply click on the Deploy button located at the top right of the screen.
If everything is working as expected, you should see green dots above the input and output. Once you’ve confirmed
that the data is flowing correctly, you can proceed to display it in Grafana.
3. Import flows to Grafana & view dashboard
Go into UMHLens and forward the Grafana service as you did with Node-RED. To log in, you need the Grafana secrets,
which you can find in UMHLens under Config -> Secrets -> Grafana-Secret. Click on the eye to display the username and password and enter them in Grafana.
Once you are logged in, click on Dashboards on the left and select Import. Now copy this Grafana json and paste it into Import via panel json. Then click on Load. You will then be redirected to Options where you need to select the umh-v2-datasource. Finally, click on Import.
If everything is working properly, you should now see a functional dashboard with a temperature curve.
What’s next?
Next, you can create a Node-RED flow for yourself and then learn how to create a dashboard in Grafana. Click here to proceed.
1.3 - 3. Data Acquisition and Manipulation
Formatting raw data into the UMH data model using Node-RED.
The United Manufacturing Hub has several simulators. These simulators simulate different data types/protocols such as MQTT, PackML or OPC/UA. In this chapter we will take the MQTT simulated data and show you how to format it into the UMH data model.
Creating Node-RED flow with simulated MQTT-Data
Access the Node-RED Web UI. To do this, click on the service and forward the port as shown below. Once the UI opens
in the browser, add nodered to the URL to avoid the cannot get error.
From the left-hand column, drag a mqtt-in node, a mqtt-out node, and a debug node into your flow.
Connect the mqtt-in node to the debug node.
Double-click on the mqtt-in node and add a new MQTT broker. To do so, click on Edit and use the service name of HiveMQ as the host (located in UMHLens under services -> name). Leave the port as autoconfigured and click on Add to save your changes.
To view all incoming messages from a specific topic, type ia/# under Topic and click on Done.
To apply the changes, click on Deploy located at the top right of the screen. Once the changes have been deployed, you can view the debug information by clicking on Debug-Messages located under Deploy.
In this column, you can view all incoming messages and their respective topics. The incoming topics follow this format: ia/raw/development/ioTSensors/. For the purpose of this tutorial, we will be using only the temperature topic, but feel free to choose any topic you’d like. To proceed, copy the temperature topic (ia/raw/development/ioTSensors/Temperature), open the mqtt-in node, paste the copied topic in the Topic field, click on Done, and then press Deploy again to apply the changes.
To format the incoming message, add a JSON node and a Function node to your flow. Connect the nodes in the following order: mqtt-in → JSON → Function → mqtt-out.
Open the function node and paste in the following:
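(A minimal sketch of the function node code, reconstructed from the description that follows; the topic matches the one used in the next step.)

```javascript
// Build a new payload with a timestamp and the parsed temperature
msg.payload = {
    "timestamp_ms": Date.now(),
    "temperature": parseFloat(msg.payload)
};
// Topic for the mqtt-out node, following the UMH datamodel
msg.topic = "ia/factoryinsight/Aachen/testing/processValue";
return msg;
```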
We are creating a new object with two keys, timestamp_ms and temperature, whose values are Date.now() and parseFloat(msg.payload).
The parseFloat function converts the incoming string into a floating-point number, and Date.now() creates a timestamp in milliseconds.
We also set msg.topic for the mqtt-out node, which will automatically publish to this topic.
The topic ends with the key processValue, which is used whenever a custom process value with a unique name has been prepared. The value is numerical. You can learn more about our message structure here.
Add another mqtt-in node to your flow, and set the topic to ia/factoryinsight/Aachen/testing/processValue. Make sure to select the created broker. Connect a debug node to the new mqtt-in node, and then click on Deploy to save the changes.
You should now see the converted message under Debug-messages. To clear any previous messages, click on the trash bin icon.
Congratulations, you have successfully converted the incoming message and exported it via MQTT. However, since we are currently only exporting the temperature without actually working with the data, let’s create a function that counts the critical temperature exceedances.
Drag another function-node into your flow, open it and navigate to On Start.
Paste in the following code, which will only run on start:
flow.set("count", 0);
flow.set("current", 0)
Then click on On-Message and paste in the following and click done:
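(A minimal sketch of the On Message code, reconstructed from the behavior described below; it assumes the node receives the reformatted payload from the first function node and uses the threshold of 47 mentioned further down.)

```javascript
// Track the current temperature and count exceedances of the critical threshold
const temperature = msg.payload.temperature;
const previous = flow.get("current");
flow.set("current", temperature);

// Count only the transition above the threshold, not every sample above it
if (temperature > 47 && previous <= 47) {
    const count = flow.get("count") + 1;
    flow.set("count", count);
    msg.payload = {
        "timestamp_ms": Date.now(),
        "TemperatureWarning": count
    };
    msg.topic = "ia/factoryinsight/Aachen/testing/processValue";
    return msg;
}
return null; // suppress output while the temperature is below the threshold
```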
The pasted-in code works as shown in the diagram below.
Finally, connect the function node as shown below and click on Deploy.
If the incoming value of temperature is now greater than 47, you will see another message consisting of TemperatureWarning and a timestamp in debug-messages.
What’s next?
In the next chapter we will use Grafana to display the formatted data. Click here to proceed.
1.4 - 4. Data Visualization
Building a simple Grafana dashboard
The next step is to visualize the data. In this chapter, we will be creating a Grafana dashboard that is based on the Node-RED flow we created in the previous chapter. The dashboard will display the temperature readings and temperature warnings.
Creating a Grafana dashboard
Open Grafana with UMHLens and enter the secrets, which can also be found in UMHLens.
Once you are in Grafana navigate to the left and click on New dashboard.
Click on Add a new panel.
Next, we will configure the umh-v2 datasource to retrieve the data we transformed earlier in Node-RED. Click on umh-v2-datasource.
Go to Work cell to query and select under Select new work cell: factoryinsight->Aachen->DefaultArea->DefaultProductionLine->testing.
Next, go to Value to query and select under Select new value: tags->custom->temperature.
If you now click on Refresh Dashboard at the top right-hand corner, the graph will refresh and display the temperature data.
Next, you can customize your dashboard. On the right side are several options, such as specifying a unit or setting thresholds. Just play around until it suits your needs.
When you have finished making adjustments, click Apply in the top right-hand corner to save the panel and return to the overview.
Next we will display the temperature warnings. Click Add Panel at the top right to create an additional panel.
To set up the umh-v2 data source, repeat the steps discussed earlier, but select under Value to query: TemperatureWarning instead of temperature.
To display the temperature warnings, select Stat on the right side instead of a time-series chart.
Now you can again customize your panel and when you are done click on Apply.
Congratulations, you have created your first Grafana dashboard, and it should look something like the one below.
What’s next?
The next topic is “Moving to Production”, where we will explain what it means to move the UMH to a manufacturing environment. Click here to proceed.
1.5 - 5. Moving to Production
Moving the United Manufacturing Hub to production
The next big step is to use the UMH on a virtual machine or an edge device in your production and connect your production assets. However, we understand that you might want to understand a little bit more about the United Manufacturing Hub first. You can either read more about it below, deep-dive into your local installation, or continue with the deployment in production.
Check out our community
We are quite active on GitHub and Discord. Feel free to join, introduce yourself and share your best-practices and experiences.
If you like reading more about its features and architecture, check out the following chapters:
Features to understand the capabilities of the United Manufacturing Hub and learn how to use them
Architecture to learn what is behind the United Manufacturing Hub and how everything works together
If reading is not your thing, you can always …
Play around with it locally
If you want to experiment locally, we recommend trying out the following topics.
Grafana Canvas
If you’re interested in creating visually appealing Grafana dashboards, you might want to try Grafana-Canvas. In our previous blog article, we explained why Grafana-Canvas is a valuable addition to your standard Grafana dashboard. If you’d like to learn how to build one, check out our tutorial.
OPC/UA-Simulator
If you want to get a good overview of how the OPC/UA protocol works and how to connect it to the UMH, the OPC/UA-simulator is a useful tool. Detailed instructions can be found in this guide.
PackML-Simulator
For those looking to get started with PackML, the PackML Simulator is another helpful simulator. Check out our tutorial on how to create a Node-RED flow with PackML data.
Benthos
Benthos is a highly scalable data manipulation and IT connection tool. If you’re interested in learning more about it, check out our tutorial.
Kepware
At times, you may need to connect different, older protocols. In such cases, KepwareServerEx can help bridge the gap between these older protocols and the UMH. If you’re interested in learning more, check out our tutorial.
Deployment to production
Ready to go to production? Go install it!
Follow our step-by-step tutorial on how to install the UMH on an edge device or a virtual machine using Flatcar. We’ve also written a blog article explaining why we use Flatcar as the operating system for the industrial IoT, which you can find here.
Make sure to check out our advanced production guides, which include detailed instructions on how to secure your setup and how to best integrate with your infrastructure.
2 - Features
Do you want to understand the capabilities of the United Manufacturing Hub, but do not want to get lost in technical architecture diagrams? Here you can find all the features explained on a few pages.
2.1 - Unified Namespace / Message Broker
Exchange events and messages across all your shopfloor equipment, IT / OT systems such as ERP or MES and microservices.
The Unified Namespace is an event-driven architecture that allows for seamless communication between nodes in a network. It operates on the principle that all data, regardless of whether there is an immediate consumer, should be published and made available for consumption. This means that any node in the network can work as either a producer or a consumer, depending on the needs of the system at any given time.
To use any functionalities of the United Manufacturing Hub, you need to use the Unified Namespace as well. More information can be found in our Learning Hub on the topic of Unified Namespace.
When should I use it?
An application always consists of multiple building blocks. To connect those building blocks, you can exchange data between them through databases, through service calls (such as REST), or through a message broker.
Opinion: We think for most applications in manufacturing, communication via a message broker is the best choice as it prevents spaghetti diagrams and allows for real-time data processing. For more information about this, you can check out this blog article.
In the United Manufacturing Hub, every single piece of information / “message” / “event” is sent through a message broker, which is also called the Unified Namespace.
What can I do with it?
The Unified Namespace / Message Broker in the United Manufacturing Hub provides several notable functionalities in addition to the features already mentioned:
Easy integration using MQTT: Many modern shopfloor devices can send and receive data using the MQTT protocol.
Easy integration with legacy equipment: Using tools like Node-RED, data can be easily extracted from various protocols such as Siemens S7, OPC-UA, or Modbus
Get notified in real-time via MQTT: The Unified Namespace allows you to receive real-time notifications via MQTT when new messages are published. This can be useful for applications that require near real-time processing of data, such as an AGV waiting for new commands.
Retrieve past messages from Kafka logs: By looking into the Kafka logs, you can always be aware of the last messages that have been sent to a topic. This allows you to replay certain scenarios for troubleshooting or testing purposes.
Efficiently process messages from millions of devices: The Unified Namespace is designed to handle messages from millions of devices in your factory, even over unreliable connections. By using Kafka, you can process each message efficiently with at-least-once semantics, ensuring that every message arrives one or more times.
Trace messages through the system: The Unified Namespace provides tracing capabilities, allowing you to understand where messages come from and where they go. This can be useful for debugging and troubleshooting purposes. You can use the Management Console to visualize the flow of messages through the system.
How can I use it?
Using the Unified Namespace is quite simple:
Configure your IoT devices and devices on the shopfloor to use the in-built MQTT broker of the United Manufacturing Hub by specifying the MQTT protocol, selecting the unencrypted (1883) or encrypted (8883) port depending on your configuration, and sending the messages into a topic starting with ia/raw. From there, you can start processing the messages in Node-RED by reading them in again via MQTT or Kafka, adjusting the payload or the topic to match the UMH datamodel, and sending them back to MQTT or Kafka.
If you send the messages into other topics, some features might not work correctly (see also limitations).
Recommendation: Send messages from IoT devices via MQTT and then work in Kafka only.
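For illustration, publishing a raw value from a device could look like this (a minimal sketch using the mqtt.js client; the broker address is a placeholder and the payload shape is just an example):

```javascript
// Publish a raw sensor reading into the ia/raw namespace via the unencrypted port
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://<broker-address>:1883"); // placeholder address

client.on("connect", () => {
    client.publish(
        "ia/raw/development/ioTSensors/Temperature",
        JSON.stringify({ temperature: 42.1 })
    );
});
```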
What are the limitations?
Messages are only bridged between MQTT and Kafka if they fulfill the following requirements:
payload is a valid JSON OR message is sent to the ia/raw topic
the message is sent to a topic matching the allowed topics in the UMH datamodel, regardless of what is configured in the environment variables (this will change soon)
Topics can be at most 249 characters long, as this is a Kafka limitation
Only the following characters are allowed in the topic: a-z, A-Z, _ and -
The maximum message size for the mqtt-kafka-bridge is 0.95 MB (1,000,000 bytes). If your messages are larger, we recommend using Kafka directly instead of bridging via MQTT.
Messages from MQTT to Kafka will be published under a different topic: the slashes (/) in the MQTT topic are replaced with dots (.) in the Kafka topic (for example, ia/raw/... becomes ia.raw...).
2.2 - Historian / Data Storage
Learn how the United Manufacturing Hub’s Historian feature provides reliable data storage and analysis for your manufacturing data.
The Historian / Data Storage feature in the United Manufacturing Hub provides reliable data storage and analysis for your manufacturing data. Essentially, a Historian is just another term for a data storage system, designed specifically for time-series data in manufacturing.
When should I use it?
If you want to reliably store data from your shop floor that is not designed to fulfill any legal purposes, such as GxP, then the United Manufacturing Hub’s Historian feature is ideal. Open-Source databases such as TimescaleDB are superior to traditional historians in terms of reliability, scalability and maintainability, but can be challenging to use for the OT engineer. The United Manufacturing Hub fills this usability gap, allowing OT engineers to easily ingest, process, and store data permanently in an Open-Source database.
What can I do with it?
The Historian / Data Storage feature of the United Manufacturing Hub allows you to:
Conduct basic data analysis, including automatic downsampling, gap filling, and statistical functions such as Min, Max, and Avg
Query and visualize data
Query data in an ISA95 model, from enterprise to site, area, production line, and work cell.
Visualize your data in Grafana to easily monitor and troubleshoot your production processes.
More information about the exact analytics functionalities can be found in the umh-datasource-v2 documentation. Further below are some screenshots of said datasource.
Efficiently manage data
Compress and retain data to reduce database size using various techniques.
How can I use it?
Convert your data in your Unified Namespace to processValue messages, and the Historian feature will store them automatically. You can then view the data in Grafana. An example can be found in the Getting Started guide.
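For example, a message like the following, sent to a processValue topic, is stored automatically (payload shape as used in the Getting Started guide):

```json
{
  "timestamp_ms": 1680000000000,
  "temperature": 45.2
}
```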
For more information about what exactly is behind the Historian feature, check out our architecture page.
What are the limitations?
Only data in processValue topics is saved automatically. Data in topics like ia/raw is not. Data sent to other message types in the UMH datamodel is stored slightly differently and can also be retrieved via Grafana. See also the Analytics feature.
At the moment, extensive queries can only be done in your own code by leveraging the API in factoryinsight, or processing the data in the Unified Namespace.
Apart from these limitations, the United Manufacturing Hub’s Historian feature is highly performant compared to legacy Historians.
Learn more about the United Manufacturing Hub’s architecture by visiting our architecture page.
2.3 - Shopfloor KPIs / Analytics
The Shopfloor KPI/Analytics feature of the United Manufacturing Hub provides equipment-based KPIs, configurable dashboards, and detailed analytics for production transparency. Configure OEE calculation and track root causes of low OEE using drill-downs. Easily ingest, process, and analyze data in Grafana.
The Shopfloor KPI / Analytics feature of the United Manufacturing Hub provides a configurable and plug-and-play approach to create “Shopfloor Dashboards” for production transparency consisting of various KPIs and drill-downs.
When should I use it?
If you want to create production dashboards that are highly configurable and can drill down into specific KPIs, the Shopfloor KPI / Analytics feature of the United Manufacturing Hub is an ideal choice. This feature is designed to help you quickly and easily create dashboards that provide a clear view of your shop floor performance.
What can I do with it?
The Shopfloor KPI / Analytics feature of the United Manufacturing Hub allows you to:
Query and visualize
In Grafana, you can:
Calculate the OEE (Overall Equipment Effectiveness) and view trends over time
Availability is calculated using the formula (plannedTime - stopTime) / plannedTime, where plannedTime is the duration of all machine states that do not belong in the Availability or Performance category, and stopTime is the duration of all machine states configured to be an availability stop.
Performance is calculated using the formula runningTime / (runningTime + stopTime), where runningTime is the duration of all machine states that consider the machine to be running, and stopTime is the duration of all machine states that are considered a performance loss. Note that this formula does not take into account losses caused by letting the machine run at a lower speed than possible. To approximate this, you can use the LowSpeedThresholdInPcsPerHour configuration option (see further below).
Quality is calculated using the formula good pieces / total pieces (a worked example of all three formulas follows this list)
Drill down into stop reasons (including histograms) to identify the root causes of a potentially low OEE.
List all produced and planned orders including target vs actual produced pieces, total production time, stop reasons per order, and more using job and product tables.
See machine states, shifts, and orders on timelines to get a clear view of what happened during a specific time range.
View production speed and produced pieces over time.
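As a worked example of the three formulas above (with hypothetical numbers for a single shift):

```javascript
// Hypothetical shift: 480 min planned, 60 min availability stops,
// 420 min running, 30 min performance stops, 950 good out of 1000 pieces
const availability = (480 - 60) / 480;            // 0.875
const performance  = 420 / (420 + 30);            // ~0.933
const quality      = 950 / 1000;                  // 0.95
const oee = availability * performance * quality; // ~0.776
```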
Configure
In the database, you can configure:
Stop Reasons Configuration: Configure which stop reasons belong in which category for the OEE calculation and whether they should be included in the OEE calculation at all. For instance, some companies define changeovers as availability losses, others as performance losses. You can easily move them into the correct category.
Automatic Detection and Classification: Configure whether to automatically detect/classify certain types of machine states and stops:
AutomaticallyIdentifyChangeovers: If the machine state was an unspecified machine stop (UnknownStop) but an order was recently started, the time from the start of the order until the machine state changes to running is considered a Changeover Preparation State (10010). If this happens at the end of the order, it will be a Changeover Post-processing State (10020).
MicrostopDurationInSeconds: If an unspecified stop (UnknownStop) has a duration smaller than a configurable threshold (e.g., 120 seconds), it will be considered a Microstop State (50000) instead. Some companies put small unknown stops into a different category (performance) than larger unknown stops, which usually end up in the availability loss bucket.
IgnoreMicrostopUnderThisDurationInSeconds: In some cases, the machine can stop for a couple of seconds in routine intervals, which might be unwanted as it makes analysis difficult. You can set a threshold to ignore microstops that are shorter than it (usually 1-2 seconds).
MinimumRunningTimeInSeconds: The same logic applies if the machine runs for only a couple of seconds. With this configurable threshold, short run-times can be ignored. These can happen, for example, during the changeover phase.
ThresholdForNoShiftsConsideredBreakInSeconds: If no shift was planned, an UnknownStop will always be classified as a NoShift state. Some companies move shorter NoShift states into their own “Break” category and count them toward either Availability or Performance.
LowSpeedThresholdInPcsPerHour: For a simplified performance calculation, a threshold can be set; if the machine runs slower than this, it can be considered a LowSpeedState and categorized into the performance loss bucket.
Language Configuration: The language of the machine states can be configured using the languageCode configuration option (or overwritten in Grafana).
2.4 - Data Connectivity with Node-RED
Connect devices on the shop floor using Node-RED with United Manufacturing Hub’s Unified Namespace. Simplify data integration across PLCs, Quality Stations, and MES/ERP systems with a user-friendly UI.
One feature of the United Manufacturing Hub is to connect devices on the shopfloor such as PLCs, Quality Stations or
MES / ERP systems with the Unified Namespace using Node-RED.
Node-RED has a large library of nodes, which lets you connect various protocols. It also has a user-friendly, low-code UI,
making it easy to configure the desired nodes.
When should I use it?
Sometimes it is necessary to connect a lot of different protocols (e.g., Siemens S7, OPC-UA, Serial, …) and Node-RED can be a maintainable
solution to connect all these protocols without the need for other data connectivity tools. Node-RED is widely known in
the IT/OT community, making it a familiar tool for a lot of users.
What can I do with it?
By default, there are connector nodes for common protocols:
connect to MQTT using the MQTT node
connect to HTTP using the HTTP node
connect to TCP using the TCP node
connect to UDP using the UDP node
Furthermore, you can install packages to support more connection protocols, for example node-red-contrib-s7 (Siemens S7), node-red-contrib-modbus (Modbus), or node-red-contrib-opcua (OPC UA).
You can additionally contextualize the data, using function or other nodes to manipulate the received data.
How can I use it?
Node-RED comes preinstalled as a microservice with the United Manufacturing Hub.
To access Node-RED navigate to Network -> Services on the left-hand side in UMHLens. You can download UMHLens / OpenLens here.
On the top right, change the Namespace from default to united-manufacturing-hub.
Click on united-manufacturing-hub-nodered-service, scroll down to Connection and forward the port.
Once Node-RED opens in the browser, add nodered to the URL to avoid the cannot get error.
Begin exploring right away! If you require inspiration on where to start, we provide a variety of guides to help you
become familiar with various Node-RED workflows, including how to process data and align it with the UMH datamodel:
Alternatively, visit our learning page, where you can find multiple best practices for using Node-RED.
What are the limitations?
Most packages have no enterprise support. If you encounter any errors, you need to ask the community.
However, we found that these packages are often more stable than the commercial ones out there,
as they have been battle-tested by far more users than commercial software.
Having many flows without a strict structure generally leads to confusion.
One additional limitation is the speed of development of Node-RED: after a major Node-RED or JavaScript update,
dependencies are likely to break, and the individual community-maintained nodes need to be updated.
Where to get more information?
Learn more about Node-RED and the United Manufacturing Hub by following our Get started guide.
2.5 - Retrofitting with ifm IO-link master and sensorconnect
Upgrade older machines with ifm IO-Link master and Sensorconnect for seamless data collection and integration. Retrofit your shop floor with plug-and-play sensors for valuable insights and improved efficiency.
Retrofitting older machines with sensors is sometimes the only way to capture process-relevant information.
In this article, we will focus on retrofitting with ifm IO-Link master and
Sensorconnect, a microservice of the United Manufacturing Hub, that finds and reads out ifm IO-Link masters in the
network and pushes sensor data to MQTT/Kafka for further processing.
When should I use it?
Retrofitting with ifm IO-Link masters such as the AL1350 and using Sensorconnect is ideal when dealing with older machines that are not
equipped with any connectable hardware to read relevant information out of the machine itself. By placing sensors on
the machine and connecting them to an IO-Link master, the required information can be gathered for valuable
insights. Sensorconnect helps to easily connect all sensors correctly and to properly capture the large
amount of sensor data provided.
What can I do with it?
With ifm IO-Link master and Sensorconnect, you can collect data from sensors and make it accessible for further use.
Sensorconnect offers:
Automatic detection of ifm IO-Link masters in the network.
Identification of IO-Link and alternative digital or analog sensors connected to the master using converters such as the DP2200.
Digital sensors employ a voltage range from 10 to 30 V DC, producing binary outputs of true or false. In contrast, analog sensors operate at 24 V DC, with a current range spanning from 4 to 20 mA. Using the appropriate converter, analog outputs can be transformed into digital signals.
Constant polling of data from the detected sensors.
Interpreting the received data based on a sensor database containing thousands of entries.
Sending data in JSON format to MQTT and Kafka for further data processing.
How can I use it?
To use ifm IO-Link gateways and Sensorconnect, please follow these instructions:
Ensure all IO-Link gateways are in the same network or accessible from your instance of the United Manufacturing Hub.
Retrofit the machines by connecting the desired sensors and establish a connection with ifm IO-Link gateways.
Configure the Sensorconnect IP range to either match the IP address using subnet notation /32 or, in cases involving multiple masters, to scan an entire range, for example /24 (see the sketch after these steps). To change the value, go to the Customize the United Manufacturing Hub section.
Once completed, the data should be available in your Unified Namespace.
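A values.yaml sketch for the IP-range configuration could look like this (the key names follow the Helm Chart's datasources section, but verify them against your chart version):

```yaml
_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      iprange: 192.168.10.0/24  # scan this subnet for IO-Link masters
```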
What are the limitations?
The current ifm firmware has a software bug that causes the IO-Link master to crash if it receives too many requests.
To resolve this issue, you can either request an experimental firmware, which is available exclusively from ifm, or re-connect the power to the IO-Link gateway.
2.6 - Retrofitting with USB barcode reader
Integrate USB barcode scanners with United Manufacturing Hub’s barcodereader microservice for seamless data publishing to the Unified Namespace. Ideal for inventory, order processing, and quality testing stations.
The barcodereader microservice enables the processing of barcodes from USB-linked scanner devices, subsequently publishing the acquired
data to the Unified Namespace.
When should I use it?
When you need to connect a barcode reader or any other USB device acting as a keyboard (HID). Typical cases are scanning an order
at the production machine from the accompanying order sheet, or scanning material for inventory and track-and-trace.
What can I do with it?
You can connect USB devices acting as a keyboard to the Unified Namespace. It will record all inputs and send them out once
a return / enter character has been detected. A lot of barcode scanners work that way. Additionally, you can also connect
something like a quality testing station (we once connected a Mitutoyo quality testing station).
How can I use it?
To use the barcodereader microservice, you need to configure and enable it in the Helm Chart:
Enable _000_commonConfig.datasources.barcodereader.enabled in the Helm Chart
During startup, it will show all connected USB devices. Remember yours and then change the INPUT_DEVICE_NAME and INPUT_DEVICE_PATH
Also set ASSET_ID, CUSTOMER_ID, etc. as this will then send it into the topic ia/ASSET_ID/…/barcode
Restart the pod
Scan a barcode, and it will be written into the topic configured above
Once installed, you can configure the microservice by
setting the needed environment variables. The program will continuously scan for barcodes using the device and publish
the data to the Kafka topic.
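A minimal values.yaml sketch for enabling the microservice (only the enabled flag is taken from the steps above; device name, path, and asset information are set via the environment variables mentioned there):

```yaml
_000_commonConfig:
  datasources:
    barcodereader:
      enabled: true
```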
What are the limitations?
Sometimes special characters are not parsed correctly. They need to be adjusted afterward in the Unified Namespace.
2.7 - Alerting
Monitor and maintain your manufacturing processes with real-time Grafana alerts from the United Manufacturing Hub. Get notified of potential issues and reduce downtime by proactively addressing problems.
The United Manufacturing Hub utilizes a TimescaleDB database, which is based
on PostgreSQL. Therefore, you can use the PostgreSQL plugin in Grafana to
implement and configure alerts and notifications.
Why should I use it?
Alerts based on real-time data enable proactive problem detection.
For example, you will receive a notification if the temperature of machine
oil or an electrical component of a production line exceeds limitations.
By utilizing such alerts, you can schedule maintenance, enhance efficiency,
and reduce downtime in your factories.
What can I do with it?
Grafana alerts help you keep an eye on your production and manufacturing
processes. By setting up alerts, you can quickly identify problems,
ensuring smooth operations and high-quality products.
An example of using alerts is the tracking of the temperature
of an industrial oven. If the temperature goes too high or too low, you
will get an alert, and the responsible team can take action before any damage
occurs. Alerts can be configured in many different ways, for example,
to set off an alarm if a maximum is reached once or if it exceeds a limit when
averaged over a time period. It is also possible to include several values
to create an alert, for example if a temperature surpasses a limit and/or the
concentration of a component is too low. Notifications can be sent
simultaneously across many services like Discord, Mail, Slack, Webhook,
Telegram, or Microsoft Teams. It is also possible to forward the alert with
SMS over a personal Webhook. A complete list can be found on the
Grafana page
about alerting.
How can I use it?
For a detailed tutorial on how to set up an alert, please visit our learn page
with the detailed step-by-step tutorial. Here you
can find an overview of the process.
Install the PostgreSQL plugin in Grafana:
Before you can formulate alerts, you need the PostgreSQL plugin, which is already integrated into Grafana.
Alert Rule:
When creating an alert, you first have to set the alert rule in Grafana. Here
you set a name, specify which values are used for the rule, and
when the rule is fired. Additionally, you can add labels for your rules,
to link them to the correct contact points. You have to use SQL to select the
desired values.
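Such a rule's query could look like the following sketch. It assumes the Historian stores process values in a processValueTable with timestamp, valuename, and value columns, so verify the table and column names in your deployment:

```sql
-- Latest temperature reading; the alert rule then compares it to the threshold
SELECT timestamp AS "time", value
FROM processValueTable
WHERE valuename = 'temperature'
ORDER BY timestamp DESC
LIMIT 1;
```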
Contact Point:
In a contact point you create a collection of addresses and services that
should be notified in case of an alert. This could be a Discord channel or
Slack for example. When a linked alert is triggered, everyone within the
contact point receives a message. The messages can be preconfigured and are
specific to every service or contact.
Notification Policies:
In a notification policy, you establish the connection of a contact point
with the desired alerts. This is done by adding the labels of the desired
alerts and the contact point to the policy.
Mute Timing:
In case you do not want to receive messages during a recurring time
period, you can add a mute timing to Grafana. If added to the notification
policy, no notifications will be sent out by the contact point. This could be
times without shifts, like weekends or during regular maintenance.
Silence:
You can also add silences for a specific time frame and labels, in case
you only want to mute alerts once.
An alert is only sent out once
after being triggered. For the next alert, it has to return to the normal
state, so the data no longer violates the rule.
What are the limitations?
It can be complicated to select and manipulate the desired values to create
the correct function for your application. Grafana cannot
differentiate between data points of the same source. For example, suppose you want to set a temperature threshold based on a single sensor.
If your query selects the last three values and two of them are above the threshold, Grafana will fire two alerts that it cannot tell apart.
This results in errors. To avoid this, you have to configure the rule to reduce the selected values to only one per source.
It can be complicated to create such a specific rule with this limitation, and
it requires some testing.
Another thing to keep in mind is that the alerts can only work with data from
the database. It also does not work with the machine status; these values only
exist in a raw, unprocessed form in TimescaleDB and are not processed through
an API like process values.
3 - Architecture
A detailed view of the architecture of the UMH stack.
The United Manufacturing Hub at its core is a Helm Chart for Kubernetes consisting of several microservices and open-source third-party applications, such as Node-RED and Grafana. This Helm Chart can be deployed in various environments, from edge devices and virtual machines to managed Kubernetes offerings. In large-scale deployments, you typically find a combination of all these deployment options.
In this chapter, we’ll explore the various microservices and applications that make up the United Manufacturing Hub, and how they work together to help you extract, contextualize, store, and visualize data from your shop floor.
flowchart
subgraph UMH["United Manufacturing Hub"]
style UMH fill:#47a0b5
subgraph UNS["Unified Namespace"]
style UNS fill:#f4f4f4
kafka["Apache Kafka"]
mqtt["HiveMQ"]
console["Console"]
kafka-bridge
mqtt-kafka-bridge["mqtt-kafka-bridge"]
click kafka "./microservices/core/kafka"
click mqtt "./microservices/core/mqtt-broker"
click console "./microservices/core/console"
click kafka-bridge "./microservices/core/kafka-bridge"
click mqtt-kafka-bridge "./microservices/core/mqtt-kafka-bridge"
mqtt <-- MQTT --> mqtt-kafka-bridge <-- Kafka --> kafka
kafka -- Kafka --> console
end
subgraph custom["Custom Microservices"]
custom-microservice["A user provied custom microservice in the Helm Chart"]
custom-application["A user provided custom application deployed as Kubernetes resources or as a Helm Chart"]
click custom-microservice "./microservices/core/custom"
end
subgraph Historian
style Historian fill:#f4f4f4
kafka-to-postgresql
timescaledb[("TimescaleDB")]
factoryinsight
umh-datasource
grafana["Grafana"]
redis
click kafka-to-postgresql "./microservices/core/kafka-to-postgresql"
click timescaledb "./microservices/core/database"
click factoryinsight "./microservices/core/factoryinsight"
click grafana "./microservices/core/grafana"
click redis "./microservices/core/redis"
kafka -- Kafka ---> kafka-to-postgresql
kafka-to-postgresql -- SQL --> timescaledb
timescaledb -- SQL --> factoryinsight
factoryinsight -- HTTP --> umh-datasource
umh-datasource --Plugin--> grafana
factoryinsight <--RESP--> redis
kafka-to-postgresql <--RESP--> redis
end
subgraph Connectivity
style Connectivity fill:#f4f4f4
nodered["Node-RED"]
barcodereader
sensorconnect
click nodered "./microservices/core/node-red"
click barcodereader "./microservices/community/barcodereader"
click sensorconnect "./microservices/core/sensorconnect"
nodered <-- Kafka --> kafka
barcodereader -- Kafka --> kafka
sensorconnect -- Kafka --> kafka
end
subgraph Simulators
style Simulators fill:#f4f4f4
mqtt-simulator["IoT sensors simulator"]
packml-simulator["PackML simulator"]
opcua-simulator["OPC-UA simulator"]
click mqtt-simulator "./microservices/community/mqtt-simulator"
click packml-simulator "./microservices/community/packml-simulator"
click opcua-simulator "./microservices/community/opcua-simulator"
mqtt-simulator -- MQTT --> mqtt
packml-simulator -- MQTT --> mqtt
opcua-simulator -- OPC-UA --> nodered
end
end
subgraph Datasources
plc["PLCs"]
other["Other systems on the shopfloor (MES, ERP, etc.)"]
barcode["USB barcode reader"]
ifm["IO-link sensor"]
iot["IoT devices"]
plc -- "Siemens S7, OPC-UA, Modbus, etc." --> nodered
other -- " " ----> nodered
ifm -- HTTP --> sensorconnect
barcode -- USB --> barcodereader
iot <-- MQTT --> mqtt
%% at the end for styling purposes
nodered <-- MQTT --> mqtt
end
subgraph Data sinks
umh-other["Other UMH instances"]
other-systems["Other systems (cloud analytics, cold storage, BI tools, etc.)"]
kafka <-- Kafka --> kafka-bridge
kafka-bridge <-- Kafka ----> umh-other
factoryinsight -- HTTP ----> other-systems
end
Simulators
The United Manufacturing Hub includes several simulators to generate data during development and testing.
Microservices
iotsensorsmqtt simulates data in three different MQTT topics, providing a simple way to test and visualize MQTT data streams.
packml-simulator simulates a PackML machine which sends and receives MQTT messages
opcua-simulator simulates an OPC-UA server, which can be used to test the connectivity of OPC-UA clients and to generate sample data for them
Data connectivity microservices
The United Manufacturing Hub includes microservices that extract data from the shop floor and push it into the Unified Namespace. Additionally, you can deploy your own microservices or third-party solutions directly into the Kubernetes cluster using the custom microservice feature. To learn more about third-party solutions, check out our extensive tutorials on our learning hub.
Microservices
sensorconnect automatically reads out IO-Link masters and their connected sensors, and pushes the data to the message broker.
barcodereader connects to USB barcode reader devices and pushes the data to the message broker.
Node-RED is a versatile tool with many community plugins that allows access to machine PLCs or connections with other systems on the shopfloor. It plays an important role and is explained in the next section.
Node-RED: connectivity & contextualization
Node-RED is not just a tool for connectivity, but also for stream processing and data contextualization. It is often used to extract data from the message broker, reformat the event, and push it back into a different topic, such as the UMH datamodel.
In addition to the built-in microservices, third-party contextualization solutions can be deployed similarly to data connectivity microservices. For more information on these solutions, check out our extensive tutorials on our learning hub.
Microservices
Node-RED is a programming tool that can wire together hardware devices, APIs, and online services.
Unified Namespace
At the core of the United Manufacturing Hub lies the Unified Namespace, which serves as the central source of truth for all events and messages occurring on your shop floor. The Unified Namespace is implemented using two message brokers: HiveMQ for MQTT and Apache Kafka. MQTT is used to receive data from IoT devices on the shop floor because it excels at handling a large number of unreliable connections. On the other hand, Kafka is used to enable communication between the microservices, leveraging its large-scale data processing capabilities.
The data between both brokers is bridged automatically using the mqtt-kafka-bridge microservice, allowing you to send data to MQTT and process it reliably in Kafka.
For more information on the Unified Namespace feature and how to use it, check out the detailed description of the Unified Namespace feature.
Microservices
HiveMQ is an MQTT broker used for receiving data from IoT devices on the shop floor. It excels at handling large numbers of unreliable connections.
Apache Kafka is a distributed streaming platform used for communication between microservices. It offers large-scale data processing capabilities.
mqtt-kafka-bridge is a microservice that bridges messages between MQTT and Kafka, allowing you to send data to MQTT and process them reliably in Kafka.
kafka-bridge is a microservice that bridges messages between multiple Kafka instances.
console is a web-based user interface for Kafka, which provides a graphical view of topics and messages.
Historian / data storage and visualization
The United Manufacturing Hub stores events according to our datamodel. These events are automatically stored in TimescaleDB, an open-source time-series SQL database. From there, you can access the stored data using Grafana, a visualization and analytics software. With Grafana, you can perform on-the-fly data analysis by executing simple min, max, and avg on tags, or extended KPI calculations such as OEE. These calculations can be selected in the umh-datasource microservice.
For more information on the Historian or Analytics feature and how to use it, check out the detailed description of the Historian feature or the Analytics features.
Microservices
kafka-to-postgresql stores data in selected topics from the Kafka broker in a PostgreSQL compatible database such as TimescaleDB.
TimescaleDB is an open-source time-series SQL database
factoryinsight provides REST endpoints to fetch data and calculate KPIs
umh-datasource is a Grafana plugin providing access to factoryinsight
redis is an in-memory data structure store, used as a cache.
Custom Microservices
The Helm Chart allows you to add your own microservices or Docker containers to the United Manufacturing Hub. These can be used, for example, to connect with third-party systems or to analyze the data. Additionally, you can deploy any other third-party application as long as it is available as a Helm Chart, Kubernetes resource, or Docker Compose (which can be converted to Kubernetes resources).
3.1 - Helm Chart
This page describes the Helm Chart of the United Manufacturing Hub and the
possible configuration options.
Helm is a package manager for Kubernetes that simplifies the
installation, configuration, and deployment of applications and services.
A Helm chart contains all the necessary Kubernetes manifests, configuration files, and
dependencies required to run a particular application or service. One of the
main advantages of Helm is that it allows you to define the configuration of the
installed resources in a single YAML file, called values.yaml. Helm provides great documentation on how to achieve this at https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing
The Helm Chart of the United Manufacturing Hub is composed of both custom
microservices and third-party applications. If you want a more in-depth view of
the architecture of the United Manufacturing Hub, you can read the Architecture overview page.
Helm Chart structure
Custom microservices
The Helm Chart of the United Manufacturing Hub is composed of the following
custom microservices:
barcodereader: reads the input from
a barcode reader and sends it to the MQTT broker for further processing.
customMicroservice: a
template for deploying any number of custom microservices.
factoryinput: provides REST
endpoints for MQTT messages.
factoryinsight: provides REST
endpoints to fetch data and calculate KPIs.
grafanaproxy: provides a
proxy to the backend services.
MQTT Simulator: simulates
sensors and sends the data to the MQTT broker for further processing.
kafka-bridge: connects Kafka brokers
on different Kubernetes clusters.
kafkatopostgresql:
stores the data from the Kafka broker in a PostgreSQL database.
TimescaleDB: an open-source time-series SQL
database.
Configuration options
The Helm Chart of the United Manufacturing Hub can be configured by setting
values in the values.yaml file. This file has three main sections that can be
used to configure the applications:
customers: contains the definition of the customers that will be created
during the installation of the Helm Chart. This section is optional, and it’s
used only by factoryinsight and factoryinput.
_000_commonConfig: contains the basic configuration options to customize the
United Manufacturing Hub, and it’s divided into sections that group applications
with similar scope, like the ones that compose the infrastructure or the ones
responsible for data processing. This is the section that should be mostly used
to configure the microservices.
_001_customMicroservices: used to define the configuration of
custom microservices that are not included in the Helm Chart.
After those three sections, there are the specific sections for each microservice,
which contain their advanced configuration. This is the so-called Danger Zone,
because the values in those sections should not be changed unless you absolutely
know what you are doing.
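Putting it together, the top-level layout of values.yaml looks roughly like this (section names as described above; contents omitted):

```yaml
customers: {}                 # optional; used only by factoryinsight and factoryinput
_000_commonConfig: {}         # basic configuration, grouped by application scope
_001_customMicroservices: {}  # configuration of your own custom microservices
# ...followed by the per-microservice "Danger Zone" sections
```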
When a parameter contains . (dot) characters, it means that it is a nested
parameter. For example, in the tls.factoryinput.cert parameter the cert
parameter is nested inside the tls.factoryinput section, and the factoryinput
section is nested inside the tls section.
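For example, the tls.factoryinput.cert parameter corresponds to this YAML structure:

```yaml
tls:
  factoryinput:
    cert: ...
```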
Customers
The customers section contains the definition of the customers that will be
created during the installation of the Helm Chart. It’s a simple dictionary where
the key is the name of the customer, and the value is the password.
For example, the following snippet creates two customers:
```yaml
customers:
  customer1: password1
  customer2: password2
```
Common configuration options
The _000_commonConfig contains the basic configuration options to customize the
United Manufacturing Hub, and it’s divided into sections that group applications
with similar scope.
The following table lists the configuration options that can be set in the
_000_commonConfig section:
_000_commonConfig section parameters

| Parameter | Description | Type | Allowed values | Default |
| --------- | ----------- | ---- | -------------- | ------- |
| datainput | The configuration of the microservices used to input data. | | | |
Datasources
The _000_commonConfig.datasources section contains the configuration of the microservices used to acquire data, like the ones that connect to a sensor or simulate data.
The following table lists the configuration options that can be set in the
_000_commonConfig.datasources section:
datasources section parameters

| Parameter | Description | Type | Allowed values | Default |
| --------- | ----------- | ---- | -------------- | ------- |
| barcodereader | The configuration of the barcodereader microservice. | | | |
Node-RED
The _000_commonConfig.dataprocessing.nodered section contains the configuration of the nodered microservice.
The following table lists the configuration options that can be set in the
_000_commonConfig.dataprocessing.nodered section:
nodered section parameters

| Parameter | Description | Type | Allowed values | Default |
| --------- | ----------- | ---- | -------------- | ------- |
| enabled | Whether the nodered microservice is enabled. | bool | true, false | true |
| defaultFlows | Whether the default flows should be used. | bool | true, false | false |
Infrastructure
The _000_commonConfig.infrastructure section contains the configuration of the
microservices responsible for connecting all the other microservices, such as the
MQTT broker and the
Kafka broker.
The following table lists the configuration options that can be set in the
_000_commonConfig.infrastructure section:
infrastructure section parameters

| Parameter | Description | Type | Allowed values | Default |
| --------- | ----------- | ---- | -------------- | ------- |
| | The private key of the certificate for the Kafka broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.barcodereader.sslKeyPassword | The encrypted password of the SSL key for the barcodereader microservice. If empty, no password is used | string | Any | "" |
| tls.barcodereader.sslKeyPem | The private key for the SSL certificate of the barcodereader microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.barcodereader.sslCertificatePem | The private SSL certificate for the barcodereader microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordLocal | The encrypted password of the SSL key for the local mqttbridge broker. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemLocal | The private key for the SSL certificate of the local mqttbridge broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkabridge.sslCertificatePemLocal | The private SSL certificate for the local mqttbridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslCACertRemote | The CA certificate for the remote mqttbridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslCertificatePemRemote | The private SSL certificate for the remote mqttbridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordRemote | The encrypted password of the SSL key for the remote mqttbridge broker. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemRemote | The private key for the SSL certificate of the remote mqttbridge broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkadebug.sslKeyPassword | The encrypted password of the SSL key for the kafkadebug microservice. If empty, no password is used | string | Any | "" |
| tls.kafkadebug.sslKeyPem | The private key for the SSL certificate of the kafkadebug microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkadebug.sslCertificatePem | The private SSL certificate for the kafkadebug microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkainit.sslKeyPassword | The encrypted password of the SSL key for the kafkainit microservice. If empty, no password is used | string | Any | "" |
| tls.kafkainit.sslKeyPem | The private key for the SSL certificate of the kafkainit microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkainit.sslCertificatePem | The private SSL certificate for the kafkainit microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkastatedetector.sslKeyPassword | The encrypted password of the SSL key for the kafkastatedetector microservice. If empty, no password is used | string | Any | "" |
| tls.kafkastatedetector.sslKeyPem | The private key for the SSL certificate of the kafkastatedetector microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkastatedetector.sslCertificatePem | The private SSL certificate for the kafkastatedetector microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkatopostgresql.sslKeyPassword | The encrypted password of the SSL key for the kafkatopostgresql microservice. If empty, no password is used | string | Any | "" |
| tls.kafkatopostgresql.sslKeyPem | The private key for the SSL certificate of the kafkatopostgresql microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkatopostgresql.sslCertificatePem | The private SSL certificate for the kafkatopostgresql microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kowl.sslKeyPassword | The encrypted password of the SSL key for the kowl microservice. If empty, no password is used | string | Any | "" |
| tls.kowl.sslKeyPem | The private key for the SSL certificate of the kowl microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kowl.sslCertificatePem | The private SSL certificate for the kowl microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.mqttkafkabridge.sslKeyPassword | The encrypted password of the SSL key for the mqttkafkabridge microservice. If empty, no password is used | string | Any | "" |
| tls.mqttkafkabridge.sslKeyPem | The private key for the SSL certificate of the mqttkafkabridge microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.mqttkafkabridge.sslCertificatePem | The private SSL certificate for the mqttkafkabridge microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.nodered.sslKeyPassword | The encrypted password of the SSL key for the nodered microservice. If empty, no password is used | string | Any | "" |
| tls.nodered.sslKeyPem | The private key for the SSL certificate of the nodered microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.nodered.sslCertificatePem | The private SSL certificate for the nodered microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.sensorconnect.sslKeyPassword | The encrypted password of the SSL key for the sensorconnect microservice. If empty, no password is used | string | Any | "" |
The encrypted password of the SSL key for the sensorconnect microservice. If empty, no password is used
string
Any
""
tls.sensorconnect.sslKeyPem
The private key for the SSL certificate of the sensorconnect microservice
string
Any
—–BEGIN PRIVATE KEY—– … —–END PRIVATE KEY—–
tls.sensorconnect.sslCertificatePem
The private SSL certificate for the sensorconnect microservice
string
Any
—–BEGIN CERTIFICATE—– … —–END CERTIFICATE—–
Data storage
The _000_commonConfig.datastorage section contains the configuration of the
microservices used to store data. Specifically, it controls the following
microservices:
If you want to specifically configure one of these microservices, you can do so
in their respective sections in the Danger Zone.
The following table lists the configurable parameters of the
_000_commonConfig.datastorage section.
datastorage section parameters
Parameter
Description
Type
Allowed values
Default
enabled
Whether to enable the data storage microservices
bool
true, false
true
db_password
The password for the database. Used by all the microservices that need to connect to the database
string
Any
changeme
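For example, the following snippet (a minimal sketch based on the parameters
above) enables data storage and sets the database password; replace changeme
with a password of your own:
_000_commonConfig:
  datastorage:
    enabled: true
    db_password: changeme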
Data input
The _000_commonConfig.datainput section contains the configuration of the
microservices used to input data. Specifically, it controls the following
microservices:
If you want to specifically configure one of these microservices, you can do so
in their respective sections in the danger zone.
The following table lists the configurable parameters of the
_000_commonConfig.datainput section.
datainput section parameters
Parameter
Description
Type
Allowed values
Default
enabled
Whether to enable the data input microservices
bool
true, false
false
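As a sketch, enabling the data input microservices only requires the flag above:
_000_commonConfig:
  datainput:
    enabled: true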
MQTT Bridge
The _000_commonConfig.mqttBridge section contains the configuration of the
mqtt-bridge microservice,
responsible for bridging MQTT brokers in different Kubernetes clusters.
The following table lists the configurable parameters of the
_000_commonConfig.mqttBridge section.
mqttBridge section parameters
Parameter
Description
Type
Allowed values
Default
enabled
Whether to enable the mqtt-bridge microservice
bool
true, false
false
localSubTopic
The topic that the local MQTT broker subscribes to
string
Any valid MQTT topic
ia/factoryinsight
localPubTopic
The topic that the local MQTT broker publishes to
string
Any valid MQTT topic
ia/factoryinsight
oneWay
Whether to enable one-way communication, from local to remote
bool
true, false
true
remoteSubTopic
The topic that the remote MQTT broker subscribes to
string
Any valid MQTT topic
ia
remotePubTopic
The topic that the remote MQTT broker publishes to
string
Any valid MQTT topic
ia/factoryinsight
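Putting the parameters above together, a values-file snippet could look like
the following sketch. The remoteBrokerUrl parameter is documented in the MQTT
Bridge microservice section further below, and the URL shown here is a
placeholder:
_000_commonConfig:
  mqttBridge:
    enabled: true
    localSubTopic: ia/factoryinsight
    localPubTopic: ia/factoryinsight
    remoteSubTopic: ia
    remotePubTopic: ia/factoryinsight
    remoteBrokerUrl: ssl://mqtt.example.com:8883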
Kafka Bridge
The _000_commonConfig.kafkaBridge section contains the configuration of the
kafka-bridge microservice,
responsible for bridging Kafka brokers in different Kubernetes clusters.
The following table lists the configurable parameters of the
_000_commonConfig.kafkaBridge section.
The _000_commonConfig.kafkaStateDetector section contains the configuration
of the kafka-state-detector
microservice, responsible for detecting the state of the Kafka broker.
The following table lists the configurable parameters of the
_000_commonConfig.kafkaStateDetector section.
kafkastatedetector section parameters
Parameter
Description
Type
Allowed values
Default
enabled
Whether to enable the kafka-state-detector microservice
bool
true, false
false
Debug
The _000_commonConfig.debug section contains the debug configuration for all
the microservices. These values should not be enabled in production.
The following table lists the configurable parameters of the
_000_commonConfig.debug section.
debug section parameters
Parameter
Description
Type
Allowed values
Default
enableFGTrace
Whether to enable the foreground trace
bool
true, false
false
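As a sketch, the corresponding values-file entry; keep it disabled in
production:
_000_commonConfig:
  debug:
    enableFGTrace: false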
Tulip Connector
The _000_commonConfig.tulipconnector section contains the configuration of
the tulip-connector
microservice, responsible for connecting a Tulip instance with the United
Manufacturing Hub.
The following table lists the configurable parameters of the
_000_commonConfig.tulipconnector section.
tulipconnector section parameters
Parameter
Description
Type
Allowed values
Default
enabled
Whether to enable the tulip-connector microservice
bool
true, false
false
domain
The domain name pointing to your cluster
string
Any valid domain name
tulip-connector.changme.com
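For example (a minimal sketch; the domain is a placeholder you must replace
with one pointing to your cluster):
_000_commonConfig:
  tulipconnector:
    enabled: true
    domain: tulip-connector.example.com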
Custom microservices configuration
The _001_customConfig section contains a list of custom microservices
definitions. It can be used to deploy any application of your choice, which can
be configured using the following parameters:
Custom microservices configuration parameters
Parameter
Description
Type
Allowed values
Default
name
The name of the microservice
string
Any
example
image
The image and tag of the microservice
string
Any
hello-world:latest
enabled
Whether to enable the microservice
bool
true, false
false
imagePullPolicy
The image pull policy of the microservice
string
Always, IfNotPresent, Never
Always
env
The list of environment variables to set for the microservice
object
Any
[{name: LOGGING_LEVEL, value: PRODUCTION}]
port
The internal port of the microservice to target
int
Any
80
externalPort
The host port to which expose the internal port
int
Any
8080
probePort
The port to use for the liveness and startup probes
int
Any
9091
startupProbe
The interval in seconds for the startup probe
int
Any
200
livenessProbe
The interval in seconds for the liveness probe
int
Any
500
statefulEnabled
Create a PersistentVolumeClaim for the microservice and mount it in /data
bool
true, false
false
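Assembling the parameters above, a single custom microservice definition might
look like the following sketch (the exact nesting under _001_customConfig may
differ between chart versions, so treat this as illustrative):
_001_customConfig:
  - name: example
    image: hello-world:latest
    enabled: true
    imagePullPolicy: IfNotPresent
    env:
      - name: LOGGING_LEVEL
        value: PRODUCTION
    port: 80
    externalPort: 8080
    probePort: 9091
    startupProbe: 200
    livenessProbe: 500
    statefulEnabled: false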
Danger zone
The next sections contain a more advanced configuration of the microservices.
Usually, changing the values of the previous sections is enough to run the
United Manufacturing Hub. However, you may need to adjust some of the values
below if you want to change the default behavior of the microservices.
Everything below this point should not be changed, unless you know what you are doing.
Grafana
initChownData.enabled
Whether to enable the initChownData job, to reset data ownership at startup
bool
true, false
true
persistence.enabled
Whether to enable persistence
bool
true, false
true
persistence.size
The size of the persistent volume
string
Any
5Gi
podDisruptionBudget.minAvailable
The minimum number of available pods
int
Any
1
service.port
The port of the Service
int
Any
8080
service.type
The type of Service to expose
string
ClusterIP, LoadBalancer
LoadBalancer
serviceAccount.create
Whether to create a ServiceAccount
bool
true, false
false
testFramework.enabled
Whether to enable the test framework
bool
true, false
false
datasources
The datasources section contains the configuration of the datasources
provisioning. See the
Grafana documentation
for more information.
datasources.yaml:
  apiVersion: 1
  datasources:
    - name: umh-v2-datasource
      # <string, required> datasource type. Required
      type: umh-v2-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerID: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        baseURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: false
      editable: false
    # <string, required> name of the datasource. Required
    - name: umh-datasource
      # <string, required> datasource type. Required
      type: umh-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerId: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        serverURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: true
      editable: false
envValueFrom
The envValueFrom section contains the configuration of the environment
variables to add to the Pod, from a secret or a configmap.
grafana envValueFrom section parameters
Parameter
Description
Value from
Name
Key
FACTORYINSIGHT_APIKEY
The API key to use to authenticate to the Factoryinsight API
secretKeyRef
factoryinsight-secret
apiKey
FACTORYINSIGHT_BASEURL
The base URL of the Factoryinsight API
secretKeyRef
factoryinsight-secret
baseURL
FACTORYINSIGHT_CUSTOMERID
The customer ID to use to authenticate to the Factoryinsight API
secretKeyRef
factoryinsight-secret
customerID
FACTORYINSIGHT_PASSWORD
The password to use to authenticate to the Factoryinsight API
secretKeyRef
factoryinsight-secret
password
env
The env section contains the configuration of the environment variables to add
to the Pod.
grafana env section parameters
Parameter
Description
Type
Allowed values
Default
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS
List of plugin identifiers to allow loading even if they lack a valid signature
The extraInitContainers section contains the configuration of the extra init
containers to add to the Pod.
The init-plugins container is used to install the default plugins shipped with
the UMH version of Grafana without the need to have an internet connection.
See the documentation
for a list of the plugins.
The initContainer section contains the configuration for the init containers.
By default, the hivemqextensioninit container is used to initialize the HiveMQ
extensions.
This section gives an overview of the microservices that can be found in the
United Manufacturing Hub.
There are several microservices that are part of the United Manufacturing Hub.
Some of them compose the core of the platform, and are mainly developed by the
UMH team, with the addition of some third-party software. Others are maintained
by the community, and are used to extend the functionality of the platform.
3.2.1 - Core
This section contains the overview of the Core components of the United
Manufacturing Hub.
The microservices in this section are part of the Core of the United Manufacturing
Hub. They are mainly developed by the UMH team, with the addition of some
third-party software. They are used to provide the core functionality of the
platform.
3.2.1.1 - Cache
The technical documentation of the redis microservice,
which is used as a cache for the other microservices.
The cache in the United Manufacturing Hub is Redis, a
key-value store that is used as a cache for the other microservices.
How it works
Recently used data is stored in the cache to reduce the load on the database.
All the microservices that need to access the database will first check if the
data is available in the cache. If it is, it will be used, otherwise the
microservice will query the database and store the result in the cache.
By default, Redis is configured to run in standalone mode, which means that it
will only have one master node.
You shouldn’t need to configure the cache manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the redis section of the Helm
chart values file.
You can consult the Bitnami Redis chart
for more information about the available configuration options.
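For instance, the following sketch pins Redis to standalone mode using the
Bitnami chart's standard architecture value (verify it against the linked
chart reference before relying on it):
redis:
  architecture: standalone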
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
ALLOW_EMPTY_PASSWORD
Allow empty password
bool
true, false
false
BITNAMI_DEBUG
Specify if debug values should be set
bool
true, false
false
REDIS_PASSWORD
Redis password
string
Any
Random UUID
REDIS_PORT
Redis port number
int
Any
6379
REDIS_REPLICATION_MODE
Redis replication mode
string
master, slave
master
REDIS_TLS_ENABLED
Enable TLS
bool
true, false
false
3.2.1.2 - Database
The technical documentation of the database microservice,
which stores the data of the application.
The database microservice is the central component of the United Manufacturing
Hub and is based on TimescaleDB, an open-source relational database built for
handling time-series data. TimescaleDB is designed to provide scalable and
efficient storage, processing, and analysis of time-series data.
You can find more information on the datamodel of the database in the
Data Model section, and read
about the choice to use TimescaleDB in the
blog article.
How it works
When deployed, the database microservice will create two databases, with the
related usernames and passwords:
grafana: This database is used by Grafana to store the dashboards and
other data.
factoryinsight: This database is the main database of the United Manufacturing
Hub. It contains all the data that is collected by the microservices.
There is only one parameter that usually needs to be changed: the password used
to connect to the database. To do so, set the value of the db_password key in
the _000_commonConfig.datastorage
section of the Helm chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
BOOTSTRAP_FROM_BACKUP
Whether to bootstrap the database from a backup or not.
int
0, 1
0
PATRONI_KUBERNETES_LABELS
The labels to use to find the pods of the StatefulSet.
PATRONI_KUBERNETES_NAMESPACE
The namespace in which the StatefulSet is deployed.
string
Any
united-manufacturing-hub
PATRONI_KUBERNETES_POD_IP
The IP address of the pod.
string
Any
Random IP
PATRONI_KUBERNETES_PORTS
The ports to use to connect to the pods.
string
Any
[{"name": "postgresql", "port": 5432}]
PATRONI_NAME
The name of the pod.
string
Any
united-manufacturing-hub-timescaledb-0
PATRONI_POSTGRESQL_CONNECT_ADDRESS
The address to use to connect to the database.
string
Any
$(PATRONI_KUBERNETES_POD_IP):5432
PATRONI_POSTGRESQL_DATA_DIR
The directory where the database data is stored.
string
Any
/var/lib/postgresql/data
PATRONI_REPLICATION_PASSWORD
The password to use to connect to the database as a replica.
string
Any
Random 16 characters
PATRONI_REPLICATION_USERNAME
The username to use to connect to the database as a replica.
string
Any
standby
PATRONI_RESTAPI_CONNECT_ADDRESS
The address to use to connect to the REST API.
string
Any
$(PATRONI_KUBERNETES_POD_IP):8008
PATRONI_SCOPE
The name of the cluster.
string
Any
united-manufacturing-hub
PATRONI_SUPERUSER_PASSWORD
The password to use to connect to the database as the superuser.
string
Any
Random 16 characters
PATRONI_admin_OPTIONS
The options to use for the admin user.
string
Comma separated list of options
createrole,createdb
PATRONI_admin_PASSWORD
The password to use to connect to the database as the admin user.
string
Any
Random 16 characters
PGBACKREST_CONFIG
The path to the configuration file for Postgres BackRest.
string
Any
/etc/pgbackrest/pgbackrest.conf
PGDATA
The directory where the database data is stored.
string
Any
$(PATRONI_POSTGRESQL_DATA_DIR)
PGHOST
The directory of the running database
string
Any
/var/run/postgresql
3.2.1.3 - Factoryinsight
The technical documentation of the Factoryinsight microservice, which exposes
a set of APIs to access the data from the database.
Factoryinsight is a microservice that provides a set of REST APIs to access the
data from the database. It is particularly useful to calculate the Key
Performance Indicators (KPIs) of the factories.
How it works
Factoryinsight exposes REST APIs to access the data from the database or calculate
the KPIs. By default, it’s only accessible from the internal network of the
cluster, but it can be configured to be
accessible from the external network.
The APIs require authentication, which can be either Basic Auth or a Bearer
token. Both of these can be found in the Secret factoryinsight-secret.
You shouldn’t need to configure Factoryinsight manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the factoryinsight section of the Helm
chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
CUSTOMER_NAME_{NUMBER}
Specifies a user for the REST API. Multiple users can be set
string
Any
""
CUSTOMER_PASSWORD_{NUMBER}
Specifies the password of the user for the REST API
string
Any
""
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
DRY_RUN
If enabled, data won't be stored in the database
bool
true, false
false
FACTORYINSIGHT_PASSWORD
Specifies the password for the admin user for the REST API
string
Any
Random UUID
FACTORYINSIGHT_USER
Specifies the admin user for the REST API
string
Any
factoryinsight
INSECURE_NO_AUTH
If enabled, no authentication is required for the REST API. Not recommended for production
bool
true, false
false
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers
string
PRODUCTION, DEVELOPMENT
PRODUCTION
MICROSERVICE_NAME
Name of the microservice. Used for tracing
string
Any
united-manufacturing-hub-factoryinsight
POSTGRES_DATABASE
Specifies the database name to use
string
Any
factoryinsight
POSTGRES_HOST
Specifies the database DNS name or IP address
string
Any
united-manufacturing-hub
POSTGRES_PASSWORD
Specifies the database password to use
string
Any
changeme
POSTGRES_PORT
Specifies the database port
int
Valid port number
5432
POSTGRES_USER
Specifies the database user to use
string
Any
factoryinsight
REDIS_PASSWORD
Password to access the redis sentinel
string
Any
Random UUID
REDIS_URI
The URI of the Redis instance
string
Any
united-manufacturing-hub-redis-headless:6379
SERIAL_NUMBER
Serial number of the cluster. Used for tracing
string
Any
default
VERSION
The version of the API used. Each version also enables all the previous ones
int
Any
2
3.2.1.4 - Grafana
The technical documentation of the grafana microservice,
which is a web application that provides visualization and analytics capabilities.
The grafana microservice is a web application that provides visualization and
analytics capabilities. Grafana allows you to query, visualize, alert on and
understand your metrics no matter where they are stored.
It has a rich ecosystem of plugins that allow you to extend its functionality
beyond the core features.
How it works
Grafana is a web application that can be accessed through a web browser. It
lets you create dashboards that can be used to visualize data from the database.
Thanks to some custom datasource plugins,
Grafana can use the various APIs of the United Manufacturing Hub to query the
database and display useful information.
Kubernetes resources
Deployment: united-manufacturing-hub-grafana
Service:
External LoadBalancer: united-manufacturing-hub-grafana at
port 8080
3.2.1.5 - Kafka Bridge
The technical documentation of the kafka-bridge microservice,
which acts as a communication bridge between two Kafka brokers.
Kafka-bridge is a microservice that connects two Kafka brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice has two ways of operation:
High Integrity: This mode is used for topics that are critical for the
user. It is guaranteed that no messages are lost. This is achieved by
committing the message only after it has been successfully inserted into the
database. Usually all the topics are forwarded in this mode, except for
processValue, processValueString and raw messages.
High Throughput: This mode is used for topics that are not critical for
the user. They are forwarded as fast as possible, but it is possible that
messages are lost, for example if the database struggles to keep up. Usually
only the processValue, processValueString and raw messages are forwarded in
this mode.
Kubernetes resources
Deployment: united-manufacturing-hub-kafkabridge
Secret:
Local broker: united-manufacturing-hub-kafkabridge-secrets-local
You can configure the kafka-bridge microservice by setting the following values
in the _000_commonConfig.kafkaBridge
section of the Helm chart values file.
The topic map is a list of objects, each object represents a topic (or a set of
topics) that should be forwarded. The following JSON schema describes the
structure of the topic map:
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "array",
"title": "Kafka Topic Map",
"description": "This schema validates valid Kafka topic maps.",
"default": [],
"additionalItems": true,
"items": {
"$id": "#/items",
"anyOf": [
{
"$id": "#/items/anyOf/0",
"type": "object",
"title": "Unidirectional Kafka Topic Map with send direction",
"description": "This schema validates entries, that are unidirectional and have a send direction.",
"default": {},
"examples": [
{
"name": "HighIntegrity",
"topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional": false,
"send_direction": "to_remote" }
],
"required": [
"name",
"topic",
"bidirectional",
"send_direction" ],
"properties": {
"name": {
"$id": "#/items/anyOf/0/properties/name",
"type": "string",
"title": "Entry Name",
"description": "Name of the map entry, only used for logging & tracing.",
"default": "",
"examples": [
"HighIntegrity" ]
},
"topic": {
"$id": "#/items/anyOf/0/properties/topic",
"type": "string",
"title": "The topic to listen on",
"description": "The topic to listen on, this can be a regular expression.",
"default": "",
"examples": [
"^ia\\..+\\..+\\..+\\.(?!processValue).+$" ]
},
"bidirectional": {
"$id": "#/items/anyOf/0/properties/bidirectional",
"type": "boolean",
"title": "Is the transfer bidirectional?",
"description": "When set to true, the bridge will consume and produce from both brokers",
"default": false,
"examples": [
false ]
},
"send_direction": {
"$id": "#/items/anyOf/0/properties/send_direction",
"type": "string",
"title": "Send direction",
"description": "Can be either 'to_remote' or 'to_local'",
"default": "",
"examples": [
"to_remote",
"to_local" ]
}
},
"additionalProperties": true },
{
"$id": "#/items/anyOf/1",
"type": "object",
"title": "Bi-directional Kafka Topic Map with send direction",
"description": "This schema validates entries, that are bi-directional.",
"default": {},
"examples": [
{
"name": "HighIntegrity",
"topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional": true }
],
"required": [
"name",
"topic",
"bidirectional" ],
"properties": {
"name": {
"$id": "#/items/anyOf/1/properties/name",
"type": "string",
"title": "Entry Name",
"description": "Name of the map entry, only used for logging & tracing.",
"default": "",
"examples": [
"HighIntegrity" ]
},
"topic": {
"$id": "#/items/anyOf/1/properties/topic",
"type": "string",
"title": "The topic to listen on",
"description": "The topic to listen on, this can be a regular expression.",
"default": "",
"examples": [
"^ia\\..+\\..+\\..+\\.(?!processValue).+$" ]
},
"bidirectional": {
"$id": "#/items/anyOf/1/properties/bidirectional",
"type": "boolean",
"title": "Is the transfer bidirectional?",
"description": "When set to true, the bridge will consume and produce from both brokers",
"default": false,
"examples": [
true ]
}
},
"additionalProperties": true }
]
},
"examples": [
{
"name":"HighIntegrity",
"topic":"^ia\\..+\\..+\\..+\\.(?!processValue).+$",
"bidirectional":true },
{
"name":"HighThroughput",
"topic":"^ia\\..+\\..+\\..+\\.(processValue).*$",
"bidirectional":false,
"send_direction":"to_remote" }
]
}
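As a usage sketch, the example entries above could be embedded in the Helm
values like this; the topicmap key name under _000_commonConfig.kafkaBridge is
an assumption, so verify it against your chart version:
_000_commonConfig:
  kafkaBridge:
    enabled: true
    topicmap:
      - name: HighIntegrity
        topic: ^ia\..+\..+\..+\.(?!processValue).+$
        bidirectional: true
      - name: HighThroughput
        topic: ^ia\..+\..+\..+\.(processValue).*$
        bidirectional: false
        send_direction: to_remote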
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Do not enable in production
string
true, false
false
KAFKA_GROUP_ID_SUFFIX
Identifier appended to the kafka group ID, usually a serial number
string
Any
default
KAFKA_SSL_KEY_PASSWORD_LOCAL
Password for the SSL key of the local broker
string
Any
""
KAFKA_SSL_KEY_PASSWORD_REMOTE
Password for the SSL key of the remote broker
string
Any
""
KAFKA_TOPIC_MAP
A JSON map of the Kafka topics that should be forwarded
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers.
string
PRODUCTION, DEVELOPMENT
PRODUCTION
MICROSERVICE_NAME
Name of the microservice (used for tracing)
string
Any
united-manufacturing-hub-kafka-bridge
REMOTE_KAFKA_BOOTSTRAP_SERVER
URL of the remote kafka broker
string
Any valid URL
""
SERIAL_NUMBER
Serial number of the cluster (used for tracing)
string
Any
default
3.2.1.6 - Kafka Broker
The technical documentation of the kafka-broker microservice,
which handles the communication between the microservices.
The Kafka broker in the United Manufacturing Hub is RedPanda,
a Kafka-compatible event streaming platform. It’s used to store and process
messages, in order to stream real-time data between the microservices.
How it works
RedPanda is a distributed system that is made up of a cluster of brokers,
designed for maximum performance and reliability. It does not depend on external
systems like ZooKeeper, as it’s shipped as a single binary.
External NodePort: united-manufacturing-hub-kafka-external at
port 9094 for the Kafka API listener, port 9644 for the Admin API listener,
port 8083 for the HTTP Proxy listener, and port 8081 for the Schema Registry
listener.
You shouldn’t need to configure the Kafka broker manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the redpanda
section of the Helm chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
HOST_IP
The IP address of the host machine.
string
Any
Random IP
POD_IP
The IP address of the pod.
string
Any
Random IP
SERVICE_NAME
The name of the service.
string
Any
united-manufacturing-hub-kafka
3.2.1.7 - Kafka Console
The technical documentation of the kafka-console microservice,
which provides a GUI to interact with the Kafka broker.
Kafka-console uses Redpanda Console
to help you manage and debug your Kafka workloads effortlessly.
With it, you can explore your Kafka topics, view messages, list the active
consumers, and more.
How it works
You can access the Kafka console via its Service.
It’s automatically connected to the Kafka broker, so you can start using it
right away.
You can view the Kafka broker configuration in the Broker tab, and explore the
topics in the Topics tab.
Kubernetes resources
Deployment: united-manufacturing-hub-console
Service:
External LoadBalancer: united-manufacturing-hub-console at
port 8090
ConfigMap: united-manufacturing-hub-console
Secret: united-manufacturing-hub-console
Configuration
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
LOGIN_JWTSECRET
The secret used to authenticate the communication to the backend.
string
Any
Random string
3.2.1.8 - Kafka to Postgresql
The technical documentation of the kafka-to-postgresql microservice,
which consumes messages from a Kafka broker and writes them in a PostgreSQL database.
Kafka-to-postgresql is a microservice responsible for consuming kafka messages
and inserting the payload into a Postgresql database. Take a look at the
Datamodel to see how the data is structured.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.
How it works
By default, kafka-to-postgresql sets up two Kafka consumers, one for the
High Integrity topics and one for the
High Throughput topics.
The graphic below shows the program flow of the microservice.
High integrity
The High integrity topics are forwarded to the database in a synchronous way.
This means that the microservice will wait for the database to respond with a
non error message before committing the message to the Kafka broker.
This way, the message is guaranteed to be inserted into the database, even though
it might take a while.
Most of the topics are forwarded in this mode.
The picture below shows the program flow of the high integrity mode.
High throughput
The High throughput topics are forwarded to the database in an asynchronous way.
This means that the microservice will not wait for the database to respond with
a non error message before committing the message to the Kafka broker.
This way, the message is not guaranteed to be inserted into the database, but
the microservice will try to insert the message into the database as soon as
possible. This mode is used for the topics that are expected to have a high
throughput.
You shouldn’t need to configure kafka-to-postgresql manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the kafkatopostgresql section of the Helm
chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
DRY_RUN
If set to true, the microservice will not write to the database
bool
true, false
false
KAFKA_BOOTSTRAP_SERVER
URL of the Kafka broker used, port is required
string
Any
united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD
Key password to decode the SSL private key
string
Any
""
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers
string
PRODUCTION, DEVELOPMENT
PRODUCTION
MEMORY_REQUEST
Memory request for the message cache
string
Any
50Mi
MICROSERVICE_NAME
Name of the microservice (used for tracing)
string
Any
united-manufacturing-hub-kafkatopostgresql
POSTGRES_DATABASE
The name of the PostgreSQL database
string
Any
factoryinsight
POSTGRES_HOST
Hostname of the PostgreSQL database
string
Any
united-manufacturing-hub
POSTGRES_PASSWORD
The password to use for PostgreSQL connections
string
Any
changeme
POSTGRES_SSLMODE
The SSL mode to use for the PostgreSQL connection
string
Any
require
POSTGRES_USER
The username to use for PostgreSQL connections
string
Any
factoryinsight
3.2.1.9 - MQTT Bridge
The technical documentation of the mqtt-bridge microservice,
which acts as a communication bridge between two MQTT brokers.
MQTT-bridge is a microservice that connects two MQTT brokers and forwards
messages between them. It is used to connect the local broker of the edge computer
with the remote broker on the server.
How it works
This microservice subscribes to topics on the local broker and publishes the
messages to the remote broker, while also subscribing to topics on the remote
broker and publishing the messages to the local broker.
You can configure the URL of the remote MQTT broker that MQTT-bridge should
connect to by setting the value of the remoteBrokerUrl parameter in the
_000_commonConfig.mqttBridge
section of the Helm chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
BRIDGE_ONE_WAY
Whether to enable one-way communication, from local to remote
bool
true, false
true
INSECURE_SKIP_VERIFY_LOCAL
Skip TLS certificate verification for the local broker
bool
true, false
true
INSECURE_SKIP_VERIFY_REMOTE
Skip TLS certificate verification for the remote broker
bool
true, false
true
LOCAL_BROKER_SSL_ENABLED
Whether to enable SSL for the local MQTT broker
bool
true, false
true
LOCAL_BROKER_URL
URL for the local MQTT broker
string
Any
ssl://united-manufacturing-hub-mqtt:8883
LOCAL_CERTIFICATE_NAME
Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption
string
USE_TLS, NO_CERT
USE_TLS
REMOTE_PUB_TOPIC
Remote MQTT topic to publish to
string
Any
ia/factoryinsight
REMOTE_SUB_TOPIC
Remote MQTT topic to subscribe to
string
Any
ia
3.2.1.10 - MQTT Broker
The technical documentation of the mqtt-broker microservice,
which forwards MQTT messages between the other microservices.
The MQTT broker in the United Manufacturing Hub is HiveMQ
and is customized to fit the needs of the stack. It’s a core component of
the stack and is used to communicate between the different microservices.
How it works
The MQTT broker is responsible for receiving MQTT messages from the
different microservices and forwarding them to the
MQTT Kafka bridge.
Kubernetes resources
StatefulSet: united-manufacturing-hub-hivemqce
Service:
Internal ClusterIP:
HiveMQ local: united-manufacturing-hub-hivemq-local-service at
port 1883 (MQTT) and 8883 (MQTT over TLS)
VerneMQ (for backwards compatibility): united-manufacturing-hub-vernemq at
port 1883 (MQTT) and 8883 (MQTT over TLS)
VerneMQ local (for backwards compatibility): united-manufacturing-hub-vernemq-local-service at
port 1883 (MQTT) and 8883 (MQTT over TLS)
External LoadBalancer: united-manufacturing-hub-mqtt at
port 1883 (MQTT) and 8883 (MQTT over TLS)
If you want to add more extensions, or to change the configuration, visit
the HiveMQ documentation.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
HIVEMQ_ALLOW_ALL_CLIENTS
Whether to allow all clients to connect to the broker
bool
true, false
true
3.2.1.11 - MQTT Kafka Bridge
The technical documentation of the mqtt-kafka-bridge microservice,
which transfers messages from MQTT brokers to Kafka Brokers and vice versa.
Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers
and Kafka brokers, transferring messages from one to the other and vice versa.
This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists.
This will happen automatically from version 0.9.12.
Since version 0.9.10, it allows all raw messages, even if their content is not
in a valid JSON format.
How it works
Mqtt-kafka-bridge consumes topics from a message broker, translates them to
the proper format and publishes them to the other message broker.
You shouldn’t need to configure mqtt-kafka-bridge manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the mqttkafkabridge section of the Helm
chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
INSECURE_SKIP_VERIFY
Skip TLS certificate verification
bool
true, false
true
KAFKA_BASE_TOPIC
The Kafka base topic
string
Any
ia
KAFKA_BOOTSTRAP_SERVER
URL of the Kafka broker used, port is required
string
Any
united-manufacturing-hub-kafka:9092
KAFKA_LISTEN_TOPIC
Kafka topic to subscribe to. Accept regex values
string
Any
^ia.+
KAFKA_SENDER_THREADS
Number of threads used to send messages to Kafka
int
Any
1
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers
string
PRODUCTION, DEVELOPMENT
PRODUCTION
MESSAGE_LRU_SIZE
Size of the LRU cache used to store messages. This is used to prevent duplicate messages from being sent to Kafka.
int
Any
100000
MICROSERVICE_NAME
Name of the microservice (used for tracing)
string
Any
united-manufacturing-hub-mqttkafkabridge
MQTT_BROKER_URL
The MQTT broker URL
string
Any
united-manufacturing-hub-mqtt:1883
MQTT_CERTIFICATE_NAME
Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption
string
USE_TLS, NO_CERT
USE_TLS
RAW_MESSAGE_LRU_SIZE
Size of the LRU cache used to store raw messages. This is used to prevent duplicate messages from being sent to Kafka.
int
Any
100000
SERIAL_NUMBER
Serial number of the cluster (used for tracing)
string
Any
default
3.2.1.12 - Node-RED
The technical documentation of the nodered microservice,
which wires together hardware devices, APIs and online services.
Node-RED is a programming tool for wiring together
hardware devices, APIs and online services in new and interesting ways. It
provides a browser-based editor that makes it easy to wire together flows using
the wide range of nodes in the Node-RED library.
How it works
Node-RED is a JavaScript-based tool that can be used to create flows that
interact with the other microservices in the United Manufacturing Hub or
external services.
You can enable the nodered microservice and decide if you want to use the
default flows in the _000_commonConfig.dataprocessing.nodered
section of the Helm chart values.
All the other values are set by default and you can find them in the
Danger Zone section of the Helm chart values.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
NODE_RED_ENABLE_SAFE_MODE
Enable safe mode, useful in case of broken flows
bool
true, false
false
TZ
The timezone used by Node-RED
string
Any
Europe/Berlin
3.2.1.13 - Sensorconnect
The technical documentation of the sensorconnect microservice,
which reads data from sensors and sends them to the MQTT or Kafka broker.
Sensorconnect automatically detects ifm gateways
connected to the network and reads data from the connected IO-Link
sensors.
How it works
Sensorconnect continuously scans the given IP range for gateways, making it
effectively a plug-and-play solution. Once a gateway is found, it automatically
downloads the IODD files for the connected sensors and starts reading the data at
the configured interval. Then it processes the data and sends it to the MQTT or
Kafka broker, to be consumed by other microservices.
If you want to learn more about how to use sensors in your assets, check out the
retrofitting section of the UMH Learn
website.
IODD files
The IODD files are used to describe the sensors connected to the gateway. They
contain information about the data type, the unit of measurement, the minimum and
maximum values, etc. The IODD files are downloaded automatically from
IODDFinder once a sensor is found, and are
stored in a Persistent Volume. If downloading from the internet is not possible,
for example in a closed network, you can download the IODD files manually and
store them in the folder specified by the IODD_FILE_PATH environment variable.
If no IODD file is found for a sensor, the data will not be processed, but sent
to the broker as-is.
You can configure the IP range to scan for gateways, and which message broker to
use, by setting the values of the parameters in the
_000_commonConfig.datasources.sensorconnect
section of the Helm chart values file.
The default values of the other parameters are usually good for most use cases,
but you can change them in the Danger Zone section of the Helm chart values file.
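For example (a sketch; the iprange key name is an assumption derived from the
IP_RANGE environment variable below):
_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      iprange: 192.168.10.1/24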
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
ADDITIONAL_SLEEP_TIME_PER_ACTIVE_PORT_MS
Additional sleep time between pollings for each active port
float
Any
0.0
ADDITIONAL_SLOWDOWN_MAP
JSON map of values that allows slowing down and speeding up the polling time of specific sensors
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
DEVICE_FINDER_TIMEOUT_SEC
HTTP timeout in seconds for finding new devices
int
Any
1
DEVICE_FINDER_TIME_SEC
Time interval in seconds for finding new devices
int
Any
20
IODD_FILE_PATH
Filesystem path where to store IODD files
string
Any valid Unix path
/ioddfiles
IP_RANGE
The IP range to scan for new sensors
string
Any valid IP in CIDR notation
192.168.10.1/24
KAFKA_BOOTSTRAP_SERVER
URL of the Kafka broker. Port is required
string
Any
united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD
The encrypted password of the SSL key. If empty, no password is used
string
Any
""
KAFKA_USE_SSL
Set to true to use SSL encryption for the connection to the Kafka broker
string
true, false
false
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers
string
PRODUCTION, DEVELOPMENT
PRODUCTION
LOWER_POLLING_TIME_MS
Time in milliseconds to define the lower bound of time between sensor polling
int
Any
20
MAX_SENSOR_ERROR_COUNT
Amount of errors before a sensor is temporarily disabled
int
Any
50
MICROSERVICE_NAME
Name of the microservice (used for tracing)
string
Any
united-manufacturing-hub-sensorconnect
MQTT_BROKER_URL
URL of the MQTT broker. Port is required
string
Any
united-manufacturing-hub-mqtt:1883
MQTT_CERTIFICATE_NAME
Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption
string
USE_TLS, NO_CERT
USE_TLS
MQTT_PASSWORD
Password for the MQTT broker
string
Any
INSECURE_INSECURE_INSECURE
POD_NAME
Name of the pod (used for tracing)
string
Any
united-manufacturing-hub-sensorconnect-0
POLLING_SPEED_STEP_DOWN_MS
Time in milliseconds subtracted from the polling interval after a successful polling
int
Any
1
POLLING_SPEED_STEP_UP_MS
Time in milliseconds added to the polling interval after a failed polling
int
Any
20
SENSOR_INITIAL_POLLING_TIME_MS
Amount of time in milliseconds before starting to request sensor data. Must be higher than LOWER_POLLING_TIME_MS
int
Any
100
SUB_TWENTY_MS
Set to 1 to allow LOWER_POLLING_TIME_MS of under 20 ms. This is not recommended as it might lead to the gateway becoming unresponsive until a manual reboot
int
0, 1
0
TEST
If enabled, the microservice will use a test IODD file from the filesystem to use with a mocked sensor. Only useful for development.
string
true, false
false
TRANSMITTERID
Serial number of the cluster (used for tracing)
string
Any
default
UPPER_POLLING_TIME_MS
Time in milliseconds to define the upper bound of time between sensor polling
int
Any
1000
USE_KAFKA
If enabled, uses Kafka as a message broker
string
true, false
true
USE_MQTT
If enabled, uses MQTT as a message broker
string
true, false
false
Slowdown map
The ADDITIONAL_SLOWDOWN_MAP environment variable allows you to slow down and
speed up the polling time of specific sensors. It is a JSON array of values, with
the following structure:
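The original example is not preserved here; as an illustrative sketch, each
entry pairs a sensor identifier with a slowdown offset in milliseconds, where
the serialnumber and url keys, and the convention that positive values add to
the polling interval while negative values subtract from it, are assumptions:
[
  {
    "serialnumber": "000200610104",
    "slowdown_ms": -10
  },
  {
    "url": "http://192.168.0.13",
    "slowdown_ms": 20
  }
]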
3.2.2 - Community
This section contains the overview of the community-supported components of
the United Manufacturing Hub used to extend the functionality of the platform.
The microservices in this section are not part of the Core of the United
Manufacturing Hub, either because they are still in development, are deprecated,
or are only supported by the community. They can be used to extend the
functionality of the platform.
It is not recommended to use these microservices in production as they might be
unstable or not supported anymore.
3.2.2.1 - Barcodereader
The technical documentation of the barcodereader microservice,
which reads barcodes and sends the data to the Kafka broker.
This microservice is still in development and is not considered stable for production use.
Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.
How it works
Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.
ASSET_ID
The asset ID, which is used for the topic structure
string
Any
barcodereader
CUSTOMER_ID
The customer ID, which is used for the topic structure
string
Any
raw
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
INPUT_DEVICE_NAME
The name of the USB device to use
string
Any
Datalogic ADC, Inc. Handheld Barcode Scanner
INPUT_DEVICE_PATH
The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty
string
Valid Unix device path
""
KAFKA_BOOTSTRAP_SERVER
URL of the Kafka broker used, port is required
string
Any
united-manufacturing-hub-kafka:9092
LOCATION
The location, which is used for the topic structure
string
Any
barcodereader
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers.
string
PRODUCTION, DEVELOPMENT
PRODUCTION
MICROSERVICE_NAME
Name of the microservice (used for tracing)
string
Any
united-manufacturing-hub-barcodereader
SCAN_ONLY
Prevent message broadcasting if enabled
bool
true, false
false
SERIAL_NUMBER
Serial number of the cluster (used for tracing)
string
Any
default
3.2.2.2 - Factoryinput
The technical documentation of the factoryinput microservice,
which provides REST endpoints for MQTT messages via HTTP requests.
This microservice is still in development and is not considered stable for production use
Factoryinput provides REST endpoints for MQTT messages via HTTP requests.
This microservice is typically accessed via grafana-proxy.
How it works
The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.
The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST
request method. The customer, location, asset and value are all strings. And are
used to build the MQTT topic. The body of the HTTP request is used as the MQTT
payload.
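As an illustrative sketch of such a request (the customer, location, asset and
payload values are made up; see the Datamodel section for the actual payload
schemas), a count message could be posted like this:
POST /api/v1/factoryinsight/plant1/machine23/count
{
  "timestamp_ms": 1680000000000,
  "count": 1
}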
Internal ClusterIP: united-manufacturing-hub-factoryinput-service at
port 80
Secret: factoryinput-secret
Configuration
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
BROKER_URL
URL to the broker
string
all
ssl://united-manufacturing-hub-mqtt:8883
CERTIFICATE_NAME
Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption
string
USE_TLS, NO_CERT
USE_TLS
CUSTOMER_NAME_{NUMBER}
Specifies a user for the REST API. Multiple users can be set
string
Any
""
CUSTOMER_PASSWORD_{NUMBER}
Specifies the password of the user for the REST API
string
Any
""
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
FACTORYINPUT_PASSWORD
Specifies the password for the admin user for the REST API
string
Any
Random UUID
FACTORYINPUT_USER
Specifies the admin user for the REST API
string
Any
factoryinsight
LOGGING_LEVEL
Defines which logging level is used, mostly relevant for developers
string
PRODUCTION, DEVELOPMENT
PRODUCTION
MQTT_QUEUE_HANDLER
Number of queue workers to spawn
int
0-65535
10
MQTT_PASSWORD
Password for the MQTT broker
string
Any
INSECURE_INSECURE_INSECURE
POD_NAME
Name of the pod. Used for tracing
string
Any
united-manufacturing-hub-factoryinput-0
SERIAL_NUMBER
Serial number of the cluster. Used for tracing
string
Any
default
VERSION
The version of the API used. Each version also enables all the previous ones
int
Any
1
3.2.2.3 - Grafana Proxy
The technical documentation of the grafana-proxy microservice,
which proxies request from Grafana to the backend services.
This microservice is still in development and is not considered stable for production use
How it works
The grafana-proxy microservice serves an HTTP REST endpoint located at
/api/v1/{service}/{data}. The service parameter specifies the backend
service to which the request should be proxied, like factoryinput or
factoryinsight. The data parameter specifies the API endpoint to forward to
the backend service. The body of the HTTP request is used as the payload for
the proxied request.
Kubernetes resources
Deployment: united-manufacturing-hub-grafanaproxy
Service:
External LoadBalancer: united-manufacturing-hub-grafanaproxy-service at
port 2096
Configuration
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
DEBUG_ENABLE_FGTRACE
Enables the use of the fgtrace library. Not recommended for production
string
true, false
false
3.2.2.5 - MQTT Simulator
The microservice publishes messages on the topic ia/raw/development/ioTSensors/,
creating a subtopic for each simulation. The subtopics are the names of the
simulations, which are Temperature, Humidity, and Pressure.
The values are calculated using a normal distribution with a mean and standard
deviation that can be configured.
You can change the configuration of the microservice by updating the config.json
file in the ConfigMap.
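The config.json format is not reproduced here; purely as a hypothetical sketch,
a per-simulation mean and standard deviation could be expressed like this (all
key names are assumptions):
{
  "Temperature": { "mean": 50.0, "standardDeviation": 5.0 },
  "Humidity": { "mean": 40.0, "standardDeviation": 2.0 },
  "Pressure": { "mean": 1000.0, "standardDeviation": 10.0 }
}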
3.2.2.6 - MQTT to Postgresql
The technical documentation of the mqtt-to-postgresql microservice,
which consumes messages from an MQTT broker and writes them in a PostgreSQL
database.
This microservice is deprecated and should not be used anymore in production.
Please use kafka-to-postgresql instead.
How it works
The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves
the values of the messages on the topic ia/# in the database.
3.2.2.7 - OPCUA Simulator
The technical documentation of the opcua-simulator microservice,
which simulates OPCUA devices.
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.
How it works
The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read
the full documentation on the
GitHub repository.
You can then connect to the simulated OPCUA server via Node-RED and read the
values of the simulated devices. Learn more about how to connect to the OPCUA
simulator to Node-RED in our guide.
You can change the configuration of the microservice by updating the config.json
file in the ConfigMap.
3.2.2.8 - PackML Simulator
The technical documentation of the packml-simulator microservice,
which simulates a manufacturing line using PackML over MQTT.
This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.
PackML MQTT Simulator is a virtual line that interfaces using PackML implemented
over MQTT. It implements the PackML state model and communicates
over MQTT topics as defined by environment variables. The simulator can run
with either a basic MQTT topic structure or Sparkplug B.
You shouldn’t need to configure PackML Simulator manually, as it’s configured
automatically when the cluster is deployed. However, if you need to change the
configuration, you can do it by editing the packmlmqttsimulator section of the
Helm chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
AREA
ISA-95 area name of the line
string
Any
DefaultArea
LINE
ISA-95 line name of the line
string
Any
DefaultProductionLine
MQTT_PASSWORD
Password for the MQTT broker. Leave empty if the server does not manage permissions
string
Any
INSECURE_INSECURE_INSECURE
MQTT_URL
Server URL of the MQTT server
string
Any
mqtt://united-manufacturing-hub-mqtt:1883
MQTT_USERNAME
Name for the MQTT broker. Leave empty if the server does not manage permissions
string
Any
PACKMLSIMULATOR
SITE
ISA-95 site name of the line
string
Any
testLocation
3.2.2.9 - Tulip Connector
The technical documentation of the tulip-connector microservice,
which exposes internal APIs, such as factoryinsight, to the internet.
Specifically designed to communicate with Tulip.
This microservice is still in development and is not considered stable for production use.
The tulip-connector microservice enables communication with the United
Manufacturing Hub by exposing internal APIs, like
factoryinsight, to the
internet. With this REST endpoint, users can access data stored in the UMH and
seamlessly integrate Tulip with a Unified Namespace and on-premise Historian.
Furthermore, the tulip-connector can be customized to meet specific customer
requirements, including integration with an on-premise MES system.
How it works
The tulip-connector acts as a proxy between the internet and the UMH. It
exposes an endpoint to forward requests to the UMH and returns the response.
You can enable the tulip-connector and set the domain for the ingress by editing
the values in the _000_commonConfig.tulipconnector
section of the Helm chart values file.
Environment variables
Environment variables
Variable name
Description
Type
Allowed values
Default
FACTORYINSIGHT_PASSWORD
Specifies the password for the admin user for the REST API
string
Any
Random UUID
FACTORYINSIGHT_URL
Specifies the URL of the factoryinsight microservice.
MODE
Specifies the mode that the service will run in. Change only during development
string
dev, prod
prod
3.2.3 - Grafana Plugins
This section contains the overview of the custom Grafana plugins that can be
used to access the United Manufacturing Hub.
3.2.3.1 - Umh Datasource V2
This page contains the technical documentation of the umh-datasource-v2 plugin,
which allows for easy data extraction from factoryinsight.
The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch
resources from a database and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders
for the future implementation of the new data model.
Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.
Next you can specify how to transform the data, depending on what value you selected.
For example, all the custom tags have aggregation options available. If you query a processValue, the following transformations are offered:
Time bucket: lets you group data in a time bucket
Aggregates: common statistical aggregations (maximum, minimum, sum or count)
Handling missing values: lets you choose how missing data should be handled
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-v2-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
3.2.3.2 - Umh Datasource
This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.
This plugin is no longer maintained. Use the new umh-datasource-v2 plugin instead for data extraction from factoryinsight.
The umh datasource is a Grafana 8.X compatible plugin, that allows you to fetch resources from a database
and build queries for your dashboard.
How it works
When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources
from the database. The loading time may depend on your internet speed.
Select your query parameters Location, Asset and Value to build your query.
Configuration
In Grafana, navigate to the Data sources configuration panel.
Select umh-datasource to configure it.
Configurations:
Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
API Key: authenticates the API calls to factoryinsight.
Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.
3.2.3.3 - Factoryinput Panel
This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.
This plugin is still in development and is not considered stable for production use.
Below you will find a schematic of this flow through our stack.
3.3 - Datamodel
This page describes the data model of the UMH stack - from the message payloads up to database tables.
Raw Data
If you have events that you just want to send to the message broker / Unified Namespace without the need for them to be stored, simply send them to the raw topic.
This data will not be processed by the UMH stack, but you can use it to build your own data processing pipeline.
ProcessValue Data
If you have data that does not fit in the other topics (such as your PLC tags or sensor data), you can use the processValue topic. It will be saved in the database in the processValueTable or processValueStringTable and can be queried using factoryinsight or the umh-datasource Grafana plugin.
Production Data
In a production environment, you should first declare products using addProduct.
This allows you to create an order using addOrder. Once you have created an order,
send a state message to tell the database that the machine is working (or not working) on the order.
When the machine is ordered to produce a product, send a startOrder message.
When the machine has finished producing the product, send an endOrder message.
Send count messages if the machine has produced a product, but it does not make sense to give the product its own ID. This is especially useful for bottling or any other use case with a large number of products, where not every product is traced.
Recommendation: Start with addShift and state and continue from there on
Modifying Data
If you have accidentally sent the wrong state or if you want to modify a value, you can use the modifyState message.
Unique Product Tracking
You can use uniqueProduct to tell the database that a new instance of a product has been created.
If the produced product is scrapped, you can use scrapUniqueProduct to change its state to scrapped.
3.3.1 - Messages
For each message topic you will find a short description of what the message is used for, which structure it has, and what structure the payload is expected to have.
Introduction
The United Manufacturing Hub provides a specific structure for messages/topics, each with its own unique purpose.
By adhering to this structure, the UMH will automatically calculate KPIs for you, while also making it easier to maintain
consistency in your topic structure.
3.3.1.1 - activity
activity messages are sent when a machine starts or stops producing.
This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it will be required to be converted into a state. In the future, there will be a microservice, which converts these automatically.
3.3.1.2 - addOrder
A message is sent here each time a new order is added.
Content
key
data type
description
product_id
string
current product name
order_id
string
current order name
target_units
int64
number of units to be produced
The product needs to be added before adding the order. Otherwise, this message will be discarded.
One order is always specific to that asset and can, by definition, not be used across machines. In that case, you would need to create one order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100 m batch gets split up into multiple pieces).
JSON
Examples
One order was started for 100 units of product “test”:
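For example, with an illustrative order_id:
{
  "product_id": "test",
  "order_id": "test_order",
  "target_units": 100
}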
This message can be emitted to add a child product to a parent product.
It can be sent multiple times, if a parent product is split up into multiple children or multiple parents are combined into one child. One example of this is when multiple parts are assembled into a single product.
detectedAnomaly messages are sent when an asset has stopped and the reason is identified.
This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it will be required to be converted into a state. In the future, there will be a microservice, which converts these automatically.
If you have a lot of processValues, we’d recommend not using /processValue as the topic, but appending the tag name as well, e.g., /processValue/energyConsumption. This will structure the data better for usage in MQTT Explorer or for processing only certain processValues.
For automatic data storage in kafka-to-postgresql both will work fine as long as the payload is correct.
Please be aware that the values may only be int or float; other characters are not valid, so make sure no quotation marks or anything else sneak in. Also be cautious of using the JavaScript toFixed() function, as it converts a float into a string.
Usage
A message is sent each time a process value has been prepared. The key has a unique name.
Content
key
data type
description
timestamp_ms
int64
unix timestamp of message creation
<valuename>
int64 or float64
Represents a process value, e.g. temperature
Pre 0.10.0:
As <valuename> is either of type int64 or float64, you cannot use booleans. Convert to integers as needed, e.g., true = 1, false = 0.
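For instance, a payload with an illustrative tag name and timestamp:
{
  "timestamp_ms": 1588879689394,
  "energyConsumption": 123.4
}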
A message is sent each time a process value has been prepared. The key has a unique name. This message is used when the datatype of the process value is a string instead of a number.
Content
key
data type
description
timestamp_ms
int64
unix timestamp of message creation
<valuename>
string
Represents a process value, e.g. temperature
JSON
Example
At the shown timestamp the custom process value “customer” had a readout of “miller”.
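A matching payload (timestamp illustrative):
{
  "timestamp_ms": 1588879689394,
  "customer": "miller"
}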
recommendation messages are action recommendations, which require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.
Content
key
data type
description
uid
string
UniqueID of the product
timestamp_ms
int64
unix timestamp of message creation
customer
string
the customer ID in the data structure
location
string
the location in the data structure
asset
string
the asset ID in the data structure
recommendationType
int32
Type of the recommendation
enabled
bool
-
recommendationValues
map
Map of values based on which this recommendation is created
diagnoseTextDE
string
Diagnosis of the recommendation in German
diagnoseTextEN
string
Diagnosis of the recommendation in English
recommendationTextDE
string
Recommendation in German
recommendationTextEN
string
Recommendation in English
JSON
Example
The demonstrator at the shown location has not been running for a while, so a recommendation is sent to either start the machine or specify a reason why it is not running.
{
  "UID": "43298756",
  "timestamp_ms": 15888796894,
  "customer": "united-manufacturing-hub",
  "location": "dccaachen",
  "asset": "DCCAachen-Demonstrator",
  "recommendationType": "1",
  "enabled": true,
  "recommendationValues": { "Treshold": 30, "StoppedForTime": 612685 },
  "diagnoseTextDE": "Maschine DCCAachen-Demonstrator steht seit 612685 Sekunden still (Status: 8, Schwellwert: 30)",
  "diagnoseTextEN": "Machine DCCAachen-Demonstrator is not running since 612685 seconds (status: 8, threshold: 30)",
  "recommendationTextDE": "Maschine DCCAachen-Demonstrator einschalten oder Stoppgrund auswählen.",
  "recommendationTextEN": "Start machine DCCAachen-Demonstrator or specify stop reason."
}
Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. It starts with the count that is directly before timestamp_ms. It then iterates step by step back in time, setting the existing counts to scrap until a total of scrap products have been scrapped.
Content
timestamp_ms is the unix timestamp you want to go back from.
scrap is the number of items to be considered as scrap.
You can specify a maximum of 24h to be scrapped to avoid accidents.
(NOT IMPLEMENTED YET) If count does not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, it would ignore these 2. See also #125
(NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap.
JSON
Examples
Ten items were scrapped:
{
  "timestamp_ms": 1589788888888,
  "scrap": 10
}
Schema
{
  "$schema": "http://json-schema.org/draft/2019-09/schema",
  "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
  "type": "object",
  "default": {},
  "title": "Root Schema",
  "required": [
    "timestamp_ms",
    "scrap"
  ],
  "properties": {
    "timestamp_ms": {
      "type": "integer",
      "default": 0,
      "minimum": 0,
      "title": "The unix timestamp you want to go back from",
      "examples": [1589788888888]
    },
    "scrap": {
      "type": "integer",
      "default": 0,
      "minimum": 0,
      "title": "Number of items to be considered as scrap",
      "examples": [10]
    }
  },
  "examples": [
    {
      "timestamp_ms": 1589788888888,
      "scrap": 10
    },
    {
      "timestamp_ms": 1589788888888,
      "scrap": 5
    }
  ]
}
A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here.
Content
key
data type
description
state
uint32
value of the state according to the link above
timestamp_ms
uint64
unix timestamp of message creation
JSON
Example
The asset has a state of 10000, which means it is actively producing.
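A matching payload (timestamp illustrative):
{
  "timestamp_ms": 1588879689394,
  "state": 10000
}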
A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.
There are two cases of when to send a message under the uniqueProduct topic:
The exact product doesn’t already have a UID (this is the case if it has not been produced at an asset incorporated in the digital shadow). Specify a placeholder asset = “storage” in the MQTT message for the uniqueProduct topic.
The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the “child” of the process. Products it was made out of are called the “parents”.
Content
key
data type
description
begin_timestamp_ms
int64
unix timestamp of start time
end_timestamp_ms
int64
unix timestamp of completion time
product_id
string
product ID of the currently produced product
isScrap
bool
Optional information on whether the current product is of poor quality and will be sorted out. Considered false if not specified.
uniqueProductAlternativeID
string
alternative ID of the product
JSON
Example
The processing of product “Beilinger 30x15” with the AID 216381 started and ended at the designated timestamps. It is of low quality and due to be scrapped.
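A matching payload (timestamps illustrative):
{
  "begin_timestamp_ms": 1589788888888,
  "end_timestamp_ms": 1589788893729,
  "product_id": "Beilinger 30x15",
  "isScrap": true,
  "uniqueProductAlternativeID": "216381"
}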
3.3.2 - Database
The database stores the messages in different tables.
Introduction
We are using the database TimescaleDB, which is based on PostgreSQL and supports standard relational SQL database work,
while also supporting time-series databases.
This allows for usage of regular SQL queries, while also allowing to process and store time-series data.
PostgreSQL has proven itself reliable over the last 25 years, so we are happy to use it.
If you want to learn more about database paradigms, please refer to the knowledge article about that topic.
It also includes a concise video summarizing what you need to know about different paradigms.
Our database model is designed to represent a physical manufacturing process. It keeps track of the following data:
The state of the machine
The products that are produced
The orders for the products
The workers’ shifts
Arbitrary process values (sensor data)
The producible products
Recommendations for the production
Please note that our database does not use a retention policy. This means that your database can grow quite fast if you save a lot of process values. Take a look at our guide on enabling data compression and retention in TimescaleDB to customize the database to your needs.
A good method to check your db size would be to use the following commands inside postgres shell:
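For example, standard PostgreSQL size functions can be used (the factoryinsight database name and the table name are the defaults and may differ in your setup):
SELECT pg_size_pretty(pg_database_size('factoryinsight'));
-- size of a single hypertable (TimescaleDB 2.x)
SELECT pg_size_pretty(hypertable_size('processvaluetable'));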
3.3.2.3 - countTable
CREATE TABLE IF NOT EXISTS countTable (
  timestamp TIMESTAMPTZ NOT NULL,
  asset_id SERIAL REFERENCES assetTable (id),
  count INTEGER CHECK (count > 0),
  UNIQUE (timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('countTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON countTable (asset_id, timestamp DESC);
3.3.2.5 - processValueStringTable
processValueStringTable contains process values of string type.
Usage
This table stores process values of string type.
This table has a closely related table for storing number values, processValueTable.
CREATE TABLE IF NOT EXISTS processValueStringTable (
  timestamp TIMESTAMPTZ NOT NULL,
  asset_id SERIAL REFERENCES assetTable (id),
  valueName TEXT NOT NULL,
  value TEXT NULL,
  UNIQUE (timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueStringTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (asset_id, timestamp DESC);
-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (valuename);
3.3.2.6 - processValueTable
processValueTable contains process values.
Usage
This table stores process values, for example the toner level of a printer, the flow rate of a pump, etc.
This table has a closely related table for storing string values, processValueStringTable.
CREATE TABLE IF NOT EXISTS processValueTable (
  timestamp TIMESTAMPTZ NOT NULL,
  asset_id SERIAL REFERENCES assetTable (id),
  valueName TEXT NOT NULL,
  value DOUBLE PRECISION NULL,
  UNIQUE (timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON processValueTable (asset_id, timestamp DESC);
-- creating an index to increase performance
CREATE INDEX ON processValueTable (valuename);
3.3.2.10 - stateTable
stateTable contains the states of all assets.
CREATE TABLE IF NOT EXISTS stateTable (
  timestamp TIMESTAMPTZ NOT NULL,
  asset_id SERIAL REFERENCES assetTable (id),
  state INTEGER CHECK (state >= 0),
  UNIQUE (timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('stateTable', 'timestamp');
-- creating an index to increase performance
CREATE INDEX ON stateTable (asset_id, timestamp DESC);
3.3.2.11 - uniqueProductTable
uniqueProductTable contains unique products and their IDs.
CREATE TABLE IF NOT EXISTS uniqueProductTable (
  uid TEXT NOT NULL,
  asset_id SERIAL REFERENCES assetTable (id),
  begin_timestamp_ms TIMESTAMPTZ NOT NULL,
  end_timestamp_ms TIMESTAMPTZ NOT NULL,
  product_id TEXT NOT NULL,
  is_scrap BOOLEAN NOT NULL,
  quality_class TEXT NOT NULL,
  station_id TEXT NOT NULL,
  UNIQUE (uid, asset_id, station_id),
  CHECK (begin_timestamp_ms < end_timestamp_ms)
);
-- creating an index to increase performance
CREATE INDEX ON uniqueProductTable (asset_id, uid, station_id);
3.3.3 - States
States are the core of the database model. They represent the state of the machine at a given point in time.
States Documentation Index
Introduction
This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.
State Categories
Active (10000-29999): These states represent that the asset is actively producing.
Material (60000-99999): These states represent that the asset has issues regarding materials.
Operator (140000-159999): These states represent that the asset is stopped because of operator related issues.
Planning (160000-179999): These states represent that the asset is stopped as it is planned to stop (planned idle time).
Process (100000-139999): These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
Technical (180000-229999): These states represent that the asset has a technical issue.
Unknown (30000-59999): These states represent that the asset is in an unspecified state.
Glossary
OEE: Overall Equipment Effectiveness
KPI: Key Performance Indicator
Conclusion
This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.
3.3.3.1 - Active (10000-29999)
These states represent that the asset is actively producing
10000: ProducingAtFullSpeedState
This asset is running at full speed.
Examples for ProducingAtFullSpeedState
WS_Cur_State: Operating
PackML/Tobacco: Execute
20000: ProducingAtLowerThanFullSpeedState
Asset is producing, but not at full speed.
Examples for ProducingAtLowerThanFullSpeedState
WS_Cur_Prog: StartUp
WS_Cur_Prog: RunDown
WS_Cur_State: Stopping
PackML/Tobacco: Stopping
WS_Cur_State: Aborting
PackML/Tobacco: Aborting
WS_Cur_State: Holding
WS_Cur_State: Unholding
PackML/Tobacco: Unholding
WS_Cur_State: Suspending
PackML/Tobacco: Suspending
WS_Cur_State: Unsuspending
PackML/Tobacco: Unsuspending
PackML/Tobacco: Completing
WS_Cur_Prog: Production
EUROMAP: MANUAL_RUN
EUROMAP: CONTROLLED_RUN
Currently not included:
WS_Prog_Step: all
3.3.3.2 - Unknown (30000-59999)
These states represent that the asset is in an unspecified state
30000: UnknownState
Data for that particular asset is not available (e.g. connection to the PLC is disrupted)
Examples for UnknownState
WS_Cur_Prog: Undefined
EUROMAP: Offline
40000: UnspecifiedStopState
The asset is not producing, but the reason is unknown at the time.
Examples for UnspecifiedStopState
WS_Cur_State: Clearing
PackML/Tobacco: Clearing
WS_Cur_State: Emergency Stop
WS_Cur_State: Resetting
PackML/Tobacco: Clearing
WS_Cur_State: Held
EUROMAP: Idle
Tobacco: Other
WS_Cur_State: Stopped
PackML/Tobacco: Stopped
WS_Cur_State: Starting
PackML/Tobacco: Starting
WS_Cur_State: Prepared
WS_Cur_State: Idle
PackML/Tobacco: Idle
PackML/Tobacco: Complete
EUROMAP: READY_TO_RUN
50000: MicrostopState
The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.
3.3.3.3 - Material (60000-99999)
These states represent that the asset has issues regarding materials.
60000: InletJamState
This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack-in-the-inlet condition refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.
Examples for InletJamState
WS_Cur_State: Lack
70000: OutletJamState
The machine does not perform its intended function as a result of a jam in the good flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam-in-the-discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.
Examples for OutletJamState
WS_Cur_State: Tailback
80000: CongestionBypassState
The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palleting machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.
Examples for the CongestionBypassState
WS_Cur_State: Lack/Tailback Branch Line
90000: MaterialIssueOtherState
The asset has a material issue, but it is not further specified.
Examples for MaterialIssueOtherState
WS_Mat_Ready (Information of which material is lacking)
PackML/Tobacco: Suspended
3.3.3.4 - Process (100000-139999)
These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.
100000: ChangeoverState
The asset is in a changeover process between products.
Examples for ChangeoverState
WS_Cur_Prog: Program-Changeover
Tobacco: CHANGE OVER
110000: CleaningState
The asset is currently in a cleaning process.
Examples for CleaningState
WS_Cur_Prog: Program-Cleaning
Tobacco: CLEAN
120000: EmptyingState
The asset is currently being emptied, e.g. to prevent mold in food products over long breaks such as the weekend.
Examples for EmptyingState
Tobacco: EMPTY OUT
130000: SettingUpState
This machine is currently preparing itself for production, e.g. heating up.
Examples for SettingUpState
EUROMAP: PREPARING
3.3.3.5 - Operator (140000-159999)
These states represent that the asset is stopped because of operator related issues.
140000: OperatorNotAtMachineState
The operator is not at the machine.
150000: OperatorBreakState
The operator is taking a break.
This is different from a planned shift as it could contribute to performance losses.
Examples for OperatorBreakState
WS_Cur_Prog: Program-Break
3.3.3.6 - Planning (160000-179999)
These states represent that the asset is stopped as it is planned to stop (planned idle time).
160000: NoShiftState
There is no shift planned at that asset.
170000: NoOrderState
There is no order planned at that asset.
3.3.3.7 - Technical (180000-229999)
These states represent that the asset has a technical issue.
180000: EquipmentFailureState
The asset itself is defective, e.g. a broken engine.
Examples for EquipmentFailureState
WS_Cur_State: Equipment Failure
190000: ExternalFailureState
There is an external failure, e.g. missing compressed air.
Examples for ExternalFailureState
WS_Cur_State: External Failure
200000: ExternalInterferenceState
There is an external interference, e.g. the crane to move the material is currently unavailable.
210000: PreventiveMaintenanceStop
A planned maintenance action.
Examples for PreventiveMaintenanceStop
WS_Cur_Prog: Program-Maintenance
PackML: Maintenance
EUROMAP: MAINTENANCE
Tobacco: MAINTENANCE
220000: TechnicalOtherStop
The asset has a technical issue, but it is not specified further.
Examples for TechnicalOtherStop
WS_Not_Of_Fail_Code
PackML: Held
EUROMAP: MALFUNCTION
Tobacco: MANUAL
Tobacco: SET UP
Tobacco: REMOTE SERVICE
4 - Production Guide
This section contains information about how to use the stack in a production
environment.
4.1 - Installation
This section contains guides on how to install the United Manufacturing Hub.
Learn how to install the United Manufacturing Hub using completely Free and Open
Source Software.
4.1.1 - Flatcar Installation (Bare Metal)
This page describes how to deploy the United Manufacturing Hub on Flatcar
Linux on bare metal.
Here is a step-by-step guide on how to deploy the UMH stack on
Flatcar Linux, a Linux distribution designed for
container workloads, with high security and low maintenance.
This is a good option if you want to deploy the UMH stack on edge devices or IPCs.
Before you begin
Your system must meet the following requirements before you can install the
United Manufacturing Hub:
CPU cores: 4
Memory size: 8 GB
Hard disk size: 32 GB
You need the latest version of our iPXE boot image:
You also need a computer with an SSH client (most modern operating systems
already have it) and either UMHLens
or OpenLens installed.
Additionally, this guide assumes a configuration similar to the following:
%%{ init: { 'flowchart': { 'curve': 'bumpY' } } }%%
flowchart LR
A(Internet) -. WAN .- B[Router]
subgraph Internal network
B -- LAN --- C[Edge device]
B -- LAN --- D[Your computer]
end
For optimal functionality, we recommend assigning a static IP address to your
edge device. This can be accomplished through a static lease in the DHCP server
or by setting the IP address during installation. Changing the IP address of the
edge device after installation may result in certificate issues, so we strongly
advise against doing so. By assigning a static IP address, you can ensure a more
stable and reliable connection for your edge device.
Install Flatcar Linux on the edge device
Connect the USB stick to the edge device and boot it. Each device has a
different way of booting from USB, so you need to consult the documentation
of your device.
Accept the License.
Select the correct network settings. If you are unsure, select DHCP, but
keep in mind that a static IP address is strongly recommended.
Select the correct drive to install Flatcar Linux on. If you are unsure, check
the troubleshooting section.
Check that the installation settings are correct and press Confirm to start
the installation.
Now the installation will start. You should see a green command-line prompt
for the core user soon after. Now remove the USB stick from the
device. At this point the system is still installing. After a few minutes,
depending on the speed of your network, the installation will finish and the
system will reboot. Now you should see a grey login prompt that says
flatcar-1-umh login:, as well as the IP address of the device.
Please note that the installation may take some time. This largely depends on the available resources
including network speed and system performance.
Connect to the edge device
Now you can leave the edge device and connect to it from your computer via SSH.
If you are on Windows 11, we recommend using the default Windows terminal,
which you can find by typing terminal in the Windows search bar or Start menu. Next,
connect to the edge device via SSH, using the IP address you saw on the login prompt:
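For example, with the core user and the address shown on the device:
ssh core@<ip-address-of-the-edge-device>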
If you are not on Windows 11, you can use MobaXTerm
to connect to the edge device via SSH. Open MobaXTerm and click on Session
in the top left corner. Then click on SSH and enter the IP address of the
edge device in the Remote host field. Click on Advanced SSH settings and
enter core in the Username field. Click on Save and then on Open.
The default password for the core user is umh.
Import the cluster configuration
From your SSH session, run the following command to get the cluster configuration:
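On k3s, the cluster configuration is typically stored in /etc/rancher/k3s/k3s.yaml, so a command along these lines prints it for copying into UMHLens / OpenLens:
sudo cat /etc/rancher/k3s/k3s.yaml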
From the homepage, click on Browse Clusters in Catalog. You should see
all your clusters.
Click on a cluster to connect to it.
Navigate to Helm > Releases and change the namespace from default to
united-manufacturing-hub in the upper right corner.
Select the united-manufacturing-hub Release to inspect the
release details, the installed resources, and the Helm values.
Troubleshooting
The installation stops at the green login prompt
To check the status of the installation, run the following command:
systemctl status installer
If the installation is still running, you should see something like this:
● installer.service - Flatcar Linux Installer
Loaded: loaded (/usr/lib/systemd/system/installer.service; static; vendor preset: enabled)
Active: active (running) since Wed 2021-05-12 14:00:00 UTC; 1min 30s ago
Otherwise, the installation failed. You can check the logs to see what went wrong.
I don’t know which drive to select
You can check the drive type from the manual of your device.
For SATA drives (spinning hard disk or SSD), the drive type is SDA.
For NVMe drives, the drive type is NVMe.
If you are unsure, you can boot into the edge device with any Linux distribution
and run the following command:
lsblk
The output should look similar to this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 512M 0 part /boot
└─sda2 8:2 0 223.1G 0 part /
sdb 8:16 1 31.8G 0 disk
└─sdb1 8:17 1 31.8G 0 part /mnt/usb
In this case, the drive type is SDA. Generally, the drive type is the name of
the first drive in the list, or at least the drive that doesn’t match the
size of the USB stick.
I can access the cluster but there are no resources
First completely shut down UMHLens / OpenLens (from the
system tray). Then start it again and try to access the cluster.
If that doesn’t work, access the edge device via SSH and run the following
command:
systemctl status k3s
If the output contains a status different from active (running), the cluster
is not running. Otherwise, the UMH installation failed. You can check the logs
with the following commands:
systemctl status umh-install
systemctl status helm-install
If any of the commands returns errors, it is probably easier to reinstall the
system.
What’s next
You can follow the Getting Started guide
to get familiar with the UMH stack.
If you already know your way around the United Manufacturing Hub, you can
follow the Administration guides to
configure the stack for production.
4.1.2 - Flatcar Installation (Virtual Machine)
This page describes how to deploy the United Manufacturing Hub on Flatcar
Linux in a virtual machine.
Here is a step-by-step guide on how to deploy the UMH stack on
Flatcar Linux, a Linux distribution designed for
container workloads, with high security and low maintenance, in a virtual machine.
This is a good option if you want to deploy the UMH stack on a virtual machine
to try out the installation process or to test the UMH stack.
You also need to have virtual machine software installed on your computer. We
recommend VirtualBox, which is free and open
source, but other solutions are also possible.
Additionally, you need to have either UMHLens
or OpenLens installed.
Create a virtual machine
Create a new virtual machine in your virtual machine software. Make sure to
use the following settings:
Operating System: Linux
Version: Other Linux (64-bit)
CPU cores: 4
Memory size: 8 GB
Hard disk size: 32 GB
Also, the network settings of the virtual machine must allow communication with
the internet and the host machine. If you are using VirtualBox, you can check
the network settings by clicking on the virtual machine in the VirtualBox
manager and then on Settings. In the Network tab, make sure that the
Adapter 1 is set to Bridged Adapter.
Install Flatcar Linux
Start the virtual machine.
Accept the License.
Set a static IP address.
Select sda as the disk.
Select Confirm.
Now the installation will start. You should see a green command-line prompt
for the core user soon after. At this point the system
is still installing. After a few minutes, depending on the speed of your
network, the installation will finish and the system will reboot.
By default, it will reboot into the installation environment. Just shut down the
virtual machine and remove the ISO image from the CD drive, then boot the
virtual machine again. This way, the installation process will continue, at the
end of which you will see a grey login prompt that says flatcar-1-umh login:, as
well as the IP address of the device.
Please note that the installation may take some time. This largely depends on the available resources
including network speed and system performance.
Connect to the virtual machine
You can leave the virtual machine running and connect to it using SSH, so that
it is easier to work with.
Open a terminal on your computer and connect to the edge device via SSH, using
the IP address you saw on the login prompt:
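For example, with the core user:
ssh core@<ip-address-of-the-vm>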
If you are on Windows, you can use MobaXTerm
to connect to the edge device via SSH. Open MobaXTerm and click on Session
in the top left corner. Then click on SSH and enter the IP address of the
edge device in the Remote host field. Click on Advanced SSH settings and
enter core in the Username field. Click on Save and then on Open.
The default password for the core user is umh.
Import the cluster configuration
From your SSH session, run the following command to get the cluster configuration:
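As in the bare-metal guide, the k3s cluster configuration typically lives at /etc/rancher/k3s/k3s.yaml:
sudo cat /etc/rancher/k3s/k3s.yaml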
From the homepage, click on Browse Clusters in Catalog. You should see
all your clusters.
Click on a cluster to connect to it.
Navigate to Helm > Releases and change the namespace from default to
united-manufacturing-hub in the upper right corner.
Select the united-manufacturing-hub Release to inspect the
release details, the installed resources, and the Helm values.
Troubleshooting
The installation stops at the green login prompt
To check the status of the installation, run the following command:
systemctl status installer
If the installation is still running, you should see something like this:
● installer.service - Flatcar Linux Installer
Loaded: loaded (/usr/lib/systemd/system/installer.service; static; vendor preset: enabled)
Active: active (running) since Wed 2021-05-12 14:00:00 UTC; 1min 30s ago
Otherwise, the installation failed. You can check the logs to see what went wrong.
I can access the cluster but there are no resources
First completely shut down UMHLens / OpenLens (from the
system tray). Then start it again and try to access the cluster.
If that doesn’t work, access the virtual machine via SSH and run the following
command:
systemctl status k3s
If the output contains a status different from active (running), the cluster
is not running. Otherwise, the UMH installation failed. You can check the logs
with the following commands:
systemctl status umh-install
systemctl status helm-install
If any of the commands returns errors, it is probably easier to reinstall the
system.
I can’t SSH into the virtual machine
If you can’t SSH into the virtual machine, make sure that the network settings
for the virtual machine are correct. If you are using VirtualBox, you can check
the network settings by clicking on the virtual machine in the VirtualBox
manager and then on Settings. In the Network tab, make sure that the
Adapter 1 is set to Bridged Adapter.
Disable any VPNs that you might be using.
What’s next
You can follow the Getting Started guide
to get familiar with the UMH stack.
If you already know your way around the United Manufacturing Hub, you can
follow the Administration guides to
configure the stack for production.
4.1.3 - Local k3d Installation
This page describes how to deploy the United Manufacturing Hub locally using
k3d.
Here is a step-by-step guide on how to deploy the UMH stack using
k3d, a lightweight wrapper to run
k3s in Docker. k3d makes it very easy to create
single- and multi-node k3s clusters in Docker, e.g. for local development on
Kubernetes.
Before you begin
Your system must meet the following requirements before you can install the
United Manufacturing Hub:
CPU cores: 4
Memory size: 8 GB
Hard disk size: 32 GB
You also need to have Docker up and
running and either UMHLens
or OpenLens installed.
The --api-port flag is used to expose the Kubernetes API server on the
host machine. If the 6443 port is already in use, you can use any other
port.
The --port flag is used to expose the ports of the services
running in the cluster on the host machine. If any of the ports on the left
side of the : is already in use, you can use any other port.
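These flags belong to the cluster-creation command; a sketch of a typical invocation (the port mapping is illustrative) is:
k3d cluster create united-manufacturing-hub --api-port 6443 -p "8080:80@loadbalancer"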
Verify that the cluster is up and running.
kubectl get nodes
The output should look like this:
NAME STATUS ROLES AGE VERSION
k3d-united-manufacturing-hub-server-0 Ready control-plane,master 10s v1.24.4+k3s1
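At this point you can install the Helm chart; with the UMH repository (https://repo.umh.app) a typical sequence looks like this:
helm repo add united-manufacturing-hub https://repo.umh.app
helm repo update
helm install united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub --create-namespace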
From the homepage, click on Browse Clusters in Catalog. You should see
all your clusters.
Click on a cluster to connect to it.
Navigate to Helm > Releases and change the namespace from default to
united-manufacturing-hub in the upper right corner.
Select the united-manufacturing-hub Release to inspect the
release details, the installed resources, and the Helm values.
Troubleshooting
I don’t see the cluster in UMHLens / OpenLens
If you don’t see the cluster in UMHLens / OpenLens, you
might have to add the cluster manually. To do so, follow these steps:
Open a terminal and run the following command to get the kubeconfig file:
k3d kubeconfig get united-manufacturing-hub
Copy the output of the command.
Open UMHLens / OpenLens, click on the three horizontal
lines in the upper left corner and choose Files > Add Cluster.
Paste the kubeconfig and click Add clusters.
What’s next
You can follow the Getting Started guide
to get familiar with the UMH stack.
If you already know your way around the United Manufacturing Hub, you can
follow the Administration guides to
configure the stack for production.
4.2 - Upgrading
This section contains the upgrading guides for the different versions of the United Manufacturing Hub.
The United Manufacturing Hub is a continuously evolving product. This means that
new features and bug fixes are added to the product on a regular basis. This
section contains the upgrading guides for the different versions of the United
Manufacturing Hub.
The upgrading process is done by upgrading the Helm chart.
4.2.1 - Upgrade to v0.9.14
This page describes how to upgrade the United Manufacturing Hub to version 0.9.14
This page describes how to upgrade the United Manufacturing Hub to version
0.9.14. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
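The repository name is your choice and the URL is the one mentioned above, for example:
Name: united-manufacturing-hub
URL: https://repo.umh.app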
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-hivemqce
united-manufacturing-hub-kafka
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
united-manufacturing-hub-mqttbridge
Open the Network tab.
From the Services section, delete the following services:
united-manufacturing-hub-kafka
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed. For example,
if you want to apply the new tweaks to the resources in order to avoid the
Out Of Memory crash of the MQTT Broker, you can change the following values:
You can also enable the new container registry by changing the values in the
image or image.repository fields from unitedmanufacturinghub/<image-name>
to ghcr.io/united-manufacturing-hub/<image-name>.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.2 - Upgrade to v0.9.13
This page describes how to upgrade the United Manufacturing Hub to version 0.9.13
This page describes how to upgrade the United Manufacturing Hub to version
0.9.13. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.3 - Upgrade to v0.9.12
This page describes how to upgrade the United Manufacturing Hub to version 0.9.12
This page describes how to upgrade the United Manufacturing Hub to version
0.9.12. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
This step is only needed if you enabled RBAC for the MQTT Broker and changed the
default password. If you did not change the default password, you can skip this
step.
Navigate to Config > ConfigMaps.
Select the united-manufacturing-hub-hivemqce-extension
ConfigMap.
Copy the content of credentials.xml and save it in a safe place.
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Remove MQTT Broker extension PVC
In this version we reduced the size of the MQTT Broker extension PVC. To do so,
we need to delete the old PVC and create a new one. This process will set the
credentials of the MQTT Broker to the default ones. If you changed the default
password, you can restore them after the upgrade.
Navigate to Storage > Persistent Volume Claims.
Select the united-manufacturing-hub-hivemqce-claim-extensions PVC and
click Delete.
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
There are some incompatible changes in this version. To avoid errors, you
need to change the following values:
console:
  console:
    config:
      kafka:
        tls:
          passphrase: "" # <- remove this line
console.extraContainers: remove the property and its content.
console:
  extraContainers: {} # <- remove this line
console.extraEnv: remove the property and its content.
console:
  extraEnv: "" # <- remove this line
console.extraEnvFrom: remove the property and its content.
console:
  extraEnvFrom: "" # <- remove this line
console.extraVolumeMounts: remove the |- characters right after the
property name. It should look like this:
console:
  extraVolumeMounts: # <- remove the `|-` characters in this line
    - name: united-manufacturing-hub-kowl-certificates
      mountPath: /SSL_certs/kafka
      readOnly: true
console.extraVolumes: remove the |- characters right after the
property name. It should look like this:
console:
  extraVolumes: # <- remove the `|-` characters in this line
    - name: united-manufacturing-hub-kowl-certificates
      secret:
        secretName: united-manufacturing-hub-kowl-secrets
Change the console.service property to the following:
redis.sentinel: remove the property and its content.
redis:
  sentinel: {} # <- remove all the content of this section
Remove the property redis.master.command:
redis:
  master:
    command: /run.sh # <- remove this line
timescaledb-single.fullWalPrevention: remove the property and its content.
timescaledb-single:
  fullWalPrevention: # <- remove this line
    checkFrequency: 30 # <- remove this line
    enabled: false # <- remove this line
    thresholds: # <- remove this line
      readOnlyFreeMB: 64 # <- remove this line
      readOnlyFreePercent: 5 # <- remove this line
      readWriteFreeMB: 128 # <- remove this line
      readWriteFreePercent: 8 # <- remove this line
timescaledb-single.loadBalancer: remove the property and its content.
timescaledb-single:
  loadBalancer: # <- remove this line
    annotations: # <- remove this line
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
    enabled: true # <- remove this line
    port: 5432 # <- remove this line
timescaledb-single.replicaLoadBalancer: remove the property and its content.
timescaledb-single:
  replicaLoadBalancer:
    annotations: # <- remove this line
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
    enabled: false # <- remove this line
    port: 5432 # <- remove this line
timescaledb-single.secretNames: remove the property and its content.
timescaledb-single:
  secretNames: {} # <- remove this line
timescaledb-single.unsafe: remove the property and its content.
timescaledb-single:
  unsafe: false # <- remove this line
Change the value of the timescaledb-single.service.primary.type property
to LoadBalancer:
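timescaledb-single:
  service:
    primary:
      type: LoadBalancer
Click Upgrade.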
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.4 - Upgrade to v0.9.11
This page describes how to upgrade the United Manufacturing Hub to version 0.9.11
This page describes how to upgrade the United Manufacturing Hub to version
0.9.11. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.5 - Upgrade to v0.9.10
This page describes how to upgrade the United Manufacturing Hub to version 0.9.10
This page describes how to upgrade the United Manufacturing Hub to version
0.9.10. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
In this release, the Grafana version has been updated from 8.5.9 to 9.3.1.
Check the release notes for
further information about the changes.
Additionally, the way default plugins are installed has changed. Unfortunately,
it is necessary to manually install all the plugins that were previously installed.
If you didn’t install any plugin other than the default ones, you can skip this
section.
Follow these steps to see the list of plugins installed in your cluster:
Open the browser and go to the Grafana dashboard.
Navigate to the Configuration > Plugins tab.
Select the Installed filter.
Write down all the plugins that you manually installed. You can recognize
them by not having the Core tag.
The following ones are installed by default, therefore you can skip them:
ACE.SVG by Andrew Rodgers
Button Panel by UMH Systems Gmbh
Button Panel by CloudSpout LLC
Discrete by Natel Energy
Dynamic Text by Marcus Olsson
FlowCharting by agent
Pareto Chart by isaozler
Pie Chart (old) by Grafana Labs
Timepicker Buttons Panel by williamvenner
UMH Datasource by UMH Systems Gmbh
Untimely by factry
Worldmap Panel by Grafana Labs
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
Afterwards, you can reinstall the additional Grafana plugins.
Replace VerneMQ with HiveMQ
In this upgrade we switched from using VerneMQ to HiveMQ as our MQTT Broker
(you can read the
blog article
about it).
While this process is fully backwards compatible, we suggest updating Node-RED
flows and any other additional service that uses MQTT to use the new service
broker called united-manufacturing-hub-mqtt. The old
united-manufacturing-hub-vernemq is still functional and,
despite the name, also points to HiveMQ, but it will be removed in future upgrades.
Please double-check if all of your services can connect to the new MQTT broker.
They might need to be restarted so that they can resolve the DNS name and get
the new IP. Also, with tools like ChirpStack, it can happen that you need to
specify the client ID, as the automatically generated ID worked with VerneMQ
but is now declined by HiveMQ.
Troubleshooting
Some microservices can’t connect to the new MQTT broker
If you are using the united-manufacturing-hub-mqtt service,
but some microservice can’t connect to it, restarting the microservice might
solve the issue. To do so, you can delete the Pod of the microservice and let
Kubernetes recreate it.
ChirpStack can’t connect to the new MQTT broker
ChirpStack uses a generated client-id to connect to the MQTT broker. This
client-id is not accepted by HiveMQ. To solve this issue, you can set the
client_id field in the integration.mqtt section of the chirpstack configuration
file to a fixed value:
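For example, in the integration.mqtt section of the TOML configuration (the fixed value is up to you):
[integration.mqtt]
client_id="chirpstack"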
This page describes how to upgrade the United Manufacturing Hub to version 0.9.9
This page describes how to upgrade the United Manufacturing Hub to version
0.9.9. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
In the grafana section, find the extraInitContainers field and change the
value of the image field to unitedmanufacturinghub/grafana-plugin-extractor:0.1.4.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.7 - Upgrade to v0.9.8
This page describes how to upgrade the United Manufacturing Hub to version 0.9.8
This page describes how to upgrade the United Manufacturing Hub to version
0.9.8. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, therefore you can skip it.
To delete a resource, you can select it using the box on the left of the
resource name and click the - button on the bottom right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.8 - Upgrade to v0.9.7
This page describes how to upgrade the United Manufacturing Hub to version 0.9.7
This page describes how to upgrade the United Manufacturing Hub to version
0.9.7. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, select it using the checkbox on the left of the
resource name and click the - button in the bottom-right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
In the timescaledb-single section, make sure that the image.tag field
is set to pg13.8-ts2.8.0-p1.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
Change Factoryinsight API version
The Factoryinsight API version has changed from v1 to v2. To make sure that
you are using the new version, click on any Factoryinsight Pod and check that the
VERSION environment variable is set to 2.
If it’s not, follow these steps:
Navigate to the Workloads > Deployments tab.
Select the united-manufacturing-hub-factoryinsight-deployment deployment.
Click the Edit button to open the deployment’s configuration.
Find the spec.template.spec.containers[0].env field.
Set the value field of the VERSION variable to 2.
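Alternatively, a sketch of the same change with kubectl, assuming the default namespace:
kubectl set env deployment/united-manufacturing-hub-factoryinsight-deployment \
  VERSION=2 -n united-manufacturing-hub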
4.2.9 - Upgrade to v0.9.6
This page describes how to upgrade the United Manufacturing Hub to version 0.9.6
This page describes how to upgrade the United Manufacturing Hub to version
0.9.6. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
This command could take a while to complete, especially on larger tables.
Type exit to close the shell.
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, select it using the checkbox on the left of the
resource name and click the - button in the bottom-right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
From the StatefulSet section, delete the following statefulsets:
united-manufacturing-hub-mqttbridge
united-manufacturing-hub-hivemqce
united-manufacturing-hub-nodered
united-manufacturing-hub-sensorconnect
Upgrade Helm Chart
Now everything is ready to upgrade the Helm chart.
Navigate to the Helm > Releases tab.
Select the united-manufacturing-hub release and click
Upgrade.
In the Helm Upgrade window, make sure that the Upgrade version field
contains the version you want to upgrade to.
You can also change the values of the Helm chart, if needed.
Click Upgrade.
The upgrade process can take a few minutes. The upgrade is complete when the
Status field of the release is Deployed.
4.2.10 - Upgrade to v0.9.5
This page describes how to upgrade the United Manufacturing Hub to version 0.9.5
This page describes how to upgrade the United Manufacturing Hub to version
0.9.5. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Now you can close the shell by typing exit and continue with the upgrade process.
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, select it using the checkbox on the left of the
resource name and click the - button in the bottom-right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
end_time_stamp has been renamed to timestamp_ms_end
deleteShiftByAssetIdAndBeginTimestamp and deleteShiftById have been removed.
Use the deleteShift
message instead.
4.2.11 - Upgrade to v0.9.4
This page describes how to upgrade the United Manufacturing Hub to version 0.9.4
This page describes how to upgrade the United Manufacturing Hub to version
0.9.4. Before upgrading, remember to backup the
database,
Node-RED flows,
and your cluster configuration.
Add Helm repo in UMHLens / OpenLens
Check if the UMH Helm repository is added in UMHLens / OpenLens.
To do so, from the top-left menu, select File > Preferences (or press CTRL + ,).
Click on the Kubernetes tab and check if the Helm Chart section contains
the https://repo.umh.app repository.
If it doesn’t, click the Add Custom Helm Repo button and fill in the following
values:
Clear Workloads
Some workloads need to be deleted before upgrading. This process does not delete
any data, but it will cause downtime. If a workload is missing, it means that it
was not enabled in your cluster, so you can skip it.
To delete a resource, select it using the checkbox on the left of the
resource name and click the - button in the bottom-right corner.
Open the Workloads tab.
From the Deployment section, delete the following deployments:
If you have enabled Barcodereader,
find the barcodereader section and set the
following values, adding the missing ones and updating the already existing
ones:
enabled: false
image:
  pullPolicy: IfNotPresent
resources:
  requests:
    cpu: "2m"
    memory: "30Mi"
  limits:
    cpu: "10m"
    memory: "60Mi"
scanOnly: false # Debug mode, will not send data to kafka
Click Upgrade.
The upgrade process can take a few minutes. The process is complete when the
Status field of the release is Deployed.
4.3 - Administration
This section describes how to manage and configure the United Manufacturing Hub
cluster.
In this section, you will find information about how to manage and configure the
United Manufacturing Hub cluster, from customizing the cluster to accessing the
different services.
4.3.1 - Access the Database
This page describes how to access the United Manufacturing Hub database to
perform SQL operations using a database client, the CLI or Grafana.
There are multiple ways to access the database. If you want to just visualize data,
then using Grafana or a database client is the easiest way. If you need to also
perform SQL commands, then using a database client or the CLI are the best options.
Generally, using a database client gives you the most flexibility, since you can
both visualize the data and manipulate the database. However, it requires you to
install a database client on your machine.
Using the CLI gives you more control over the database, but it requires you to
have a good understanding of SQL.
Grafana, on the other hand, is for visualizing data. It is a good option if
you just want to see the data in a dashboard and don’t need to manipulate it.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Get the database credentials
If you are not using the CLI, you need to know the database credentials. You can
find them in the timescale-post-init-pw Secret. By
default, the username is factoryinsight and the password is changeme.
...
ALTER USER factoryinsight WITH PASSWORD 'changeme';
...
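If you prefer the command line, you can read the Secret with kubectl. A sketch, assuming the default namespace; the key names inside the Secret may differ, so inspect the full object first:
kubectl get secret timescale-post-init-pw -n united-manufacturing-hub -o yaml
# Values are base64-encoded; decode a single key like this (key name hypothetical):
kubectl get secret timescale-post-init-pw -n united-manufacturing-hub \
  -o jsonpath='{.data.password}' | base64 --decode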
Access the database using a database client
There are many database clients that you can use to access the database.
For the sake of this tutorial, pgAdmin will be used as an example, but other clients
have similar functionality. Refer to the specific client documentation for more
information.
Forward the database port to your local machine
From the Pods section in UMHLens / OpenLens, find
the united-manufacturing-hub-timescaledb-0 Pod.
In the Pod Details window, click the Forward button next to the
postgresql:5432/TCP port.
Enter a port number, such as 5432, and click Start. You can disable the
Open in browser option if you don’t want to open the port in your browser.
Using pgAdmin
You can use pgAdmin to access the database. To do so,
you need to install the pgAdmin client on your machine. For more information, see
the pgAdmin documentation.
Once you have installed the client, you can add a new server from the main window.
In the General tab, give the server a meaningful name. In the Connection
tab, enter the database credentials:
The Host name/address is localhost.
The Port is the port you forwarded.
The Maintenance database is postgres.
The Username and Password are the ones you found in the Secret.
Click Save to save the server.
You can now connect to the database by double-clicking the server.
Use the side menu to navigate through the server. The tables are listed under
the Schemas > public > Tables section of the factoryinsight database.
Refer to the pgAdmin documentation
for more information on how to use the client to perform database operations.
Access the database using the command line interface
You can access the database from the command line using the psql command
directly from the united-manufacturing-hub-timescaledb-0 Pod.
You will not need credentials to access the database from the Pod’s CLI.
Open a shell in the database Pod
From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0
to open the details page.
Click the Pod Shell button to open a shell in the container.
Enter the postgres shell:
psql
Connect to the database:
\c factoryinsight
Perform SQL commands
Once you have a shell in the database, you can perform
SQL commands.
For example, to create an index on the processValueTable:
CREATE INDEX ON processvaluetable (valuename);
When you are done, exit the postgres shell:
exit
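For one-off statements you can also skip the interactive shell entirely. A sketch with kubectl exec, assuming the default namespace:
kubectl exec -it united-manufacturing-hub-timescaledb-0 -n united-manufacturing-hub \
  -- psql -d factoryinsight -c 'CREATE INDEX ON processvaluetable (valuename);'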
Access the database using Grafana
You can use Grafana to visualize data from the database.
Add PostgreSQL as a data source
Open the Grafana dashboard in your browser.
From the Configuration (gear) icon, select Data Sources.
Click Add data source and select PostgreSQL.
Configure the connection to the database:
The Host is united-manufacturing-hub.united-manufacturing-hub.svc.cluster.local:5432.
The Database is factoryinsight.
The User and Password are the ones you found in the Secret.
Set TLS/SSL Mode to require.
Enable TimescaleDB.
Everything else can be left as the default.
Click Save & Test to save the data source.
Now click on Explore to start querying the database.
You can also create dashboards using the newly created data source.
This page describes how to access services from within the cluster.
All the services deployed in the cluster are visible to each other. That makes it
easy to connect them together.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Connect to a service from another service
To connect to a service from another service, you can use the service name as the
host name.
To get a list of available services and related ports you can open
UMHLens / OpenLens and go to Network > Services.
All of them are available from within the cluster. The ones of type LoadBalancer
are also available from outside the cluster using the node IP.
Example
The most common use case is to connect to the MQTT Broker from Node-RED.
To do that, when you create the MQTT node, you can use the service name
united-manufacturing-hub-mqtt as the host name and one of the ports
listed in the Ports column.
The MQTT service name has changed since version 0.9.10. If you are using an older
version, use united-manufacturing-hub-vernemq instead of
united-manufacturing-hub-mqtt.
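Within the same namespace the bare service name resolves on its own; from other namespaces, use the fully qualified form, such as united-manufacturing-hub-mqtt.united-manufacturing-hub.svc.cluster.local. A quick connectivity check from inside any Pod, assuming nc is available in the image and the broker listens on the standard MQTT port 1883:
nc -vz united-manufacturing-hub-mqtt 1883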
This page describes how to access services from outside the cluster.
Some of the microservices in the United Manufacturing Hub are exposed outside
the cluster with a LoadBalancer service. A LoadBalancer is a service
that exposes a set of Pods on the same network as the cluster, but
not necessarily to the entire internet. The LoadBalancer service
provides a single IP address that can be used to access the Pods.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Accessing the services
The LoadBalancer service provides a single IP address that can be used to access
the Pods. To find the IP address, open UMHLens / OpenLens
and navigate to Network > Services. The IP address is listed in the
External IP column.
To access the services, use the IP address and the port number of the service,
e.g. http://192.168.1.100:8080.
If you installed the United Manufacturing Hub on your local machine, either
using the Management Console or the command line, the services are accessible
at localhost:<port-number>.
Services with LoadBalancer by default
Several services are exposed outside the cluster with a LoadBalancer service by
default, including the database, Grafana, Node-RED, the MQTT broker, the Kafka
broker, and the Kafka console.
To access Node-RED, you need to use the /node-red path, e.g.
http://192.168.1.100:1880/node-red.
Services without a LoadBalancer
Some of the microservices in the United Manufacturing Hub are exposed via
a ClusterIP service. That means that they are only accessible from within the
cluster itself. To access them from outside the cluster, you need to create a
LoadBalancer service.
For any other microservice, follow these steps to enable the LoadBalancer service:
Open UMHLens / OpenLens and navigate to Network >
Services.
Select the service and click the Edit button.
Scroll down to the status.loadBalancer section and change it to the following:
status:
  loadBalancer:
    ingress:
      - ip: <external-ip>
Replace <external-ip> with the external IP address of the node.
Scroll to the spec.type section and change the value from ClusterIP to
LoadBalancer.
Click Save to apply the changes.
If you installed the United Manufacturing Hub on your local machine, either
using the Management Console or the command line, you also need to map the port
exposed by the k3d cluster to a port on your local machine. To do that, run the
following command:
Replace <local-port> with a free port number on your local machine, and
<cluster-port> with the port number of the service.
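A sketch of such a mapping with k3d, assuming the cluster is named united-manufacturing-hub:
k3d cluster edit united-manufacturing-hub \
  --port-add "<local-port>:<cluster-port>@loadbalancer"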
Port forwarding in UMHLens / OpenLens
If you don’t want to create a LoadBalancer service, effectively exposing the
microservice to anyone that has access to the host IP address, you can use
UMHLens / OpenLens to forward the port to your local
machine.
Open UMHLens / OpenLens and navigate to Network >
Services.
Select the service that you want to access.
Scroll down to the Connection section and click the Forward… button.
From the dialog, you can choose a local port to forward the cluster port to, or
leave it empty to use a random port.
Click Forward to apply the changes.
If you left the checkbox Open in browser checked, then the service will
open in your default browser.
You can see and manage the forwarded ports of your cluster in the Network >
Port Forwarding section.
Port forwarding can be unstable, especially if the connection to the cluster is
slow. If you are experiencing issues, try to create a LoadBalancer service
instead.
Security considerations
MQTT broker
There are some security considerations to keep in mind when exposing the MQTT broker.
By default, the MQTT broker is configured to allow anonymous connections. This
means that anyone can connect to the broker without providing any credentials.
This is not recommended for production environments.
To secure the MQTT broker, you can configure it to require authentication. For
that, you can either enable RBAC
or set up HiveMQ PKI (recommended
for production environments).
If you are using a version of the United Manufacturing Hub older than 0.9.10,
then you need to change the ACL configuration
to allow your MQTT client to connect to the broker.
Troubleshooting
LoadBalancer service stuck in Pending state
If the LoadBalancer service is stuck in the Pending state, it probably means
that the host port is already in use. To fix this, edit the service and set
the spec.ports.port field to a different port number.
4.3.4 - Access Kafka Outside the Cluster
This page describes how to access Kafka from outside the cluster.
By default, the Kafka broker is only available from within the cluster, so you
cannot access it from external applications.
You can enable external access from the Kafka configuration.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Enable external access from Kafka configuration
From UMHLens / OpenLens, go to Helm > Releases.
Click on the Upgrade button.
Search for the kafka section and edit the following values:
This page describes how to expose Grafana to the Internet.
This page describes how to expose Grafana to the Internet so that you can access
it from outside the Kubernetes cluster.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Enable the ingress
To expose Grafana to the Internet, you need to enable the ingress.
Open UMHLens / OpenLens and go to the Helm > Releases
page.
Click the Upgrade button and search for Grafana.
Scroll down to the ingress section.
Set the enabled field to true.
Add your domain name to the hosts field.
Click Upgrade to apply the changes.
Remember to add a DNS record for your domain name that points to the external IP
address of the Kubernetes host. You can find the external IP address of the
Kubernetes host on the Nodes page in UMHLens / OpenLens.
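The same values can be set non-interactively with Helm. A minimal sketch, with release and chart names assumed and grafana.example.com standing in for your domain:
helm upgrade united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub \
  -n united-manufacturing-hub --reuse-values \
  --set grafana.ingress.enabled=true \
  --set 'grafana.ingress.hosts[0]=grafana.example.com'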
This page describes how to install custom drivers in NodeRed.
NodeRed is running on Alpine Linux as a non-root user. This means that you can’t
install packages with apk. This tutorial shows you how to install packages
with proper security measures.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Change the security context
From the StatefulSets section in UMHLens / OpenLens, click on united-manufacturing-hub-nodered
to open the details page.
Click the Edit button to open the StatefulSet’s configuration.
Press Ctrl+F and search for securityContext.
Set the values of the runAsUser field to 0, of fsGroup to 0, and of
runAsNonRoot to false.
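A sketch of the same change with a kubectl patch, assuming the default namespace. Running Node-RED as root removes a security barrier, so consider reverting these values once your packages are installed:
kubectl patch statefulset united-manufacturing-hub-nodered \
  -n united-manufacturing-hub --type merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":0,"fsGroup":0,"runAsNonRoot":false}}}}}'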
This page describes how to execute Kafka shell scripts.
When working with Kafka, you may need to execute shell scripts to perform
administrative tasks. This page describes how to execute Kafka shell scripts.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Open a shell in the Kafka container
From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-kafka-0
to open the details page.
Click the Pod Shell button to open a shell in the container.
Navigate to the Kafka bin directory:
cd /opt/bitnami/kafka/bin
Execute any Kafka shell scripts. For example, to list all topics:
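The listing command below assumes the broker listens on localhost:9092 inside the container:
./kafka-topics.sh --bootstrap-server localhost:9092 --list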
This page describes how to reduce the size of the United Manufacturing Hub database.
Over time, time-series data can consume a large amount of disk space. To reduce
the amount of disk space used by time-series data, there are three options:
Enable data compression. This reduces the required disk space by applying
mathematical compression to the data. This compression is lossless, so the data
is not changed in any way. However, it will take more time to compress and
decompress the data. For more information, see how
TimescaleDB compression works.
Enable data retention. This deletes old data that is no longer needed, by
setting policies that automatically delete data older than a specified time. This
can be beneficial for managing the size of the database, as well as adhering to
data retention regulations. However, by definition, data loss will occur. For
more information, see how
TimescaleDB data retention works.
Downsampling. This is a method of reducing the amount of data stored by
aggregating data points over a period of time. For example, you can aggregate
data points over a 30-minute period, instead of storing each data point. If exact
data is not required, downsampling can be useful to reduce database size.
However, data may be less accurate.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Open the database shell
From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0
to open the details page.
Click the Pod Shell button to open a shell in the container.
Enter the postgres shell:
psql
Connect to the database:
\c factoryinsight
Enable data compression
To enable data compression, you need to execute the following SQL command from
the database shell:
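The original statements are not preserved on this page; a minimal sketch following the TimescaleDB compression API, with the hypertable name taken from this documentation and the segmentby column assumed:
-- enable compression on the hypertable
ALTER TABLE processvaluetable SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id');
-- automatically compress chunks older than seven days
SELECT add_compression_policy('processvaluetable', INTERVAL '7 days');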
From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0
to open the details page.
Click the Pod Shell button to open a shell in the container.
Enter the postgres shell:
psql
Connect to the database:
\c factoryinsight
Choose the assets to delete
You have multiple options for deleting assets: you can delete a single asset,
all assets in a location, or all assets with a specific name.
To do so, you can customize the SQL command using different filters. Specifically,
a combination of the following filters:
assetname
location
customer
To filter an SQL command, you can use the WHERE clause. For example, using all
of the filters:
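A hypothetical example combining all three filters; the table and column names are placeholders and must be checked against your schema:
DELETE FROM assettable
WHERE assetname = '<assetname>'
  AND location = '<location>'
  AND customer = '<customer>';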
This page shows how to explore cached data in the United Manufacturing Hub.
When working with the United Manufacturing Hub, you might want to visualize
information about the cached data. This page shows how you can access the cache
and explore the data.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Open a shell in the cache Pod
Open UMHLens / OpenLens and navigate to the Config >
Secrets page.
Get the cache password from the Secret redis-secret.
From the Pods section click on united-manufacturing-hub-redis-master-0
to open the details page.
If you have multiple cache Pods, you can select any of them.
Click the Pod Shell button to open a shell in the container.
Enter the shell:
redis-cli -a <cache-password>
Now you can execute any command. For example, to get the number of keys in
the cache, run:
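DBSIZE returns the number of keys in the currently selected database:
DBSIZE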
This page shows how to optimize the database in order to reduce the time needed
to execute queries.
When you have a large database, some queries can take a long time to execute.
This is especially noticeable in Grafana, when the dropdown menu in the
datasource takes a long time to load or does not load at all.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
Your United Manufacturing Hub must be at or later than version 0.9.4.
To check the United Manufacturing Hub version, open UMHLens / OpenLens and go to Helm > Releases. The version is listed in the Version column.
Open a shell in the database container
From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0
to open the details page.
Click the Pod Shell button to open a shell in the container.
Enter the postgres shell:
psql
Connect to the database:
\c factoryinsight
Create an index
Indexes are used to speed up queries. Run this query to create an index on the
processvaluetable table:
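A sketch, assuming the same valuename index shown in the database access guide:
CREATE INDEX ON processvaluetable (valuename);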
If you have already created an index, you can roll back the factoryinsight deployment
to version 0.9.4. This way it will use a less optimized but faster query, significantly
reducing the execution time.
From the Deployments section in UMHLens / OpenLens, click
on united-manufacturing-hub-factoryinsight-deployment to open the
details page.
Click the Edit button to open the deployment’s configuration.
Scroll down to the spec.containers section and change the image value to
unitedmanufacturinghub/factoryinsight:0.9.4.
This page describes how to change the datatype of some columns in the database
in order to optimize the performance.
In version 0.9.5 and prior, some tables in the database were created with the
varchar data type. This data type is not optimal for storing large amounts of
data. In version 0.9.6, the data type of those columns was changed from varchar
to text. Applying this migration optimizes the database.
Before you begin
You need to have a UMH cluster. If you do not already have a cluster, you can
create one by using the Management Console.
From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0
to open the details page.
Click the Pod Shell button to open a shell in the container.
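The migration statements themselves are not preserved on this page. A hypothetical sketch of such a type change, with table and column names invented purely for illustration:
psql -d factoryinsight -c "ALTER TABLE exampletable ALTER COLUMN examplecolumn TYPE text;"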
You can find a list of all available parameters down below.
If OutputPath is not set, the backup will be stored in the current folder.
This script might take a while to finish, depending on the size of your database
and your connection speed.
If the connection is interrupted, there is currently no option to resume the
process, so you will need to start over.
Here is a list of all available parameters:

Parameter           | Description                                                  | Required | Default value
--------------------|--------------------------------------------------------------|----------|---------------
GrafanaToken        | Grafana API key                                              | Yes      |
IP                  | IP of the cluster to backup                                  | Yes      |
KubeconfigPath      | Path to the kubeconfig file                                  | Yes      |
DatabaseDatabase    | Name of the database to backup                               | No       | factoryinsight
DatabasePassword    | Password of the database user                                | No       | changeme
DatabasePort        | Port of the database                                         | No       | 5432
DatabaseUser        | Database user                                                | No       | factoryinsight
DaysPerJob          | Number of days worth of data to backup in each parallel job  | No       | 31
EnableGpgEncryption | Set to true if you want to encrypt the backup                | No       | false
EnableGpgSigning    | Set to true if you want to sign the backup                   | No       | false
GpgEncryptionKeyId  | ID of the GPG key used for encryption                        | No       |
GpgSigningKeyId     | ID of the GPG key used for signing                           | No       |
GrafanaPort         | External port of the Grafana service                         | No       | 8080
OutputPath          | Path to the folder where the backup will be stored           | No       | Current folder
ParallelJobs        | Number of parallel backup jobs to run                        | No       | 4
SkipDiskSpaceCheck  | Skip checking available disk space                           | No       | false
SkipGpgQuestions    | Set to true if you want to sign or encrypt the backup        | No       | false
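A hypothetical invocation of the backup script; the script name and parameter syntax are assumptions, while the parameter names come from the table above:
.\backup.ps1 -IP <cluster-ip> -GrafanaToken <grafana-api-key> -KubeconfigPath .\kubeconfig.yaml -OutputPath C:\backups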
Restore
Each component of the United Manufacturing Hub can be restored separately, in
order to allow for more flexibility and to reduce the damage in case of a
failure.
Cluster configuration
To restore the Kubernetes cluster, execute the .\restore-helm.ps1 script with
the following parameters:
<REMOTE_HOST> is the IP of the server where the database is running.
Use localhost if you installed the United Manufacturing Hub using k3d.
<BACKUP_NAME> is the name of the backup file.
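A hypothetical call; the parameter names are assumptions, so check the script’s built-in help before running it:
.\restore-helm.ps1 -IP <REMOTE_HOST> -BackupPath .\<BACKUP_NAME>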
Grafana database
If you want to back up the Grafana database, you can follow the same steps as
above, but you need to replace any occurrence of factoryinsight with
grafana.
Additionally, you also need to write down the credentials in the
grafana-secret Secret, as they will be needed
to access the dashboard after restoring the database.
Restoring the database
This section is untested. Please report any issues you encounter.
For this section, we assume that you are restoring the data to a fresh United
Manufacturing Hub installation with an empty database.
Copy the backup file to the database pod
Open UMHLens / OpenLens.
Launch a new terminal session by clicking on the + button in the
bottom-left corner of the window.
Run the following command to copy the backup file to the database pod:
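A sketch with kubectl cp; the backup file name is a placeholder and the default namespace is assumed:
kubectl cp <backup-file> united-manufacturing-hub/united-manufacturing-hub-timescaledb-0:/tmp/<backup-file>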
This page describes how to import and export Node-RED flows.
Export Node-RED Flows
To export Node-RED flows, please follow the steps below:
Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your
browser. Replace <CLUSTER-IP> with the IP address of your cluster, or
localhost if you are running the cluster locally.
From the top-right menu, select Export.
From the Export dialog, select which nodes or flows you want to export.
Click Download to download the exported flows, or Copy to clipboard to
copy the exported flows to the clipboard.
The credentials of the connector nodes are not exported. You will need to
re-enter them after importing the flows.
Import Node-RED Flows
To import Node-RED flows, please follow the steps below:
Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your
browser. Replace <CLUSTER-IP> with the IP address of your cluster, or
localhost if you are running the cluster locally.
From the top-right menu, select Import.
From the Import dialog, select the file containing the exported flows, or
paste the exported flows from the clipboard.
Click Import to import the flows.
4.5 - Security
This section contains information about how to secure the United Manufacturing
Hub.
4.5.1 - Change VerneMQ ACL Configuration
This page describes how to change the ACL configuration to allow more users
to publish to the MQTT broker
Change VerneMQ ACL configuration
Open UMHLens / OpenLens
Navigate to Helm > Releases.
Select the united-manufacturing-hub release and click Upgrade.
Find the _000_commonConfig.infrastructure.mqtt section.
Update the AclConfig value to allow unrestricted access, for example:
AclConfig: |
  pattern # allow all
Click Upgrade to apply the changes.
What’s next
You can find more information about the ACL configuration in the
VerneMQ documentation.
4.5.2 - Enable RBAC for the MQTT Broker
This page describes how to enable Role-Based Access Control (RBAC) for the
MQTT broker.
Enable RBAC
Open UMHLens / OpenLens
Navigate to Helm > Releases.
Select the united-manufacturing-hub release and click Upgrade.
Find the mqtt_broker section.
Locate the rbacEnabled parameter and change its value from false to true.
Click Upgrade.
Now all MQTT connections require password authentication with the following defaults:
Username: node-red
Password: INSECURE_INSECURE_INSECURE
Change default credentials
Open UMHLens / OpenLens
Navigate to Workloads > Pods.
Select the united-manufacturing-hub-hivemqce-0 Pod.
Click the Pod Shell button to open a shell in the container.
Navigate to the installation directory of the RBAC extension.
Replace <version> with the version of the HiveMQ CE extension. If you are
not sure which version is installed, you can press Tab after typing
java -jar hivemq-file-rbac-extension- to autocomplete the version.
Replace <password> with your desired password. Do not use any whitespaces.
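A sketch of the hash generation, with the extension path and jar name assumed from HiveMQ’s file-RBAC extension conventions:
cd extensions/hivemq-file-rbac-extension
java -jar hivemq-file-rbac-extension-<version>.jar -p <password>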
Copy the output of the command. It should look similar to this:
$2a$10$Q8ZQ8ZQ8ZQ8ZQ8ZQ8ZQ8Zu
Navigate to Config > ConfigMaps.
Select the united-manufacturing-hub-hivemqce-extension ConfigMap.
Click the Edit button to open the ConfigMap editor.
In the data.credentials.xml section, replace the strings between the
<password> tags with the password hash generated in step 7.
You can use a different password for each different microservice. Just
remember that you will need to update the configuration in each one
to use the new password.
Click Save to apply the changes.
Go back to Workloads > Pods and select the united-manufacturing-hub-hivemqce-0 Pod.
The Public Key Infrastructure for HiveMQ consists of two Java Key Stores (JKS):
Keystore: The Keystore contains the HiveMQ certificate and private keys.
This store must be confidential, since anyone with access to it could generate
valid client certificates and read or send messages in your MQTT infrastructure.
Truststore: The Truststore contains all the clients’ public certificates.
HiveMQ uses it to verify the authenticity of the connections.
Before you begin
You need to have the following tools installed:
OpenSSL. If you are using Windows, you can install it with
Chocolatey.
<password>: The password for the keystore. You can use any password you want.
<days>: The number of days the certificate should be valid.
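The original command is not preserved here; a sketch with keytool, where the alias, key size, and distinguished name are assumptions:
keytool -genkeypair -alias hivemq -keyalg RSA -keysize 4096 \
  -validity <days> -keystore hivemq.jks -storepass <password> \
  -dname "CN=united-manufacturing-hub-mqtt"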
The command runs for a few minutes and generates a file named hivemq.jks in
the current directory, which contains the HiveMQ certificate and private key.
If you want to explore the contents of the keystore, you can use
Keystore Explorer.
Generate client certificates
Open a terminal and create a directory for the client certificates:
mkdir pki
Follow these steps for each client you want to generate a certificate for.
You could also do it manually with the following command:
openssl base64 -A -in <filename> -out <filename>.b64
Now you can import the PKI into the United Manufacturing Hub. To do so:
Open UMHLens / OpenLens.
Navigate to Helm > Releases.
Select the united-manufacturing-hub release.
Click the Upgrade button.
Find the _000_commonConfig.infrastructure.mqtt.tls section.
Update the value of the keystoreBase64 field with the content of the
hivemq.jks.b64 file and the value of the keystorePassword field with the
password you used for the keystore.
Update the value of the truststoreBase64 field with the content of the
hivemq-trust-store.jks.b64 file and the value of the truststorePassword
field with the password you used for the truststore.
Update the value of the <servicename>.cert field with the content of the
<servicename>-cert.pem.b64 file and the value of the <servicename>.key field
with the content of the <servicename>-key.pem.b64 file.
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.14.
Welcome to United Manufacturing Hub version 0.9.14! In this release we changed
the Kafka broker from Apache Kafka to RedPanda, which is a Kafka-compatible
event streaming platform. We also started migrating to a different Kafka
library in our microservices, which will allow full ARM support in the future.
Finally, we tweaked the overall resource usage of the United Manufacturing Hub
to improve performance and efficiency, along with some bug fixes.
For a complete list of changes, refer to the
release notes.
RedPanda
RedPanda is a Kafka-compatible event streaming
platform. It is built with modern hardware in mind and utilizes multi-core CPUs
efficiently, which can result in better performance compared to Kafka. RedPanda
also offers lower latency and higher throughput, making it a better fit for
real-time use cases in IIoT applications. Additionally, RedPanda has a simpler
setup and management process compared to Kafka, which can save time and
resources for development teams. Finally, RedPanda is fully compatible with
Kafka’s API, allowing for a seamless transition for existing Kafka users.
Overall, RedPanda can provide improved performance and efficiency for IIoT
applications that require real-time data processing and management with a lower
setup and management cost.
Sarama Kafka Library
We started migrating our microservices to use the
Sarama Kafka library. This library is
written in Go and is fully compatible with RedPanda. This change will allow us
to support ARM-based devices in the future, which will be useful for edge
computing use cases. An added bonus is that Sarama is faster and requires less
memory than the previous library.
For now we only migrated the following microservices:
barcodereader
kafka-init (used as an init container for components that communicate with
Kafka)
mqtt-kafka-bridge
Resources tweaking
With this release we tweaked the resource requests of each default component
of the United Manufacturing Hub to respect the minimum requirements of 4 cores
and 8GB of RAM. This allowed us to increase the memory allocated for the MQTT
broker, which solved the common Out Of Memory issue that caused the
broker to restart.
Be sure to follow the upgrade guide
to adjust your resources accordingly.
The following table shows the new resource requests and limits when deploying
the United Manufacturing Hub with the default configuration or with all the
components enabled. CPU values are expressed in millicores and memory values
are expressed in mebibytes.
Resource                | Requests     | Limits
------------------------|--------------|-------------
CPU (default values)    | 1080m (27%)  | 1890m (47%)
Memory (default values) | 1650Mi (21%) | 2770Mi (35%)
CPU (all components)    | 2002m (50%)  | 2730m (68%)
Memory (all components) | 2873Mi (36%) | 3578Mi (45%)
The requested resources are the ones immediately allocated to the container
when it starts, and the limits are the maximum amount of resources that the
container can (but is not forced to) use. For more information about Kubernetes
resources, refer to the
official documentation.
Container registry
We moved our container registry from Docker Hub to GitHub Container Registry.
This change won’t affect the way you deploy the United Manufacturing Hub, but
it will allow us to better manage our container images and provide a better
experience for our developers. For the time being, we will continue to publish
our images to Docker Hub, but we will eventually deprecate the old registry.
Others
Implemented a new test build to detect race conditions in the codebase. This
will help us to improve the stability of the United Manufacturing Hub.
All our custom images now run as non-root by default, except for the ones that
require root privileges.
The custom microservices now allow changing the type of Service used to
expose them by setting the serviceType field.
Added an SQL trigger function that deletes duplicate records from the
statetable table after insertion.
Enhanced the environment variables validation in the codebase.
Added the possibility to set the aggregation interval when calculating the
throughput of an asset.
Various dependencies have been updated to their latest versions.
5.2 - What's New in Version 0.9.13
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.13.
Welcome to United Manufacturing Hub version 0.9.13! This is a minor release that
only updates the new metrics feature.
For a complete list of changes, refer to the
release notes.
5.3 - What's New in Version 0.9.12
This section contains information about the new features and changes in the
United Manufacturing Hub introduced in version 0.9.12.
Welcome to United Manufacturing Hub version 0.9.12! Read on to learn about
the new features of the UMH Datasource V2 plugin for Grafana, Redis running
in standalone mode, and more.
For a complete list of changes, refer to the
release notes.
Grafana
New Grafana version
Grafana has been upgraded to version 9.4.3. This introduces new search and
navigation features, a redesigned details section of the logs, and a new
data source connection page.
Node-RED
We have upgraded Node-RED to version 3.0.2.
Check out the Node-RED release notes for more information.
UMH Datasource V2 plugin
The latest update to the datasource has incorporated typesafe JSON parsing,
significantly enhancing the overall performance and dependability of the plugin.
This implementation ensures that the parsing process strictly adheres to predefined
data types, eliminating the possibility of unexpected errors or data corruption
that can occur with loosely-typed JSON parsing.
Redis in standalone mode
Redis, the service used for caching, is now deployed in standalone mode. This
change introduces these benefits:
Simplicity: Running Redis in standalone mode is simpler than using a
master-replica topology with Sentinel. With standalone mode, there is only one
Redis instance to manage, whereas with master-replica, you need to manage
multiple Redis instances and the Sentinel process. This simplicity can reduce
complexity and make it easier to manage Redis instances.
Lower Overhead: Standalone mode has lower overhead than using a master-replica
topology with Sentinel. In a master-replica topology, there is a communication
overhead between the master and the replicas, and Sentinel adds additional
overhead for monitoring and failover management. In contrast, standalone mode
does not have this overhead.
Better Performance: Since standalone mode does not have the overhead of
master-replica topology with Sentinel, it can provide better performance.
Standalone mode provides faster response times and can handle more requests
per second than a master-replica topology with Sentinel.
That being said, it’s important to note that a master-replica topology with
Sentinel provides higher availability and failover capabilities than standalone
mode.
All basic services are now exposed by a LoadBalancer Service
The MQTT Broker, Kafka Broker, and Kafka Console are now exposed by a
LoadBalancer Service, along with the Database, Grafana and Node-RED. This
change makes it easier to access these services from outside the cluster, as
they are now accessible via the IP address of the cluster.
When installing the United Manufacturing Hub locally, the cluster ports are
automatically mapped to the host ports. This means that you can access the
services from your browser by using localhost and the port number.
Read more about connecting to the services from outside the cluster in the
related documentation.
Metrics
We introduced an optional microservice that can be used to collect metrics
about the system, like OS, CPU, memory, hostname and average load. These metrics
are then sent to our server for analysis, and are completely anonymous. This
microservice is enabled by default, but can be disabled by setting the
_000_commonConfig.metrics.enabled value to false in the values.yaml file.
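A sketch of the equivalent Helm command, with release and chart names assumed:
helm upgrade united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub \
  -n united-manufacturing-hub --reuse-values \
  --set _000_commonConfig.metrics.enabled=false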