The OSS blueprint for the Industrial IoT

The United Manufacturing Hub is an Open-Source Helm Chart for Kubernetes, which combines state-of-the-art IT / OT tools & technologies and brings them into the hands of the engineer.

Bringing the world's best IT and OT tools into the hands of the engineer

Why start from scratch when you can leverage a proven open-source blueprint? Kafka, MQTT, Node-RED, TimescaleDB and Grafana at the press of a button - tailored for manufacturing and ready to go.



What can you do with it?


Everything That You Need To Do To Generate Value On The Shopfloor

Prevent Vendor Lock-In and Customize to Your Needs

  • The only requirement is Kubernetes, which is available in various flavors, including k3s, bare-metal k8s, and Kubernetes-as-a-service offerings like AWS EKS or Azure AKS
  • Swap components with other options at any time. Not a fan of Node-RED? Replace it with Kepware. Prefer a different MQTT broker? Use it!
  • Leverage existing systems and add only what you need.

Get Started Immediately

Connect with Like-Minded People

  • Tap into our community of experts and ask anything. No need to depend on external consultants or system integrators.
  • Leverage community content, from tutorials and Node-RED flows to Grafana dashboards. Although not all content is enterprise-supported, starting with a working solution saves you time and resources.
  • Get honest answers in a world where many companies spend millions on advertising.

How does it work?

The only requirement is a Kubernetes cluster (and we'll even help you with that!). Simply install the United Manufacturing Hub Helm Chart on that cluster and configure it.

The United Manufacturing Hub will then generate all the required files for Kubernetes, including auto-generated secrets, various microservices like bridges between MQTT / Kafka, datamodels and configurations. From there on, Kubernetes will take care of all the container management.



FAQ

Yes - the United Manufacturing Hub specifically targets people and companies that do not have the budget and/or knowledge to develop everything from scratch on their own.

With our extensive documentation, guides and knowledge sections you can learn everything that you need.

The United Manufacturing Hub abstracts these tools and technologies so that you can leverage all advantages, but still focus on what really matters: digitizing your production.

With our commercial Management Console you can manage your entire IT / OT infrastructure and work with Grafana / Node-RED without the need to ever touch or understand Kubernetes, Docker, Firewalls, Networking or similar.

Additionally, you can get support licenses providing unlimited support during development and maintenance of the system. Take a look at our website if you want more information on this.

Because very often these solutions do not target the actual pains of an engineer: implementation and maintenance. Companies then struggle to roll out IIoT as the projects take much longer and cost far more than originally proposed.

In the United Manufacturing Hub, implementation and maintenance of the system are the first priority. We've had these pains too often ourselves and therefore incorporated and developed tools & technologies to avoid them.

For example, with sensorconnect we can retrofit production machines where it is currently impossible to extract data. With our modular architecture, we can fit the security needs of all IT departments - from integration into a demilitarized zone to on-premise and private cloud. And with Apache Kafka, we solve the pain of corrupted or missing messages when scaling out the system.

How to proceed?

1 - Get Started!

You want to get started right away? Go ahead and jump into the action!

We are glad that you want to start setting up right away! This guide is divided into five steps: Installation, Managing the System, Data Acquisition & Manipulation, Data Visualization, and Moving to Production.

Contact Us!

Do you still have questions on how to get started? Message us on our Discord Server or submit a support ticket through the question mark in the lower right corner of the website.

1.1 - 1. Installation

Installing the United Manufacturing Hub using the Management Console

The United Manufacturing Hub can be installed locally or on an edge device, depending on your needs. For simple tinkering and development, we recommend installing it locally using our Management Console.

If you prefer an open-source approach, we also provide instructions for using k3d.

We’ve put together a comprehensive guide on how to install the UMH locally on your computer using our Management Console. The Management Console is a desktop application that allows you to set up, configure and maintain your IT / OT infrastructure - regardless of whether it is deployed as a test instance on the same device as the Management Console, on an edge device, on an on-premise server, or in the cloud.

To access the documentation, simply click on the button below.

Management Console

Please note that the Management Console is currently available for Windows only and only allows setting up test instances on the same device. If you are using Linux or Mac, please look into the production guides for OS-specific installation tutorials.

What’s next?

Once you’ve completed the installation process, we’ll guide you through accessing the microservices using UMHLens. To learn more click here.

1.2 - 2. Managing the System

Basics of UMHLens and importing Node-RED and Grafana flows

In this chapter, we’ll guide you through connecting to the Kubernetes cluster using UMHLens. Then, we’ll walk you through importing a Node-RED and a Grafana flow to help you visualize how data flows through the stack.

1. Connect to UMH

  1. Download & install UMHLens here.

  2. If you installed the UMH using the management console, you should see a cluster named “k3d-united-manufacturing-hub” under Browse. Click on it to connect.

  3. You can check the status of all pods by navigating to Workloads -> Pods and selecting united-manufacturing-hub as the namespace on the top right. Depending on your system, it may take a while for all pods to start.

  4. To access the web interfaces of the microservices, e.g. Node-RED or Grafana, navigate to Network -> Services on the left-hand side. Again, make sure to change the namespace to united-manufacturing-hub at the top right.

  5. Click on the appropriate service you wish to connect to, scroll down to Connection and forward the port.

2. Import flows to Node-RED

  1. Access the Node-RED Web UI. To do this, click on the service and forward the port as shown above. When the UI opens in the browser, add nodered to the URL to avoid the “Cannot GET” error.

  2. Once you are in the web interface, click on the three lines in the upper right corner and select Import.

  3. Now copy this json file and paste it into the import field. Then press Import.

  4. To activate the imported flow, simply click on the Deploy button located at the top right of the screen. If everything is working as expected, you should see green dots above the input and output. Once you’ve confirmed that the data is flowing correctly, you can proceed to display it in Grafana.

3. Import flows to Grafana & view dashboard

  1. Go into UMHLens and forward the Grafana service as you did with Node-RED. To log in, you need the Grafana secrets, which you can find in UMHLens under Config -> Secrets -> Grafana-Secret. Click on the eye to display the username and password and enter them in Grafana.

  2. Once you are logged in, click on Dashboards on the left and select Import. Now copy this Grafana json and paste it into Import via panel json. Then click on Load. You will then be redirected to Options where you need to select the umh-v2-datasource. Finally, click on Import.

  3. If everything is working properly, you should now see a functional dashboard with a temperature curve.

What’s next?

Next, you can create a Node-RED flow yourself and then learn how to create a dashboard in Grafana. Click here to proceed.

1.3 - 3. Data Acquisition and Manipulation

Formatting raw data into the UMH data model using Node-RED.

The United Manufacturing Hub ships with several simulators that simulate different data types/protocols such as MQTT, PackML or OPC/UA. In this chapter, we will take the simulated MQTT data and show you how to format it into the UMH data model.

Creating Node-RED flow with simulated MQTT-Data

  1. Access the Node-RED Web UI. To do this, click on the service and forward the port as shown in the previous chapter. Once the UI opens in the browser, add nodered to the URL to avoid the “Cannot GET” error.

  2. From the left-hand column, drag a mqtt-in node, a mqtt-out node, and a debug node into your flow.

  3. Connect the mqtt-in node to the debug node.

  4. Double-click on the mqtt-in node and add a new MQTT broker. To do so, click on Edit and use the service name of HiveMQ as the host (located in UMHLens under services -> name). Leave the port as autoconfigured and click on Add to save your changes.

  5. To view all incoming messages from a specific topic, type ia/# under Topic and click on Done.

  6. To apply the changes, click on Deploy located at the top right of the screen. Once the changes have been deployed, you can view the debug information by clicking on Debug-Messages located under Deploy.

  7. In this column, you can view all incoming messages and their respective topics. The incoming topics follow this format: ia/raw/development/ioTSensors/. For the purpose of this tutorial, we will be using only the temperature topic, but feel free to choose any topic you’d like. To proceed, copy the temperature topic (ia/raw/development/ioTSensors/Temperature), open the mqtt-in node, paste the copied topic in the Topic field, click on Done, and then press Deploy again to apply the changes.

  8. To format the incoming message, add a JSON node and a Function node to your flow. Connect the nodes in the following order: mqtt-in → JSON → Function → mqtt-out.

  9. Open the function node and paste in the following:

    msg.payload = {
        "timestamp_ms": Date.now(),
        "temperature": parseFloat(msg.payload)
    };
    msg.topic = "ia/factoryinsight/Aachen/testing/processValue";
    return msg;
    

We are creating a new object with two keys, timestamp_ms and temperature, and their corresponding values Date.now() and parseFloat(msg.payload). The parseFloat function converts the incoming string into a float, and Date.now() creates a timestamp in milliseconds. We also set msg.topic for the mqtt-out node, which will automatically apply this topic. The topic ends with the key processValue, which is used whenever a custom process value with a unique name has been prepared; the value is numerical. You can learn more about our message structure here.
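
For reference, a message produced by this flow looks roughly like this (the timestamp value is illustrative):

    Topic:   ia/factoryinsight/Aachen/testing/processValue
    Payload: {"timestamp_ms": 1680000000000, "temperature": 23.5}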

  10. Add another mqtt-in node to your flow, and set the topic to ia/factoryinsight/Aachen/testing/processValue. Make sure to select the previously created broker. Connect a debug node to the new mqtt-in node, and then click on Deploy to save the changes.

  11. You should now see the converted message under Debug-messages. To clear any previous messages, click on the trash bin icon.

  12. Congratulations, you have successfully converted the incoming message and exported it via MQTT. However, since we are currently only exporting the temperature without actually working with the data, let’s create a function that counts the critical temperature exceedances.

  13. Drag another function node into your flow, open it and navigate to On Start.

  14. Paste in the following code, which will only run on start:

    flow.set("count", 0);
    flow.set("current", 0)
    
  15. Then click on On Message, paste in the following, and click Done:

    flow.set("current",msg.payload);
    if (flow.get("current")>47){
        flow.set("count", flow.get("count")+1);
        msg.payload = {"TemperatureWarning":flow.get("count"),"timestamp_ms":Date.now()}
        msg.topic = "ia/factoryinsight/Aachen/testing/processValue";
        return msg;
    }
    

    The pasted-in code keeps a running count of how often the incoming temperature exceeds 47 and publishes a TemperatureWarning message each time the threshold is crossed.

  16. Finally, connect the new function node in parallel to the first one (from the JSON node into the mqtt-out node) and click on Deploy.

  17. If the incoming temperature value now exceeds 47, you will see an additional message consisting of TemperatureWarning and a timestamp under Debug-messages.

What’s next?

In the next chapter we will use Grafana to display the formatted data. Click here to proceed.

1.4 - 4. Data Visualization

Building a simple Grafana dashboard

The next step is to visualize the data. In this chapter, we will be creating a Grafana dashboard that is based on the Node-RED flow we created in the previous chapter. The dashboard will display the temperature readings and temperature warnings.

Creating a Grafana dashboard

  1. Open Grafana via UMHLens and log in with the secrets, which can also be found in UMHLens.

  2. Once you are in Grafana, navigate to the left and click on New dashboard.

  3. Click on Add a new panel.

  4. Next, we will configure the umh-v2 datasource to retrieve the data we transformed earlier in Node-RED. Click on umh-v2-datasource.

  5. Go to Work cell to query and select under Select new work cell: factoryinsight->Aachen->DefaultArea->DefaultProductionLine->testing

  6. Next go to Value to query and select under Select new value: tags->custom->temperature.

  7. If you now click on Refresh Dashboard at the top right-hand corner, the graph will refresh and display the temperature data.

  8. Next, you can customize your dashboard. On the right side are several options, such as specifying a unit or setting thresholds, etc. Just play around until it suits your needs.

  9. When you have finished making adjustments, click Apply in the top right-hand corner to save the panel and return to the overview.

  10. Next we will display the temperature warnings. Click Add Panel at the top right to create an additional panel.

  11. To set up the umh-v2 data source, repeat the steps discussed earlier, but select under Value to query: TemperatureWarning instead of temperature.

  12. Instead of a time-series chart to display the temperature warnings, we select Stat on the right side.

  13. Now you can again customize your panel and when you are done click on Apply.

  14. Congratulations, you have created your first Grafana dashboard, and it should look something like the one below.

What’s next?

The next topic is “Moving to Production”, where we will explain what it means to move the UMH to a manufacturing environment. Click here to proceed.

1.5 - 5. Moving to Production

Moving the United Manufacturing Hub to production

The next big step is to use the UMH on a virtual machine or an edge device in your production and connect your production assets. However, we understand that you might want to understand a little bit more about the United Manufacturing Hub first. So, you can either read more about it, deep-dive into your local installation, or continue with the deployment in production.

Check out our community

We are quite active on GitHub and Discord. Feel free to join, introduce yourself and share your best-practices and experiences.

Learn more about the United Manufacturing Hub

If you’d like to read more about its features and architecture, check out the following chapters:

  • Features to understand the capabilities of the United Manufacturing Hub and learn how to use them
  • Architecture to learn what is behind the United Manufacturing Hub and how everything works together

If reading is not your thing, you can always …

Play around with it locally

If you want to experiment locally, we recommend trying out the following topics.

Grafana Canvas

If you’re interested in creating visually appealing Grafana dashboards, you might want to try Grafana-Canvas. In our previous blog article, we explained why Grafana-Canvas is a valuable addition to your standard Grafana dashboard. If you’d like to learn how to build one, check out our tutorial.

OPC/UA-Simulator

If you want to get a good overview of how the OPC/UA protocol works and how to connect it to the UMH, the OPC/UA-simulator is a useful tool. Detailed instructions can be found in this guide.

PackML-Simulator

For those looking to get started with PackML, the PackML Simulator is another helpful simulator. Check out our tutorial on how to create a Node-RED flow with PackML data.

Benthos

Benthos is a highly scalable data manipulation and IT connection tool. If you’re interested in learning more about it, check out our tutorial.

Kepware

At times, you may need to connect different, older protocols. In such cases, KepwareServerEx can help bridge the gap between these older protocols and the UMH. If you’re interested in learning more, check out our tutorial.

Deployment to production

Ready to go to production? Go install it!

Follow our step-by-step tutorial on how to install the UMH on an edge device or a virtual machine using Flatcar. We’ve also written a blog article explaining why we use Flatcar as the operating system for the industrial IoT, which you can find here.

Make sure to check out our advanced production guides, which include detailed instructions on how to secure your setup and how to best integrate with your infrastructure.

2 - Features

Do you want to understand the capabilities of the United Manufacturing Hub, but do not want to get lost in technical architecture diagrams? Here you can find all the features explained in a few pages.

2.1 - Unified Namespace / Message Broker

Exchange events and messages across all your shopfloor equipment, IT / OT systems such as ERP or MES and microservices.

The Unified Namespace is an event-driven architecture that allows for seamless communication between nodes in a network. It operates on the principle that all data, regardless of whether there is an immediate consumer, should be published and made available for consumption. This means that any node in the network can work as either a producer or a consumer, depending on the needs of the system at any given time.

To use any functionalities of the United Manufacturing Hub, you need to use the Unified Namespace as well. More information can be found in our Learning Hub on the topic of Unified Namespace.

When should I use it?

An application always consists of multiple building blocks. To connect those building blocks, one can exchange data between them through databases, through service calls (such as REST), or through a message broker.

Opinion: We think for most applications in manufacturing, communication via a message broker is the best choice as it prevents spaghetti diagrams and allows for real-time data processing. For more information about this, you can check out this blog article.

In the United Manufacturing Hub, every single piece of information / “message” / “event” is sent through a message broker, which is also called the Unified Namespace.

What can I do with it?

The Unified Namespace / Message Broker in the United Manufacturing Hub provides several notable functionalities in addition to the features already mentioned:

  • Easy integration using MQTT: Many modern shopfloor devices can send and receive data using the MQTT protocol.
  • Easy integration with legacy equipment: Using tools like Node-RED, data can be easily extracted from various protocols such as Siemens S7, OPC-UA, or Modbus.
  • Get notified in real-time via MQTT: The Unified Namespace allows you to receive real-time notifications via MQTT when new messages are published. This can be useful for applications that require near real-time processing of data, such as an AGV waiting for new commands.
  • Retrieve past messages from Kafka logs: By looking into the Kafka logs, you can always be aware of the last messages that have been sent to a topic. This allows you to replay certain scenarios for troubleshooting or testing purposes.
  • Efficiently process messages from millions of devices: The Unified Namespace is designed to handle messages from millions of devices in your factory, even over unreliable connections. By using Kafka, each message is processed with at-least-once semantics, ensuring that it arrives one or more times.
  • Trace messages through the system: The Unified Namespace provides tracing capabilities, allowing you to understand where messages are coming from and where they go. This can be useful for debugging and troubleshooting purposes. You can use the Management Console to visualize the flow of messages through the system.

How can I use it?

Using the Unified Namespace is quite simple:

Configure your IoT devices and devices on the shopfloor to use the built-in MQTT broker of the United Manufacturing Hub: specify the MQTT protocol, select the unencrypted (1883) or encrypted (8883) port depending on your configuration, and send the messages into a topic starting with ia/raw. From there, you can start processing the messages in Node-RED by reading them in again via MQTT or Kafka, adjusting the payload or the topic to match the UMH datamodel, and sending them back to MQTT or Kafka.
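
For illustration, here is a minimal sketch of publishing a raw value into the Unified Namespace using the Node.js mqtt package (host, topic and payload are illustrative assumptions; adjust them to your setup):

    // minimal sketch: publish a raw sensor value to the UMH MQTT broker
    // assumes the broker is reachable at localhost on the unencrypted port 1883
    const mqtt = require("mqtt");
    const client = mqtt.connect("mqtt://localhost:1883");

    client.on("connect", () => {
        // raw values belong in a topic starting with ia/raw
        client.publish("ia/raw/development/ioTSensors/Temperature", "23.5", () => client.end());
    });

From there, a Node-RED flow like the one in the Getting Started guide can pick the value up, contextualize it, and send it back to MQTT or Kafka.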

If you send the messages into other topics, some features might not work correctly (see also limitations).

Recommendation: Send messages from IoT devices via MQTT and then work in Kafka only.

What are the limitations?

  • Messages are only bridged between MQTT and Kafka if they fulfill the following requirements:
    • the payload is valid JSON, OR the message is sent to the ia/raw topic
    • the message is sent only to topics matching the allowed topics in the UMH datamodel, independent of what is configured in the environment variables (this will be changed soon)
    • the topic is at most 249 characters long, as this is a Kafka limitation
    • only the following characters are allowed in the topic: a-z, A-Z, _ and -
    • the maximum message size for the mqtt-kafka-bridge is 0.95 MB (1,000,000 bytes). If your messages are larger, we recommend using Kafka directly instead of bridging via MQTT.
  • Messages from MQTT to Kafka will be published under a translated topic:
    • spaces will be removed
    • / characters will be replaced with a .
    • and vice versa; for example, ia/raw/development becomes ia.raw.development
  • By default, there is no authorization or authentication on the MQTT broker. You need to enable authentication and authorization yourself.
  • The MQTT or Kafka broker is not exposed externally by default. You need to enable external MQTT access first, or alternatively expose Kafka externally.

Where to get more information?

2.2 - Historian / Data Storage

Learn how the United Manufacturing Hub’s Historian feature provides reliable data storage and analysis for your manufacturing data.

The Historian / Data Storage feature in the United Manufacturing Hub provides reliable data storage and analysis for your manufacturing data. Essentially, a Historian is just another term for a data storage system, designed specifically for time-series data in manufacturing.

When should I use it?

If you want to reliably store data from your shop floor that does not need to fulfill legal requirements such as GxP, the United Manufacturing Hub’s Historian feature is ideal. Open-source databases such as TimescaleDB are superior to traditional historians in terms of reliability, scalability and maintainability, but can be challenging for the OT engineer to use. The United Manufacturing Hub fills this usability gap, allowing OT engineers to easily ingest, process, and store data permanently in an open-source database.

What can I do with it?

The Historian / Data Storage feature of the United Manufacturing Hub allows you to:

Store and analyze data

  • Automatically store data from the processValue topics in the Unified Namespace. Data can be sent to the Unified Namespace from various sources, allowing you to store tags from your PLC and production lines reliably.
  • Conduct basic data analysis, including automatic downsampling, gap filling, and statistical functions such as Min, Max, and Avg.

Query and visualize data

  • Query data in an ISA95 model, from enterprise to site, area, production line, and work cell.
  • Visualize your data in Grafana to easily monitor and troubleshoot your production processes.

More information about the exact analytics functionalities can be found in the umh-datasource-v2 documentation.

Efficiently manage data

  • Compress and retain data to reduce database size using various techniques.

How can I use it?

Convert your data in your Unified Namespace to processValue messages, and the Historian feature will store them automatically. You can then view the data in Grafana. An example can be found in the Getting Started guide.
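
For example, a message like the following (topic and fields as used in the Getting Started guide; values illustrative) is picked up and stored automatically:

    Topic:   ia/factoryinsight/Aachen/testing/processValue
    Payload: {"timestamp_ms": 1680000000000, "temperature": 23.5}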

For more information about what exactly is behind the Historian feature, check out our architecture page.

What are the limitations?

Despite its limitations, the United Manufacturing Hub’s Historian feature is highly performant compared to legacy Historians.

Where to get more information?

2.3 - Shopfloor KPIs / Analytics

The Shopfloor KPI/Analytics feature of the United Manufacturing Hub provides equipment-based KPIs, configurable dashboards, and detailed analytics for production transparency. Configure OEE calculation and track root causes of low OEE using drill-downs. Easily ingest, process, and analyze data in Grafana.

The Shopfloor KPI / Analytics feature of the United Manufacturing Hub provides a configurable and plug-and-play approach to create “Shopfloor Dashboards” for production transparency consisting of various KPIs and drill-downs.

More examples can be found in this YouTube video and in our community-repo on GitHub.

When should I use it?

If you want to create production dashboards that are highly configurable and can drill down into specific KPIs, the Shopfloor KPI / Analytics feature of the United Manufacturing Hub is an ideal choice. This feature is designed to help you quickly and easily create dashboards that provide a clear view of your shop floor performance.

What can I do with it?

The Shopfloor KPI / Analytics feature of the United Manufacturing Hub allows you to:

Query and visualize

In Grafana, you can:

  • Calculate the OEE (Overall Equipment Effectiveness) and view trends over time (a short worked example follows this list)
    • Availability is calculated using the formula (plannedTime - stopTime) / plannedTime, where plannedTime is the duration of time for all machine states that do not belong in the Availability or Performance category, and stopTime is the duration of all machine states configured to be an availability stop.
    • Performance is calculated using the formula runningTime / (runningTime + stopTime), where runningTime is the duration of all machine states that consider the machine to be running, and stopTime is the duration of all machine states that are considered a performance loss. Note that this formula does not take into account losses caused by letting the machine run at a lower speed than possible. To approximate this, you can use the LowSpeedThresholdInPcsPerHour configuration option (see further below).
    • Quality is calculated using the formula good pieces / total pieces.
  • Drill down into stop reasons (including histograms) to identify the root-causes for a potentially low OEE.
  • List all produced and planned orders including target vs actual produced pieces, total production time, stop reasons per order, and more using job and product tables.
  • See machine states, shifts, and orders on timelines to get a clear view of what happened during a specific time range.
  • View production speed and produced pieces over time.
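
As a worked example of the formulas above (all numbers illustrative): assume a shift with 480 minutes of planned time, 60 minutes of availability stops, 390 minutes of running time, 30 minutes of performance stops, and 950 good pieces out of 1000 produced.

    // illustrative OEE calculation following the formulas above
    const availability = (480 - 60) / 480;            // 0.875
    const performance  = 390 / (390 + 30);            // ≈ 0.929
    const quality      = 950 / 1000;                  // 0.95
    const oee = availability * performance * quality; // ≈ 0.77, i.e. about 77 %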

Configure

In the database, you can configure:

  • Stop Reasons Configuration: Configure which stop reasons belong in which category for the OEE calculation and whether they should be included in the OEE calculation at all. For instance, some companies define changeovers as availability losses, others as performance losses. You can easily move them into the correct category.
  • Automatic Detection and Classification: Configure whether to automatically detect/classify certain types of machine states and stops:
    • AutomaticallyIdentifyChangeovers: If the machine state was an unspecified machine stop (UnknownStop), but an order was recently started, the time between the start of the order until the machine state turns to running, will be considered a Changeover Preparation State (10010). If this happens at the end of the order, it will be a Changeover Post-processing State (10020).
    • MicrostopDurationInSeconds: If an unspecified stop (UnknownStop) has a duration smaller than a configurable threshold (e.g., 120 seconds), it will be considered a Microstop State (50000) instead. Some companies put small unknown stops into a different category (performance) than larger unknown stops, which usually end up in the availability loss bucket.
    • IgnoreMicrostopUnderThisDurationInSeconds: In some cases, the machine can actually stop for a couple of seconds in routine intervals, which might be unwanted as it makes analysis difficult. One can set a threshold to ignore microstops that are smaller than a configurable duration (usually 1-2 seconds).
    • MinimumRunningTimeInSeconds: The same logic applies if the machine is running for only a couple of seconds. With this configurable threshold, small run-times can be ignored. These can happen, for example, during the changeover phase.
    • ThresholdForNoShiftsConsideredBreakInSeconds: If no shift was planned, an UnknownStop will always be classified as a NoShift state. Some companies move smaller NoShift states into their own category called “Break” and move them either into Availability or Performance.
    • LowSpeedThresholdInPcsPerHour: For a simplified performance calculation, a threshold can be set, and if the machine has a lower speed than this, it could be considered a LowSpeedState and could be categorized into the performance loss bucket.
  • Language Configuration: The language of the machine states can be configured using the languageCode configuration option (or overwritten in Grafana).

You can find the configuration options in the configurationTable.

How can I use it?

Using it is very easy:

  1. Send messages according to the UMH datamodel to the Unified Namespace (similar to the Historian feature)
  2. Configure your OEE calculation by adjusting the configuration table
  3. Open Grafana, select your equipment and select the analysis you want. More information can be found in the umh-datasource-v2.

For more information about what exactly is behind the Analytics feature, check out our architecture page and our datamodel.

What are the limitations?

At the moment, the limitations are:

  • Speed losses in Performance are not calculated and can only be approximated using the LowSpeedThresholdInPcsPerHour configuration option
  • There is no way of tracking losses through reworked products. Either a product is scrapped or not.

Where to get more information?

2.4 - Data connectivity with Node-RED

Connect devices on the shop floor using Node-RED with United Manufacturing Hub’s Unified Namespace. Simplify data integration across PLCs, Quality Stations, and MES/ERP systems with a user-friendly UI.

One feature of the United Manufacturing Hub is to connect devices on the shopfloor such as PLCs, Quality Stations or MES / ERP systems with the Unified Namespace using Node-RED. Node-RED has a large library of nodes, which lets you connect various protocols. It also has a user-friendly, low-code UI, making it easy to configure the desired nodes.

When should I use it?

Sometimes it is necessary to connect a lot of different protocols (e.g. Siemens S7, OPC-UA, Serial, …) and Node-RED can be a maintainable solution to connect all these protocols without the need for other data connectivity tools. Node-RED is widely known in the IT/OT community, making it a familiar tool for a lot of users.

What can I do with it?

By default, there are connector nodes for common protocols:

  • connect to MQTT using the MQTT node
  • connect to HTTP using the HTTP node
  • connect to TCP using the TCP node
  • connect to IP using the UDP node

Furthermore, you can install packages to support more connection protocols.

You can additionally contextualize the data, using function nodes or other nodes to manipulate the received data.

How can I use it?

Node-RED comes preinstalled as a microservice with the United Manufacturing Hub.

  1. To access Node-RED, navigate to Network -> Services on the left-hand side in UMHLens. You can download UMHLens / OpenLens here.

  2. On the top right, change the Namespace from default to united-manufacturing-hub.

  3. Click on united-manufacturing-hub-nodered-service, scroll down to Connection and forward the port.

  4. Once Node-RED opens in the browser, add nodered to the URL to avoid the “Cannot GET” error.

  5. Begin exploring right away! If you require inspiration on where to start, we provide a variety of guides to help you become familiar with various Node-RED workflows, including how to process data and align it with the UMH datamodel.

What are the limitations?

  • Most packages have no enterprise support. If you encounter any errors, you need to ask the community. However, we found that these packages are often more stable than the commercial ones out there, as they have been battle-tested by far more users than commercial software.
  • Having many flows without following a strict structure generally leads to confusion.
  • One additional limitation is the speed of development of Node-RED: after a major Node-RED or JavaScript update, dependencies are likely to break, and the individual community-maintained nodes need to be updated.

Where to get more information?

2.5 - Retrofitting with ifm IO-link master and sensorconnect

Upgrade older machines with ifm IO-Link master and Sensorconnect for seamless data collection and integration. Retrofit your shop floor with plug-and-play sensors for valuable insights and improved efficiency.

Retrofitting older machines with sensors is sometimes the only way to capture process-relevant information. In this article, we will focus on retrofitting with the ifm IO-Link master and Sensorconnect, a microservice of the United Manufacturing Hub that finds and reads out ifm IO-Link masters in the network and pushes sensor data to MQTT/Kafka for further processing.

When should I use it?

Retrofitting with an ifm IO-Link master such as the AL1350 and using Sensorconnect is ideal when dealing with older machines that are not equipped with any connectable hardware to read relevant information out of the machine itself. By placing sensors on the machine and connecting them with an IO-Link master, the required information can be gathered for valuable insights. Sensorconnect helps to easily connect to all sensors correctly and properly capture the large amount of sensor data provided.

What can I do with it?

With ifm IO-Link master and Sensorconnect, you can collect data from sensors and make it accessible for further use. Sensorconnect offers:

  • Automatic detection of ifm IO-Link masters in the network.
  • Identification of IO-Link and alternative digital or analog sensors connected to the master using converters such as the DP2200. Digital sensors employ a voltage range from 10 to 30 V DC, producing binary outputs of true or false. In contrast, analog sensors operate at 24 V DC, with a current range spanning from 4 to 20 mA. Using the appropriate converter, analog outputs can be effectively transformed into digital signals.
  • Constant polling of data from the detected sensors.
  • Interpreting the received data based on a sensor database containing thousands of entries.
  • Sending data in JSON format to MQTT and Kafka for further data processing.

How can I use it?

To use ifm IO-Link gateways and Sensorconnect, please follow these instructions:

  1. Ensure all IO-Link gateways are in the same network or accessible from your instance of the United Manufacturing Hub.
  2. Retrofit the machines by connecting the desired sensors and establish a connection with ifm IO-Link gateways.
  3. Configure the Sensorconnect IP range to either match the IP address using subnet notation /32, or, in cases involving multiple masters, configure it to scan an entire range, for example /24 (see the sketch after this list). To change the value, go to the Customize the United Manufacturing Hub section.
  4. Once completed, the data should be available in your Unified Namespace.
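
As an illustration for step 3, the IP range is set in the values.yaml of the Helm Chart. The key names below are an assumption; verify them against the Customize the United Manufacturing Hub section before use:

    _000_commonConfig:
      datasources:
        sensorconnect:
          enabled: true
          iprange: 192.168.10.0/24   # /32 for a single master, /24 to scan a whole subnet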

What are the limitations?

  • The current ifm firmware has a software bug that causes the IO-Link master to crash if it receives too many requests. To resolve this issue, you can either request an experimental firmware, which is available exclusively from ifm, or power-cycle the IO-Link gateway.

Where to get more information?

2.6 - Retrofitting with USB barcodereader

Integrate USB barcode scanners with United Manufacturing Hub’s barcodereader microservice for seamless data publishing to Unified Namespace. Ideal for inventory, order processing, and quality testing stations.

The barcodereader microservice enables the processing of barcodes from USB-linked scanner devices, subsequently publishing the acquired data to the Unified Namespace.

When should I use it?

When you need to connect a barcode reader or any other USB device acting as a keyboard (HID). Typical cases are scanning an order at the production machine from the accompanying order sheet, or scanning material for inventory and track-and-trace.

What can I do with it?

You can connect USB devices acting as a keyboard to the Unified Namespace. The microservice records all inputs and sends them out once a return / enter character has been detected. A lot of barcode scanners work that way. Additionally, you can also connect something like a quality testing station (we once connected a Mitutoyo quality testing station).

How can I use it?

To use the barcodereader microservice, you need to configure the Helm Chart and enable it.

  1. Enable _000_commonConfig.datasources.barcodereader.enabled in the Helm Chart
  2. During startup, it will show all connected USB devices. Remember yours and then change the INPUT_DEVICE_NAME and INPUT_DEVICE_PATH.
  3. Also set ASSET_ID, CUSTOMER_ID, etc. as this will then send it into the topic ia/ASSET_ID/…/barcode
  4. Restart the pod
  5. Scan a device, and it will be written into the topic xxx

Once installed, you can configure the microservice by setting the needed environment variables. The program will continuously scan for barcodes using the device and publish the data to the Kafka topic.
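
As a sketch, the Helm Chart part of this configuration could look like the following in values.yaml (only the enabled flag is documented above; the remaining settings are made via the environment variables from the steps):

    _000_commonConfig:
      datasources:
        barcodereader:
          enabled: true
          # then set INPUT_DEVICE_NAME, INPUT_DEVICE_PATH, ASSET_ID,
          # CUSTOMER_ID, etc. as described in the steps above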

What are the limitations?

  • Sometimes special characters are not parsed correctly. They need to be adjusted afterward in the Unified Namespace.

Where to get more information?

2.7 - Alerting

Monitor and maintain your manufacturing processes with real-time Grafana alerts from the United Manufacturing Hub. Get notified of potential issues and reduce downtime by proactively addressing problems.

The United Manufacturing Hub utilizes a TimescaleDB database, which is based on PostgreSQL. Therefore, you can use the PostgreSQL plugin in Grafana to implement and configure alerts and notifications.

Why should I use it?

Alerts based on real-time data enable proactive problem detection. For example, you will receive a notification if the temperature of machine oil or an electrical component of a production line exceeds limitations. By utilizing such alerts, you can schedule maintenance, enhance efficiency, and reduce downtime in your factories.

What can I do with it?

Grafana alerts help you keep an eye on your production and manufacturing processes. By setting up alerts, you can quickly identify problems, ensuring smooth operations and high-quality products. An example of using alerts is tracking the temperature of an industrial oven: if the temperature goes too high or too low, you will get an alert, and the responsible team can take action before any damage occurs.

Alerts can be configured in many different ways, for example, to set off an alarm if a maximum is reached once or if it exceeds a limit when averaged over a time period. It is also possible to include several values to create an alert, for example if a temperature surpasses a limit and/or the concentration of a component is too low.

Notifications can be sent simultaneously across many services like Discord, Mail, Slack, Webhook, Telegram, or Microsoft Teams. It is also possible to forward the alert via SMS over a personal Webhook. A complete list can be found on the Grafana page about alerting.

How can I use it?

For a detailed tutorial on how to set up an alert, please visit our learn page with the detailed step-by-step tutorial. Here you can find an overview of the process.

  1. Install the PostgreSQL plugin in Grafana: Before you can formulate alerts, you need to install the PostgreSQL plugin, which is already integrated into Grafana.

  2. Alert Rule: When creating an alert, you first have to set the alert rule in Grafana. Here you set a name, specify which values are used for the rule, and when the rule is fired. Additionally, you can add labels for your rules, to link them to the correct contact points. You have to use SQL to select the desired values.

  3. Contact Point: In a contact point you create a collection of addresses and services that should be notified in case of an alert. This could be a Discord channel or Slack for example. When a linked alert is triggered, everyone within the contact point receives a message. The messages can be preconfigured and are specific to every service or contact.

  4. Notification Policies: In a notification policy, you establish the connection of a contact point with the desired alerts. This is done by adding the labels of the desired alerts and the contact point to the policy.

  5. Mute Timing: In case you do not want to receive messages during a recurring time period, you can add a mute timing to Grafana. If added to the notification policy, no notifications will be sent out by the contact point. This could be times without shifts, like weekends or during regular maintenance.

  6. Silence: You can also add silences for a specific time frame and labels, in case you only want to mute alerts once.

An alert is only sent out once after being triggered. Before the next alert can fire, the data has to return to the normal state, so that it no longer violates the rule.

What are the limitations?

It can be complicated to select and manipulate the desired values to create the correct function for your application. Grafana cannot differentiate between data points of the same source. For example, suppose you want a temperature threshold based on a single sensor: if your query selects the last three values and two of them are above the threshold, Grafana will fire two alerts that it cannot tell apart, which results in errors. To avoid this, you have to configure the rule to reduce the selected values to only one per source. Creating such a specific rule under this limitation can be complicated and requires some testing.

Another thing to keep in mind is that the alerts can only work with data from the database. It also does not work with the machine status; these values only exist in a raw, unprocessed form in TimescaleDB and are not processed through an API like process values.

Where to get more information?

3 - Architecture

A detailed view of the architecture of the UMH stack.

The United Manufacturing Hub at its core is a Helm Chart for Kubernetes consisting of several microservices and open-source third-party applications, such as Node-RED and Grafana. This Helm Chart can be deployed in various environments, from edge devices and virtual machines to managed Kubernetes offerings. In large-scale deployments, you typically find a combination of all these deployment options.

In this chapter, we’ll explore the various microservices and applications that make up the United Manufacturing Hub, and how they work together to help you extract, contextualize, store, and visualize data from your shop floor.

flowchart
  subgraph UMH["United Manufacturing Hub"]
    style UMH fill:#47a0b5
    subgraph UNS["Unified Namespace"]
      style UNS fill:#f4f4f4
      kafka["Apache Kafka"]
      mqtt["HiveMQ"]
      console["Console"]
      kafka-bridge
      mqtt-kafka-bridge["mqtt-kafka-bridge"]
      click kafka "./microservices/core/kafka"
      click mqtt "./microservices/core/mqtt-broker"
      click console "./microservices/core/console"
      click kafka-bridge "./microservices/core/kafka-bridge"
      click mqtt-kafka-bridge "./microservices/core/mqtt-kafka-bridge"
      mqtt <-- MQTT --> mqtt-kafka-bridge <-- Kafka --> kafka
      kafka -- Kafka --> console
    end
    subgraph custom["Custom Microservices"]
      custom-microservice["A user provided custom microservice in the Helm Chart"]
      custom-application["A user provided custom application deployed as Kubernetes resources or as a Helm Chart"]
      click custom-microservice "./microservices/core/custom"
    end
    subgraph Historian
      style Historian fill:#f4f4f4
      kafka-to-postgresql
      timescaledb[("TimescaleDB")]
      factoryinsight
      umh-datasource
      grafana["Grafana"]
      redis
      click kafka-to-postgresql "./microservices/core/kafka-to-postgresql"
      click timescaledb "./microservices/core/database"
      click factoryinsight "./microservices/core/factoryinsight"
      click grafana "./microservices/core/grafana"
      click redis "./microservices/core/redis"
      kafka -- Kafka ---> kafka-to-postgresql
      kafka-to-postgresql -- SQL --> timescaledb
      timescaledb -- SQL --> factoryinsight
      factoryinsight -- HTTP --> umh-datasource
      umh-datasource --Plugin--> grafana
      factoryinsight <--RESP--> redis
      kafka-to-postgresql <--RESP--> redis
    end
    subgraph Connectivity
      style Connectivity fill:#f4f4f4
      nodered["Node-RED"]
      barcodereader
      sensorconnect
      click nodered "./microservices/core/node-red"
      click barcodereader "./microservices/community/barcodereader"
      click sensorconnect "./microservices/core/sensorconnect"
      nodered <-- Kafka --> kafka
      barcodereader -- Kafka --> kafka
      sensorconnect -- Kafka --> kafka
    end
    subgraph Simulators
      style Simulators fill:#f4f4f4
      mqtt-simulator["IoT sensors simulator"]
      packml-simulator["PackML simulator"]
      opcua-simulator["OPC-UA simulator"]
      click mqtt-simulator "./microservices/community/mqtt-simulator"
      click packml-simulator "./microservices/community/packml-simulator"
      click opcua-simulator "./microservices/community/opcua-simulator"
      mqtt-simulator -- MQTT --> mqtt
      packml-simulator -- MQTT --> mqtt
      opcua-simulator -- OPC-UA --> nodered
    end
  end
  subgraph Datasources
    plc["PLCs"]
    other["Other systems on the shopfloor (MES, ERP, etc.)"]
    barcode["USB barcode reader"]
    ifm["IO-link sensor"]
    iot["IoT devices"]
    plc -- "Siemens S7, OPC-UA, Modbus, etc." --> nodered
    other -- " " ----> nodered
    ifm -- HTTP --> sensorconnect
    barcode -- USB --> barcodereader
    iot <-- MQTT --> mqtt
    %% at the end for styling purposes
    nodered <-- MQTT --> mqtt
  end
  subgraph Data sinks
    umh-other["Other UMH instances"]
    other-systems["Other systems (cloud analytics, cold storage, BI tools, etc.)"]
    kafka <-- Kafka --> kafka-bridge
    kafka-bridge <-- Kafka ----> umh-other
    factoryinsight -- HTTP ----> other-systems
  end

Simulators

The United Manufacturing Hub includes several simulators to generate data during development and testing.

Microservices

  • iotsensorsmqtt simulates data in three different MQTT topics, providing a simple way to test and visualize MQTT data streams.
  • packml-simulator simulates a PackML machine which sends and receives MQTT messages.
  • opcua-simulator simulates an OPC-UA server, which can be used to test the connectivity of OPC-UA clients and to generate sample data for them.

Data connectivity microservices

The United Manufacturing Hub includes microservices that extract data from the shop floor and push it into the Unified Namespace. Additionally, you can deploy your own microservices or third-party solutions directly into the Kubernetes cluster using the custom microservice feature. To learn more about third-party solutions, check out our extensive tutorials on our learning hub.

Microservices

  • sensorconnect automatically reads out IO-Link Master and their connected sensors, and pushes the data to the message broker.
  • barcodereader connects to USB barcode reader devices and pushes the data to the message broker.
  • Node-RED is a versatile tool with many community plugins and allows access to machine PLCs or connections with other systems on the shopfloor. It plays an important role and is explained in the next section.

Node-RED: connectivity & contextualization

Node-RED is not just a tool for connectivity, but also for stream processing and data contextualization. It is often used to extract data from the message broker, reformat the event, and push it back into a different topic, such as the UMH datamodel.

In addition to the built-in microservices, third-party contextualization solutions can be deployed similarly to data connectivity microservices. For more information on these solutions, check out our extensive tutorials on our learning hub.

Microservices

  • Node-RED is a programming tool that can wire together hardware devices, APIs, and online services.

Unified Namespace

At the core of the United Manufacturing Hub lies the Unified Namespace, which serves as the central source of truth for all events and messages occurring on your shop floor. The Unified Namespace is implemented using two message brokers: HiveMQ for MQTT and Apache Kafka. MQTT is used to receive data from IoT devices on the shop floor because it excels at handling a large number of unreliable connections. On the other hand, Kafka is used to enable communication between the microservices, leveraging its large-scale data processing capabilities.

The data between both brokers is bridged automatically by the mqtt-kafka-bridge microservice, allowing you to send data to MQTT and process it reliably in Kafka.

If you’re curious about the benefits of this dual approach to MQTT/Kafka, check out our blog article about Tools & Techniques for Scalable Dataprocessing in Industrial IoT.

For more information on the Unified Namespace feature and how to use it, check out the detailed description of the Unified Namespace feature.

Microservices

  • HiveMQ is an MQTT broker used for receiving data from IoT devices on the shop floor. It excels at handling large numbers of unreliable connections.
  • Apache Kafka is a distributed streaming platform used for communication between microservices. It offers large-scale data processing capabilities.
  • mqtt-kafka-bridge is a microservice that bridges messages between MQTT and Kafka, allowing you to send data to MQTT and process them reliably in Kafka.
  • kafka-bridge a microservice that bridges messages between multiple Kafka instances.
  • console is a web-based user interface for Kafka, which provides a graphical view of topics and messages.

Historian / data storage and visualization

The United Manufacturing Hub stores events according to our datamodel. These events are automatically stored in TimescaleDB, an open-source time-series SQL database. From there, you can access the stored data using Grafana, a visualization and analytics software. With Grafana, you can perform on-the-fly data analysis by executing simple min, max, and avg on tags, or extended KPI calculations such as OEE. These calculations can be selected in the umh-datasource microservice.

For more information on the Historian or Analytics feature and how to use it, check out the detailed description of the Historian feature or the Analytics features.

Microservices

  • kafka-to-postgresql stores data in selected topics from the Kafka broker in a PostgreSQL compatible database such as TimescaleDB.
  • TimescaleDB, which is an open-source time-series SQL database
  • factoryinsight provides REST endpoints to fetch data and calculate KPIs
  • Grafana is a visualization and analytics software
  • umh-datasource is a plugin providing access to factoryinsight.
  • redis is an in-memory data structure store, used for cache.

Custom Microservices

The Helm Chart allows you to add your own microservices or Docker containers to the United Manufacturing Hub. These can be used, for example, to connect with third-party systems or to analyze the data. Additionally, you can deploy any other third-party application as long as it is available as a Helm Chart, Kubernetes resource, or Docker Compose (which can be converted to Kubernetes resources).

3.1 - Helm Chart

This page describes the Helm Chart of the United Manufacturing Hub and the possible configuration options.

Helm is a package manager for Kubernetes that simplifies the installation, configuration, and deployment of applications and services. A Helm chart contains all the necessary Kubernetes manifests, configuration files, and dependencies required to run a particular application or service. One of the main advantages of Helm is that it allows you to define the configuration of the installed resources in a single YAML file, called values.yaml. Helm provides great documentation on how to achieve this at https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing
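
For orientation, installing the chart with a customized values.yaml might look like this (a sketch; verify the repository URL against the installation guides):

    helm repo add united-manufacturing-hub https://repo.umh.app
    helm upgrade --install united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub \
      --namespace united-manufacturing-hub --create-namespace -f values.yaml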

The Helm Chart of the United Manufacturing Hub is composed of both custom microservices and third-party applications. If you want a more in-depth view of the architecture of the United Manufacturing Hub, you can read the Architecture overview page.

Helm Chart structure

Custom microservices

The Helm Chart of the United Manufacturing Hub is composed of the following custom microservices:

  • barcodereader: reads the input from a barcode reader and sends it to the MQTT broker for further processing.
  • customMicroservice: a template for deploying any number of custom microservices.
  • factoryinput: provides REST endpoints for MQTT messages.
  • factoryinsight: provides REST endpoints to fetch data and calculate KPIs.
  • grafanaproxy: provides a proxy to the backend services.
  • MQTT Simulator: simulates sensors and sends the data to the MQTT broker for further processing.
  • kafka-bridge: connects Kafka brokers on different Kubernetes clusters.
  • kafkatopostgresql: stores the data from the Kafka broker in a PostgreSQL database.
  • mqtt-kafka-bridge: connects the MQTT broker and the Kafka broker.
  • mqttbridge: connects MQTT brokers on different Kubernetes clusters.
  • opcuasimulator: simulates OPC UA servers and sends the data to the MQTT broker for further processing.
  • packmlmqttsimulator: simulates a PackML state machine and sends the data to the MQTT broker for further processing.
  • sensorconnect: connects to a sensor and sends the data to the MQTT and Kafka brokers for further processing.
  • tulip-connector: exposes internal APIs to the internet, especially tailored for the Tulip platform.

Third-party applications

The Helm Chart of the United Manufacturing Hub is composed of the following third-party applications:

  • Grafana: a visualization and analytics software.
  • HiveMQ: an MQTT broker.
  • Node-RED: a programming tool for wiring together hardware devices, APIs and online services.
  • Redis: an in-memory data structure store, used for cache.
  • RedPanda: a Kafka-compatible distributed event streaming platform.
  • RedPanda Console: a web-based user interface for RedPanda.
  • TimescaleDB: an open-source time-series SQL database.

Configuration options

The Helm Chart of the United Manufacturing Hub can be configured by setting values in the values.yaml file. This file has three main sections that can be used to configure the applications:

  • customers: contains the definition of the customers that will be created during the installation of the Helm Chart. This section is optional, and it’s used only by factoryinsight and factoryinput.
  • _000_commonConfig: contains the basic configuration options to customize the United Manufacturing Hub, and it’s divided into sections that group applications with similar scope, like the ones that compose the infrastructure or the ones responsible for data processing. This is the section that should be mostly used to configure the microservices.
  • _001_customMicroservices: used to define the configuration of custom microservices that are not included in the Helm Chart.

After those three sections come the specific sections for each microservice, which contain their advanced configuration. This is the so-called Danger Zone, because the values in those sections should not be changed unless you absolutely know what you are doing.

When a parameter contains . (dot) characters, it means that it is a nested parameter. For example, in the tls.factoryinput.cert parameter the cert parameter is nested inside the tls.factoryinput section, and the factoryinput section is nested inside the tls section.
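
For example, the following values.yaml sketch shows how the nested tls.factoryinput.cert parameter is laid out (the certificate content is a placeholder, not a working value):

_000_commonConfig:
  infrastructure:
    mqtt:
      tls:
        factoryinput:
          # Placeholder certificate for illustration only
          cert: |
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----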

Customers

The customers section contains the definition of the customers that will be created during the installation of the Helm Chart. It’s a simple dictionary where the key is the name of the customer, and the value is the password.

For example, the following snippet creates two customers:

customers:
  customer1: password1
  customer2: password2

Common configuration options

The _000_commonConfig contains the basic configuration options to customize the United Manufacturing Hub, and it’s divided into sections that group applications with similar scope.

The following table lists the configuration options that can be set in the _000_commonConfig section:

_000_commonConfig section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| datainput | The configuration of the microservices used to input data. | object | See below | See below |
| dataprocessing | The configuration of the microservices used to process data. | object | See below | See below |
| datasources | The configuration of the microservices used to acquire data. | object | See below | See below |
| datastorage | The configuration of the microservices used to store data. | object | See below | See below |
| debug | The configuration for the debug mode. | object | See below | See below |
| infrastructure | The configuration of the microservices used to provide infrastructure services. | object | See below | See below |
| kafkaBridge | The configuration for the Kafka bridge. | object | See below | See below |
| kafkaStateDetector | The configuration for the Kafka state detector. | object | See below | See below |
| metrics.enabled | Whether to enable the anonymous metrics service or not. | bool | true, false | true |
| mqttBridge | The configuration for the MQTT bridge. | object | See below | See below |
| serialNumber | The hostname of the device. Used by some microservices to identify the device. | string | Any | default |
| tulipconnector | The configuration for the Tulip connector. | object | See below | See below |

Data sources

The _000_commonConfig.datasources section contains the configuration of the microservices used to acquire data, like the ones that connect to a sensor or simulate data.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources section:

datasources section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| barcodereader | The configuration of the barcodereader microservice. | object | See below | See below |
| iotsensorsmqtt | The configuration of the IoTSensorsMQTT microservice. | object | See below | See below |
| opcuasimulator | The configuration of the opcuasimulator microservice. | object | See below | See below |
| packmlmqttsimulator | The configuration of the packmlsimulator microservice. | object | See below | See below |
| sensorconnect | The configuration of the sensorconnect microservice. | object | See below | See below |
Barcode reader

The _000_commonConfig.datasources.barcodereader section contains the configuration of the barcodereader microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.barcodereader section:

barcodereader section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the barcodereader microservice is enabled. | bool | true, false | false |
| USBDeviceName | The name of the USB device to use. | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner |
| USBDevicePath | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty. | string | Valid Unix device path | "" |
| customerID | The customer ID to use in the topic structure. | string | Any | raw |
| location | The location to use in the topic structure. | string | Any | barcodereader |
| machineID | The asset ID to use in the topic structure. | string | Any | barcodereader |
IoT Sensors MQTT

The _000_commonConfig.datasources.iotsensorsmqtt section contains the configuration of the IoTSensorsMQTT microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.iotsensorsmqtt section:

iotsensorsmqtt section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the IoTSensorsMQTT microservice is enabled. | bool | true, false | true |
OPC UA Simulator

The _000_commonConfig.datasources.opcuasimulator section contains the configuration of the opcuasimulator microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.opcuasimulator section:

opcuasimulator section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the opcuasimulator microservice is enabled. | bool | true, false | true |
PackML MQTT Simulator

The _000_commonConfig.datasources.packmlmqttsimulator section contains the configuration of the packmlsimulator microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.packmlmqttsimulator section:

packmlmqttsimulator section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the packmlsimulator microservice is enabled. | bool | true, false | true |
Sensor connect

The _000_commonConfig.datasources.sensorconnect section contains the configuration of the sensorconnect microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.sensorconnect section:

sensorconnect section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the sensorconnect microservice is enabled. | bool | true, false | false |
| iprange | The IP range of the sensors in CIDR notation. | string | Valid IP range | 192.168.10.1/24 |
| enableKafka | Whether the sensorconnect microservice should use Kafka. | bool | true, false | true |
| enableMQTT | Whether the sensorconnect microservice should use MQTT. | bool | true, false | false |
| testMode | Whether to enable test mode. Only useful for development. | bool | true, false | false |
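
As an illustration, the following values.yaml sketch enables sensorconnect for a hypothetical sensor subnet (the IP range is an assumption; adjust it to your network):

_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      iprange: 192.168.10.1/24  # assumption: the subnet of your sensors
      enableKafka: true
      enableMQTT: false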

Data processing

The _000_commonConfig.dataprocessing section contains the configuration of the microservices used to process data, such as the nodered microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.dataprocessing section:

dataprocessing section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| nodered | The configuration of the nodered microservice. | object | See below | See below |
Node-RED

The _000_commonConfig.dataprocessing.nodered section contains the configuration of the nodered microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.dataprocessing.nodered section:

nodered section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the nodered microservice is enabled. | bool | true, false | true |
| defaultFlows | Whether the default flows should be used. | bool | true, false | false |
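
For example, a minimal sketch that enables Node-RED together with its default flows:

_000_commonConfig:
  dataprocessing:
    nodered:
      enabled: true
      defaultFlows: true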

Infrastructure

The _000_commonConfig.infrastructure section contains the configuration of the microservices responsible for connecting all the other microservices, such as the MQTT broker and the Kafka broker.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure section:

infrastructure section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| mqtt | The configuration of the MQTT broker. | object | See below | See below |
| kafka | The configuration of the Kafka broker. | object | See below | See below |
MQTT

The _000_commonConfig.infrastructure.mqtt section contains the configuration of the MQTT broker.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure.mqtt section:

mqtt section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the MQTT broker is enabled | bool | true, false | true |
| adminUser.enabled | Whether the admin user is enabled | bool | true, false | false |
| adminUser.name | The name of the admin user | string | Any UTF-8 string | admin-user |
| adminUser.encryptedPassword | The encrypted password of the admin user | string | Any | "" |
| tls.useTLS | Whether TLS should be used | bool | true, false | true |
| tls.insecureSkipVerify | Whether the SSL certificate validation should be skipped | bool | true, false | true |
| tls.keystoreBase64 | The base64 encoded keystore | string | Any | "" |
| tls.keystorePassword | The password of the keystore | string | Any | "" |
| tls.truststoreBase64 | The base64 encoded truststore | string | Any | "" |
| tls.truststorePassword | The password of the truststore | string | Any | "" |
| tls.caCert | The CA certificate | string | Any | "" |
| tls.factoryinput.cert | The certificate used for the factoryinput microservice | string | Any | "" |
| tls.factoryinput.key | The key used for the factoryinput microservice | string | Any | "" |
| tls.mqtt_kafka_bridge.cert | The certificate used for the mqttkafkabridge | string | Any | "" |
| tls.mqtt_kafka_bridge.key | The key used for the mqttkafkabridge | string | Any | "" |
| tls.mqtt_bridge.local_cert | The certificate used for the local mqttbridge broker | string | Any | "" |
| tls.mqtt_bridge.local_key | The key used for the local mqttbridge broker | string | Any | "" |
| tls.mqtt_bridge.remote_cert | The certificate used for the remote mqttbridge broker | string | Any | "" |
| tls.mqtt_bridge.remote_key | The key used for the remote mqttbridge broker | string | Any | "" |
| tls.sensorconnect.cert | The certificate used for the sensorconnect microservice | string | Any | "" |
| tls.sensorconnect.key | The key used for the sensorconnect microservice | string | Any | "" |
| tls.iotsensorsmqtt.cert | The certificate used for the iotsensorsmqtt microservice | string | Any | "" |
| tls.iotsensorsmqtt.key | The key used for the iotsensorsmqtt microservice | string | Any | "" |
| tls.packmlsimulator.cert | The certificate used for the packmlsimulator microservice | string | Any | "" |
| tls.packmlsimulator.key | The key used for the packmlsimulator microservice | string | Any | "" |
| tls.nodered.cert | The certificate used for the nodered microservice | string | Any | "" |
| tls.nodered.key | The key used for the nodered microservice | string | Any | "" |
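
A minimal sketch that enables the admin user on the broker (the encrypted password is a placeholder you would generate yourself):

_000_commonConfig:
  infrastructure:
    mqtt:
      enabled: true
      adminUser:
        enabled: true
        name: admin-user
        encryptedPassword: "<your-encrypted-password>"  # placeholder
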
Kafka

The _000_commonConfig.infrastructure.kafka section contains the configuration of the Kafka broker and related services, like mqttkafkabridge, kafkatopostgresql and the Kafka console.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure.kafka section:

kafka section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the Kafka broker and related services are enabled | bool | true, false | true |
| useSSL | Whether SSL should be used | bool | true, false | true |
| defaultTopics | The default topics that should be created | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic |
| tls.CACert | The CA certificate | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafka.cert | The certificate used for the kafka broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafka.privkey | The private key of the certificate for the Kafka broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.barcodereader.sslKeyPassword | The encrypted password of the SSL key for the barcodereader microservice. If empty, no password is used | string | Any | "" |
| tls.barcodereader.sslKeyPem | The private key for the SSL certificate of the barcodereader microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.barcodereader.sslCertificatePem | The private SSL certificate for the barcodereader microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordLocal | The encrypted password of the SSL key for the local side of kafka-bridge. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemLocal | The private key for the SSL certificate of the local side of kafka-bridge | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkabridge.sslCertificatePemLocal | The private SSL certificate for the local side of kafka-bridge | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslCACertRemote | The CA certificate for the remote side of kafka-bridge | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslCertificatePemRemote | The private SSL certificate for the remote side of kafka-bridge | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordRemote | The encrypted password of the SSL key for the remote side of kafka-bridge. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemRemote | The private key for the SSL certificate of the remote side of kafka-bridge | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkadebug.sslKeyPassword | The encrypted password of the SSL key for the kafkadebug microservice. If empty, no password is used | string | Any | "" |
| tls.kafkadebug.sslKeyPem | The private key for the SSL certificate of the kafkadebug microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkadebug.sslCertificatePem | The private SSL certificate for the kafkadebug microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkainit.sslKeyPassword | The encrypted password of the SSL key for the kafkainit microservice. If empty, no password is used | string | Any | "" |
| tls.kafkainit.sslKeyPem | The private key for the SSL certificate of the kafkainit microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkainit.sslCertificatePem | The private SSL certificate for the kafkainit microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkastatedetector.sslKeyPassword | The encrypted password of the SSL key for the kafkastatedetector microservice. If empty, no password is used | string | Any | "" |
| tls.kafkastatedetector.sslKeyPem | The private key for the SSL certificate of the kafkastatedetector microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkastatedetector.sslCertificatePem | The private SSL certificate for the kafkastatedetector microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkatopostgresql.sslKeyPassword | The encrypted password of the SSL key for the kafkatopostgresql microservice. If empty, no password is used | string | Any | "" |
| tls.kafkatopostgresql.sslKeyPem | The private key for the SSL certificate of the kafkatopostgresql microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkatopostgresql.sslCertificatePem | The private SSL certificate for the kafkatopostgresql microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kowl.sslKeyPassword | The encrypted password of the SSL key for the kowl microservice. If empty, no password is used | string | Any | "" |
| tls.kowl.sslKeyPem | The private key for the SSL certificate of the kowl microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kowl.sslCertificatePem | The private SSL certificate for the kowl microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.mqttkafkabridge.sslKeyPassword | The encrypted password of the SSL key for the mqttkafkabridge microservice. If empty, no password is used | string | Any | "" |
| tls.mqttkafkabridge.sslKeyPem | The private key for the SSL certificate of the mqttkafkabridge microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.mqttkafkabridge.sslCertificatePem | The private SSL certificate for the mqttkafkabridge microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.nodered.sslKeyPassword | The encrypted password of the SSL key for the nodered microservice. If empty, no password is used | string | Any | "" |
| tls.nodered.sslKeyPem | The private key for the SSL certificate of the nodered microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.nodered.sslCertificatePem | The private SSL certificate for the nodered microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.sensorconnect.sslKeyPassword | The encrypted password of the SSL key for the sensorconnect microservice. If empty, no password is used | string | Any | "" |
| tls.sensorconnect.sslKeyPem | The private key for the SSL certificate of the sensorconnect microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.sensorconnect.sslCertificatePem | The private SSL certificate for the sensorconnect microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
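
For example, a sketch that adds a hypothetical topic to the defaults (the last topic name is illustrative only):

_000_commonConfig:
  infrastructure:
    kafka:
      enabled: true
      useSSL: true
      # The last entry is a hypothetical topic added for illustration
      defaultTopics: ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic;ia.plant1.line1.machine1.count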

Data storage

The _000_commonConfig.datastorage section contains the configuration of the microservices used to store data.

If you want to configure one of these microservices in more detail, you can do so in its respective section in the Danger Zone.

The following table lists the configurable parameters of the _000_commonConfig.datastorage section.

datastorage section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the data storage microservices | bool | true, false | true |
| db_password | The password for the database. Used by all the microservices that need to connect to the database | string | Any | changeme |

Data input

The _000_commonConfig.datainput section contains the configuration of the microservices used to input data.

If you want to configure one of these microservices in more detail, you can do so in its respective section in the Danger Zone.

The following table lists the configurable parameters of the _000_commonConfig.datainput section.

datainput section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the data input microservices | bool | true, false | false |

MQTT Bridge

The _000_commonConfig.mqttBridge section contains the configuration of the mqtt-bridge microservice, responsible for bridging MQTT brokers in different Kubernetes clusters.

The following table lists the configurable parameters of the _000_commonConfig.mqttBridge section.

mqttBridge section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the mqtt-bridge microservice | bool | true, false | false |
| localSubTopic | The topic that the local MQTT broker subscribes to | string | Any valid MQTT topic | ia/factoryinsight |
| localPubTopic | The topic that the local MQTT broker publishes to | string | Any valid MQTT topic | ia/factoryinsight |
| oneWay | Whether to enable one-way communication, from local to remote | bool | true, false | true |
| remoteBrokerUrl | The URL of the remote MQTT broker | string | Any valid MQTT broker URL | ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883 |
| remoteBrokerSSLEnables | Whether to enable SSL for the remote MQTT broker | bool | true, false | true |
| remoteSubTopic | The topic that the remote MQTT broker subscribes to | string | Any valid MQTT topic | ia |
| remotePubTopic | The topic that the remote MQTT broker publishes to | string | Any valid MQTT topic | ia/factoryinsight |
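
Based on the defaults above, a sketch of a one-way bridge that forwards local messages to a remote broker (the remote URL is the chart default and would normally point at the other cluster):

_000_commonConfig:
  mqttBridge:
    enabled: true
    oneWay: true
    localSubTopic: ia/factoryinsight
    remotePubTopic: ia/factoryinsight
    remoteBrokerUrl: ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883  # point this at the remote cluster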

Kafka Bridge

The _000_commonConfig.kafkaBridge section contains the configuration of the kafka-bridge microservice, responsible for bridging Kafka brokers in different Kubernetes clusters.

The following table lists the configurable parameters of the _000_commonConfig.kafkaBridge section.

kafkaBridge section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the kafka-bridge microservice | bool | true, false | false |
| remotebootstrapServer | The URL of the remote Kafka broker | string | Any | "" |
| topicCreationLocalList | The list of topics to create locally | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic |
| topicCreationRemoteList | The list of topics to create remotely | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic |
| topicmap | The list of topic maps of topics to forward | object | See below | empty |
Topic Map

The topicmap parameter is a list of topic maps, each of which contains the following parameters:

topicmap section parameters

| Parameter | Description | Type | Allowed values |
| --- | --- | --- | --- |
| bidirectional | Whether to enable bidirectional communication for that topic | bool | true, false |
| name | The name of the map | string | HighIntegrity, HighThroughput |
| send_direction | The direction of the communication for that topic | string | to_remote, to_local |
| topic | The topic to forward. A regex can be used to match multiple topics. | string | Any valid Kafka topic |

For more information about the topic maps, see the kafka-bridge documentation.
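
A hedged sketch of a kafkaBridge configuration with a single topic map entry (the regex and the server address are illustrative only):

_000_commonConfig:
  kafkaBridge:
    enabled: true
    remotebootstrapServer: kafka.remote.example:9092  # illustrative address
    topicmap:
      - name: HighIntegrity
        bidirectional: false
        send_direction: to_remote
        topic: ^ia\..+\.count$  # illustrative regex matching all count topics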

Kafka State Detector

The _000_commonConfig.kafkaStateDetector section contains the configuration of the kafka-state-detector microservice, responsible for detecting the state of the Kafka broker.

The following table lists the configurable parameters of the _000_commonConfig.kafkaStateDetector section.

kafkastatedetector section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the kafka-state-detector microservice | bool | true, false | false |

Debug

The _000_commonConfig.debug section contains the debug configuration for all the microservices. These values should not be enabled in production.

The following table lists the configurable parameters of the _000_commonConfig.debug section.

debug section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enableFGTrace | Whether to enable the foreground trace | bool | true, false | false |

Tulip Connector

The _000_commonConfig.tulipconnector section contains the configuration of the tulip-connector microservice, responsible for connecting a Tulip instance with the United Manufacturing Hub.

The following table lists the configurable parameters of the _000_commonConfig.tulipconnector section.

tulipconnector section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the tulip-connector microservice | bool | true, false | false |
| domain | The domain name pointing to your cluster | string | Any valid domain name | tulip-connector.changme.com |
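
For example (the domain is a placeholder to replace with one that points at your cluster):

_000_commonConfig:
  tulipconnector:
    enabled: true
    domain: tulip-connector.example.com  # placeholder domain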

Custom microservices configuration

The _001_customMicroservices section contains a list of custom microservice definitions. It can be used to deploy any application of your choice, which can be configured using the following parameters:

Custom microservices configuration parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| name | The name of the microservice | string | Any | example |
| image | The image and tag of the microservice | string | Any | hello-world:latest |
| enabled | Whether to enable the microservice | bool | true, false | false |
| imagePullPolicy | The image pull policy of the microservice | string | Always, IfNotPresent, Never | Always |
| env | The list of environment variables to set for the microservice | object | Any | [{name: LOGGING_LEVEL, value: PRODUCTION}] |
| port | The internal port of the microservice to target | int | Any | 80 |
| externalPort | The host port on which to expose the internal port | int | Any | 8080 |
| probePort | The port to use for the liveness and startup probes | int | Any | 9091 |
| startupProbe | The interval in seconds for the startup probe | int | Any | 200 |
| livenessProbe | The interval in seconds for the liveness probe | int | Any | 500 |
| statefulEnabled | Create a PersistentVolumeClaim for the microservice and mount it in /data | bool | true, false | false |
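
Putting these parameters together, a hypothetical custom microservice could be declared as follows (image, ports, and environment are illustrative only):

_001_customMicroservices:
  - name: example
    enabled: true
    image: hello-world:latest  # illustrative image
    imagePullPolicy: Always
    env:
      - name: LOGGING_LEVEL
        value: PRODUCTION
    port: 80
    externalPort: 8080
    probePort: 9091
    statefulEnabled: false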

Danger zone

The next sections contain a more advanced configuration of the microservices. Usually, changing the values of the previous sections is enough to run the United Manufacturing Hub. However, you may need to adjust some of the values below if you want to change the default behavior of the microservices.

Everything below this point should not be changed, unless you know what you are doing.
Danger zone advanced configuration

| Section | Description |
| --- | --- |
| barcodereader | Configuration for barcodereader |
| factoryinput | Configuration for factoryinput |
| factoryinsight | Configuration for factoryinsight |
| grafana | Configuration for Grafana |
| grafanaproxy | Configuration for the Grafana proxy |
| iotsensorsmqtt | Configuration for the IoTSensorsMQTT simulator |
| kafkabridge | Configuration for kafka-bridge |
| kafkastatedetector | Configuration for kafka-state-detector |
| kafkatopostgresql | Configuration for kafka-to-postgresql |
| metrics | Configuration for the metrics |
| mqtt_broker | Configuration for the MQTT broker |
| mqttbridge | Configuration for mqtt-bridge |
| mqttkafkabridge | Configuration for mqtt-kafka-bridge |
| nodered | Configuration for Node-RED |
| opcuasimulator | Configuration for the OPC UA simulator |
| packmlmqttsimulator | Configuration for the PackML MQTT simulator |
| redis | Configuration for Redis |
| redpanda | Configuration for the Kafka broker |
| sensorconnect | Configuration for sensorconnect |
| serviceAccount | Configuration for the service account used by the microservices |
| timescaledb-single | Configuration for TimescaleDB |
| tulipconnector | Configuration for tulip-connector |

Sections

barcodereader

The barcodereader section contains the advanced configuration of the barcodereader microservice.

barcodereader advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| annotations | Annotations to add to the Kubernetes resources | object | Any | {} |
| enabled | Whether to enable the barcodereader microservice | bool | true, false | false |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the barcodereader microservice | string | Any | ghcr.io/united-manufacturing-hub/barcodereader |
| image.tag | The tag of the barcodereader microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| resources.limits.cpu | The CPU limit | string | Any | 10m |
| resources.limits.memory | The memory limit | string | Any | 60Mi |
| resources.requests.cpu | The CPU request | string | Any | 2m |
| resources.requests.memory | The memory request | string | Any | 30Mi |
| scanOnly | Whether to only scan without sending the data to the Kafka broker | bool | true, false | false |

factoryinput

The factoryinput section contains the advanced configuration of the factoryinput microservice.

factoryinput advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the factoryinput microservice | bool | true, false | false |
| env | The environment variables | object | Any | See env section |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the factoryinput microservice | string | Any | ghcr.io/united-manufacturing-hub/factoryinput |
| image.tag | The tag of the factoryinput microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| pdb.enabled | Whether to enable a PodDisruptionBudget | bool | true, false | true |
| pdb.minAvailable | The minimum number of available pods | int | Any | 1 |
| replicas | The number of Pod replicas | int | Any | 1 |
| service.annotations | Annotations to add to the factoryinput Service | object | Any | {} |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| user | The user of factoryinput | string | Any | factoryinsight |
env

The env section contains the configuration of the environment variables to add to the Pod.

factoryinput env parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| loggingLevel | The logging level of the factoryinput microservice | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| mqttQueueHandler | Number of queue workers to spawn | int | 0-65535 | 10 |
| version | The version of the API used. Each version also enables all the previous ones | int | Any | 2 |

factoryinsight

The factoryinsight section contains the advanced configuration of the factoryinsight microservice.

factoryinsight advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| db_database | The database name | string | Any | factoryinsight |
| db_host | The host of the database | string | Any | united-manufacturing-hub |
| db_user | The database user | string | Any | factoryinsight |
| enabled | Whether to enable the factoryinsight microservice | bool | true, false | false |
| hpa.enabled | Whether to enable a HorizontalPodAutoscaler | bool | true, false | false |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the factoryinsight microservice | string | Any | ghcr.io/united-manufacturing-hub/factoryinsight |
| image.tag | The tag of the factoryinsight microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| ingress.enabled | Whether to enable an Ingress | bool | true, false | false |
| ingress.publicHostSecretName | The secret name of the public host of the Ingress | string | Any | "" |
| ingress.publicHost | The public host of the Ingress | string | Any | "" |
| insecure_no_auth | Whether to enable the insecure_no_auth mode | bool | true, false | false |
| pdb.enabled | Whether to enable a PodDisruptionBudget | bool | true, false | false |
| redis.URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379 |
| replicas | The number of Pod replicas | int | Any | 2 |
| resources.limits.cpu | The CPU limit | string | Any | 200m |
| resources.limits.memory | The memory limit | string | Any | 200Mi |
| resources.requests.cpu | The CPU request | string | Any | 50m |
| resources.requests.memory | The memory request | string | Any | 50Mi |
| service.annotations | Annotations to add to the factoryinsight Service | object | Any | {} |
| user | The user of factoryinsight | string | Any | factoryinsight |
| version | The version of the API used. Each version also enables all the previous ones | int | Any | 2 |

grafana

The grafana section contains the advanced configuration of the grafana microservice. This is based on the official Grafana Helm chart. For more information about the parameters, please refer to the official documentation.

Only the values that differ from the defaults are listed here.

grafana advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| admin.existingSecret | The name of the secret containing the admin password | string | Any | grafana-secret |
| admin.passwordKey | The key of the admin password in the secret | string | Any | adminpassword |
| admin.userKey | The key of the admin user in the secret | string | Any | adminuser |
| datasources | The datasources configuration | object | Any | See datasources section |
| envValueFrom | Environment variables to add to the Pod, from a secret or a configmap | object | Any | See envValueFrom section |
| env | Environment variables to add to the Pod | object | Any | See env section |
| extraInitContainers | Extra init containers to add to the Pod | object | Any | See extraInitContainers section |
| grafana.ini | The grafana.ini configuration | object | Any | See grafana.ini section |
| initChownData.enabled | Whether to enable the initChownData job, to reset data ownership at startup | bool | true, false | true |
| persistence.enabled | Whether to enable persistence | bool | true, false | true |
| persistence.size | The size of the persistent volume | string | Any | 5Gi |
| podDisruptionBudget.minAvailable | The minimum number of available pods | int | Any | 1 |
| service.port | The port of the Service | int | Any | 8080 |
| service.type | The type of Service to expose | string | ClusterIP, LoadBalancer | LoadBalancer |
| serviceAccount.create | Whether to create a ServiceAccount | bool | true, false | false |
| testFramework.enabled | Whether to enable the test framework | bool | true, false | false |
datasources

The datasources section contains the configuration of the datasources provisioning. See the Grafana documentation for more information.

datasources.yaml:
  apiVersion: 1
  datasources:
    - name: umh-v2-datasource
      # <string, required> datasource type. Required
      type: umh-v2-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerID: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        baseURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: false
      editable: false
    # <string, required> name of the datasource. Required
    - name: umh-datasource
      # <string, required> datasource type. Required
      type: umh-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerId: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        serverURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: true
      editable: false
envValueFrom

The envValueFrom section contains the configuration of the environment variables to add to the Pod, from a secret or a configmap.

grafana envValueFrom section parameters

| Parameter | Description | Value from | Name | Key |
| --- | --- | --- | --- | --- |
| FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | apiKey |
| FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | secretKeyRef | factoryinsight-secret | baseURL |
| FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | customerID |
| FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | password |
env

The env section contains the configuration of the environment variables to add to the Pod.

grafana env section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource,umh-factoryinput-panel,umh-v2-datasource |
extraInitContainers

The extraInitContainers section contains the configuration of the extra init containers to add to the Pod.

The init-plugins container is used to install the default plugins shipped with the UMH version of Grafana without the need to have an internet connection. See the documentation for a list of the plugins.

- image: unitedmanufacturinghub/grafana-umh:1.2.0
  name: init-plugins
  imagePullPolicy: IfNotPresent
  command: ['sh', '-c', 'cp -r /plugins /var/lib/grafana/']
  volumeMounts:
    - name: storage
      mountPath: /var/lib/grafana
grafana.ini

The grafana.ini section contains the configuration of the grafana.ini file. See the Grafana documentation for more information.

paths:
  data: /var/lib/grafana/data
  logs: /var/log/grafana
  plugins: /var/lib/grafana/plugins
  provisioning: /etc/grafana/provisioning
database:
  host: united-manufacturing-hub
  user: "grafana"
  name: "grafana"
  password: "changeme"
  ssl_mode: require
  type: postgres

grafanaproxy

The grafanaproxy section contains the configuration of the Grafana proxy microservice.

grafanaproxy section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the Grafana proxy microservice | bool | true, false | true |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the grafana-proxy microservice | string | Any | ghcr.io/united-manufacturing-hub/grafana-proxy |
| image.tag | The tag of the grafana-proxy microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| replicas | The number of Pod replicas | int | Any | 1 |
| service.annotations | Annotations to add to the service | object | Any | {} |
| service.port | The port of the service | int | Any | 2096 |
| service.type | The type of the service | string | ClusterIP, LoadBalancer | LoadBalancer |
| service.targetPort | The target port of the service | int | Any | 80 |
| service.protocol | The protocol of the service | string | TCP, UDP | TCP |
| service.name | The name of the port of the service | string | Any | service |
| resources.limits.cpu | The CPU limit | string | Any | 300m |
| resources.requests.cpu | The CPU request | string | Any | 100m |

iotsensorsmqtt

The iotsensorsmqtt section contains the configuration of the IoT Sensors MQTT microservice.

iotsensorsmqtt section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| image | The image of the iotsensorsmqtt microservice | string | Any | amineamaach/sensors-mqtt |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| resources.limits.cpu | The CPU limit | string | Any | 30m |
| resources.limits.memory | The memory limit | string | Any | 50Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| tag | The tag of the iotsensorsmqtt microservice. Defaults to latest if not set | string | Any | v1.0.0 |

kafkabridge

The kafkabridge section contains the configuration of the Kafka bridge.

kafkabridge section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the kafka-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-bridge |
| image.tag | The tag of the kafka-bridge microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| initContainer.pullPolicy | The image pull policy of the init container | string | Always, IfNotPresent, Never | IfNotPresent |
| initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init |
| initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any | 0.9.14 |

kafkastatedetector

The kafkastatedetector section contains the configuration of the Kafka state detector.

kafkastatedetector section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| activityEnabled | Controls whether to check the activity of the Kafka broker | bool | true, false | true |
| anomalyEnabled | Controls whether to check for anomalies in the Kafka broker | bool | true, false | true |
| enabled | Whether to enable the Kafka state detector | bool | true, false | true |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the kafkastatedetector microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-state-detector |
| image.tag | The tag of the kafkastatedetector microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |

kafkatopostgresql

The kafkatopostgresql section contains the configuration of the Kafka to PostgreSQL microservice.

kafkatopostgresql section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the Kafka to PostgreSQL microservice | bool | true, false | true |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the kafkatopostgresql microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-to-postgresql |
| image.tag | The tag of the kafkatopostgresql microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| initContainer.pullPolicy | The image pull policy of the init container | string | Always, IfNotPresent, Never | IfNotPresent |
| initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init |
| initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any | 0.9.14 |
| replicas | The number of Pod replicas | int | Any | 1 |
| resources.limits.cpu | The CPU limit | string | Any | 200m |
| resources.limits.memory | The memory limit | string | Any | 300Mi |
| resources.requests.cpu | The CPU request | string | Any | 50m |
| resources.requests.memory | The memory request | string | Any | 150Mi |

metrics

The metrics section contains the configuration of the metrics CronJob that sends anonymous usage data.

metrics section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the metrics microservice | string | Any | ghcr.io/united-manufacturing-hub/metrics |
| cronJob.schedule | The schedule of the CronJob | string | Any | 0 */4 * * * (every 4 hours) |

mqtt_broker

The mqtt_broker section contains the configuration of the MQTT broker.

mqtt_broker section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the mqtt_broker microservice | string | Any | hivemq/hivemq-ce |
| image.tag | The tag of the mqtt_broker microservice. Defaults to 2022.1 if not set | string | Any | 2022.1 |
| initContainer | The init container configuration | object | Any | See initContainer section |
| persistence.extension.size | The size of the persistence volume for the extensions | string | Any | 100Mi |
| persistence.storage.size | The size of the persistence volume for the storage | string | Any | 2Gi |
| rbacEnabled | Whether to enable RBAC | bool | true, false | false |
| resources.limits.cpu | The CPU limit | string | Any | 700m |
| resources.limits.memory | The memory limit | string | Any | 1700Mi |
| resources.requests.cpu | The CPU request | string | Any | 300m |
| resources.requests.memory | The memory request | string | Any | 1000Mi |
| service.mqtt.enabled | Whether to enable the MQTT service | bool | true, false | true |
| service.mqtt.port | The port of the MQTT service | int | Any | 1883 |
| service.mqtts.cipher_suites | The ciphersuites to enable | string array | Any | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA |
| service.mqtts.enabled | Whether to enable the MQTT over TLS service | bool | true, false | true |
| service.mqtts.port | The port of the MQTT over TLS service | int | Any | 8883 |
| service.mqtts.tls_versions | The TLS versions to enable | string array | Any | TLSv1.3, TLSv1.2 |
| service.ws.enabled | Whether to enable the WebSocket service | bool | true, false | false |
| service.ws.port | The port of the WebSocket service | int | Any | 8080 |
| service.wss.cipher_suites | The ciphersuites to enable | string array | Any | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA |
| service.wss.enabled | Whether to enable the WebSocket over TLS service | bool | true, false | false |
| service.wss.port | The port of the WebSocket over TLS service | int | Any | 8443 |
| service.wss.tls_versions | The TLS versions to enable | string array | Any | TLSv1.3, TLSv1.2 |
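
For instance, a sketch that additionally exposes the plain WebSocket listener (the port is the default from the table above):

mqtt_broker:
  service:
    ws:
      enabled: true
      port: 8080
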
initContainer

The initContainer section contains the configuration for the init containers. By default, the hivemqextensioninit container is used to initialize the HiveMQ extensions.

initContainer:
  hivemqextensioninit:
    image:
      repository: unitedmanufacturinghub/hivemq-init
      tag: 2.0.0
      pullPolicy: IfNotPresent

mqttbridge

The mqttbridge section contains the configuration of the MQTT bridge.

mqttbridge section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| image | The image of the mqtt-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/mqtt-bridge |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| resources.limits.cpu | The CPU limit | string | Any | 200m |
| resources.limits.memory | The memory limit | string | Any | 100Mi |
| resources.requests.cpu | The CPU request | string | Any | 100m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| tag | The tag of the mqtt-bridge microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |

mqttkafkabridge

The mqttkafkabridge section contains the configuration of the MQTT-Kafka bridge.

mqttkafkabridge section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the MQTT-Kafka bridge | bool | true, false | false |
| image.pullPolicy | The pull policy of the mqtt-kafka-bridge microservice | string | Any | IfNotPresent |
| image.repository | The image of the mqtt-kafka-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/mqtt-kafka-bridge |
| image.tag | The tag of the mqtt-kafka-bridge microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| initContainer.pullPolicy | The pull policy of the init container | string | Any | IfNotPresent |
| initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init |
| initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any | 0.9.14 |
| kafkaAcceptNoOrigin | Allow access to the Kafka broker without a valid x-trace | bool | true, false | false |
| kafkaSenderThreads | The number of threads for sending messages to the Kafka broker | int | Any | 1 |
| messageLRUSize | The size of the LRU cache for messages | int | Any | 100000 |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| mqttSenderThreads | The number of threads for sending messages to the MQTT broker | int | Any | 1 |
| pdb.enabled | Whether to enable the pod disruption budget | bool | true, false | true |
| pdb.minAvailable | The minimum number of pods that must be available | int | Any | 1 |
| rawMessageLRUSize | The size of the LRU cache for raw messages | int | Any | 100000 |
| resources.limits.cpu | The CPU limit | string | Any | 500m |
| resources.limits.memory | The memory limit | string | Any | 450Mi |
| resources.requests.cpu | The CPU request | string | Any | 400m |
| resources.requests.memory | The memory request | string | Any | 300Mi |

nodered

The nodered section contains the configuration of the Node-RED microservice.

nodered section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| env | Environment variables to add to the Pod | object | Any | See env section |
| flows | A JSON string containing the flows to import into Node-RED | string | Any | See the documentation |
| ingress.enabled | Whether to enable the ingress | bool | true, false | false |
| ingress.publicHostSecretName | The secret name of the public host of the Ingress | string | Any | "" |
| ingress.publicHost | The public host of the Ingress | string | Any | "" |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| port | The port of the Node-RED service | int | Any | 1880 |
| serviceType | The type of the service | string | ClusterIP, LoadBalancer | LoadBalancer |
| settings | A JSON string containing the settings of Node-RED | string | Any | See the documentation |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| tag | The Node-RED version | string | Any | 2.0.6 |
| timezone | The timezone | string | Any | Berlin/Europe |
env

The env section contains the environment variables to add to the Pod.

env section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| NODE_RED_ENABLE_SAVE_MODE | Whether to enable the save mode | bool | true, false | false |

opcuasimulator

The opcuasimulator section contains the configuration of the OPC UA Simulator microservice.

opcuasimulator section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| certadds.hosts | Hosts to add to the certificate | string | Any | united-manufacturing-hub-opcuasimulator-service |
| certadds.ips | IPs to add to the certificate | string | Any | "" |
| image | The image of the OPC UA Simulator microservice | string | Any | ghcr.io/united-manufacturing-hub/opcuasimulator |
| resources.limits.cpu | The CPU limit | string | Any | 30m |
| resources.limits.memory | The memory limit | string | Any | 50Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| service.annotations | The annotations of the service | object | Any | {} |
| tag | The tag of the OPC UA Simulator microservice. Defaults to latest if not set | string | Any | 0.1.0 |

packmlmqttsimulator

The packmlmqttsimulator section contains the configuration of the PackML MQTT Simulator microservice.

packmlmqttsimulator section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| image.repository | The image of the PackML MQTT Simulator microservice | string | Any | spruiktec/packml-simulator |
| image.hash | The hash of the image of the PackML MQTT Simulator microservice | string | Any | 01e2f0da3542f1b4e0de830a8d24135de03fd9174dce184ed329bed3ee688e19 |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| replicas | The number of replicas | int | Any | 1 |
| resources.limits.cpu | The CPU limit | string | Any | 30m |
| resources.limits.memory | The memory limit | string | Any | 50Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| env | Environment variables to add to the Pod | object | Any | See env section |
env

The env section contains the environment variables to add to the Pod.

env section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| area | ISA-95 area name of the line | string | Any | DefaultArea |
| productionLine | ISA-95 line name of the line | string | Any | DefaultProductionLine |
| site | ISA-95 site name of the line | string | Any | testLocation |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |

redis

The redis section contains the configuration of the Redis microservice. This is based on the official Redis Helm chart. For more information about the parameters, see the official documentation.

Only the values that differ from the defaults are listed here.

redis section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| architecture | Redis architecture | string | standalone, replication | standalone |
| auth.existingSecretPasswordKey | Password key to be retrieved from existing secret | string | Any | redispassword |
| auth.existingSecret | The name of the existing secret with Redis credentials | string | Any | redis-secret |
| commonConfiguration | Common configuration to be added into the ConfigMap | string | Any | See commonConfiguration section |
| master.extraFlags | Array with additional command line flags for Redis master | string array | Any | --maxmemory 200mb |
| master.livenessProbe.initialDelaySeconds | The initial delay before the liveness probe starts | int | Any | 5 |
| master.readinessProbe.initialDelaySeconds | The initial delay before the readiness probe starts | int | Any | 120 |
| master.resources.limits.cpu | The CPU limit | string | Any | 100m |
| master.resources.limits.memory | The memory limit | string | Any | 100Mi |
| master.resources.requests.cpu | The CPU request | string | Any | 50m |
| master.resources.requests.memory | The memory request | string | Any | 50Mi |
| metrics.enabled | Start a sidecar prometheus exporter to expose Redis metrics | bool | true, false | true |
| pdb.create | Whether to create a Pod Disruption Budget | bool | true, false | true |
| pdb.minAvailable | Min number of pods that must still be available after the eviction | int | Any | 2 |
| serviceAccount.create | Whether to create a service account | bool | true, false | false |
commonConfiguration

The commonConfiguration section contains the common configuration to be added into the ConfigMap. For more information, see the documentation.

# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
# Disable RDB persistence, AOF persistence already enabled.
save ""
# Backwards compatibility with Redis version 6.*
replica-ignore-disk-write-errors yes

redpanda

The redpanda section contains the configuration of the Kafka broker. This is based on the RedPanda chart. For more information about the parameters, see the official documentation.

Only the values that differ from the defaults are listed here.

redpanda section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| config.cluster.auto_create_topics_enabled | Whether to enable auto creation of topics | bool | true, false | true |
| console | The configuration for RedPanda Console | object | Any | See console section |
| external.type | The type of Service for external access | string | NodePort, LoadBalancer | LoadBalancer |
| fullnameOverride | The full name override | string | Any | united-manufacturing-hub-kafka |
| listeners.kafka.port | The port of the Kafka listener | int | Any | 9092 |
| rbac.enable | Whether to enable RBAC | bool | true, false | true |
| resources.cpu.cores | The number of CPU cores to allocate to the Kafka broker | int | Any | 1 |
| resources.memory.container.max | Maximum memory count for each broker | string | Any | 2Gi |
| resources.memory.enable_memory_locking | Whether to enable memory locking | bool | true, false | true |
| serviceAccount.create | Whether to create a service account | bool | true, false | false |
| statefulset.replicas | The number of brokers | int | Any | 1 |
| storage.persistentVolume.size | The size of the persistent volume | string | Any | 10Gi |
| tls.enabled | Whether to enable TLS | bool | true, false | false |
console

The console section contains the configuration of the RedPanda Console.

For more information about the parameters, see the official documentation.

console section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| console.config.kafka.brokers | The list of Kafka brokers | list | Any | united-manufacturing-hub-kafka:9092 |
| service.port | The port of the Service to expose | int | Any | 8090 |
| service.targetPort | The target port of the Service to expose | int | Any | 8080 |
| service.type | The type of Service to expose | string | ClusterIP, NodePort, LoadBalancer | LoadBalancer |
| serviceAccount.create | Whether to create a service account | bool | true, false | false |

sensorconnect

The sensorconnect section contains the configuration of the Sensorconnect microservice.

sensorconnect section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| additionalSleepTimePerActivePortMs | Additional sleep time between polls for each active port, in milliseconds | float | Any | 0.0 |
| additionalSlowDownMap | JSON map of values that allows to slow down and speed up the polling time of specific sensors | JSON | Any | {} |
| allowSubTwentyMs | Whether to allow sub-20ms polling time. Set to 1 to enable. Not recommended | int | 0, 1 | 0 |
| deviceFinderTimeSec | Time interval in seconds between new device discovery | int | Any | 20 |
| deviceFinderTimeoutSec | Timeout in seconds for device discovery. Never set lower than deviceFinderTimeSec | int | Any | 1 |
| image | The image of the sensorconnect microservice | string | Any | ghcr.io/united-manufacturing-hub/sensorconnect |
| ioddfilepath | The path to the IODD files | string | Any | /ioddfiles |
| lowerPollingTime | The lower polling time in milliseconds | int | Any | 20 |
| maxSensorErrorCount | The maximum number of sensor errors before the sensor is marked as not responding | int | Any | 50 |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| pollingSpeedStepDownMs | The time to subtract from the polling time, in milliseconds, when a sensor is responding | int | Any | 1 |
| pollingSpeedStepUpMs | The time to add to the polling time, in milliseconds, when a sensor is not responding | int | Any | 20 |
| resources.limits.cpu | The CPU limit | string | Any | 100m |
| resources.limits.memory | The memory limit | string | Any | 200Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 75Mi |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| tag | The tag of the sensorconnect microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| upperPollingTime | The upper polling time in milliseconds | int | Any | 1000 |

serviceAccount

The serviceAccount section contains the configuration of the service account. See the Kubernetes documentation for more information.

serviceAccount section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| create | Whether to create a service account | bool | true, false | true |

timescaledb-single

The timescaledb-single section contains the configuration of the TimescaleDB microservice. This is based on the official TimescaleDB Helm chart. For more information about the parameters, see the official documentation.

Only the values that differ from the defaults are listed here.

timescaledb-single section parameters

| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| replicaCount | The number of replicas | int | Any | 1 |
| image.repository | The image of the TimescaleDB microservice | string | Any | ghcr.io/united-manufacturing-hub/timescaledb |
| image.tag | The Timescaledb-ha version | string | Any | pg13.8-ts2.8.0-p1 |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| patroni.postgresql.create_replica_methods | The replica creation method | string array | Any | basebackup |
| postInit | A list of sources that contain post init scripts | object array | Any | See postInit |
| service.primary.type | The type of the primary service | string | ClusterIP, NodePort, LoadBalancer | LoadBalancer |
| serviceAccount.create | Whether to create a service account | bool | true, false | false |
postInit

The postInit parameter is a list of references to sources that contain post init scripts. The scripts are executed after the database is initialized.

postInit:
  - configMap:
      name: {{ resource type="configmap" name="database" }}
      optional: false
  - secret:
      name: {{ resource type="secret" name="database" }}
      optional: false

tulipconnector

The tulipconnector section contains the configuration of the Tulip Connector microservice.

tulipconnector section parameters
Parameter | Description | Type | Allowed values | Default
image.repository | The image of the Tulip Connector microservice | string | Any | ghcr.io/united-manufacturing-hub/tulip-connector
image.tag | The tag of the Tulip Connector microservice. Defaults to latest if not set | string | Any | 0.1.0
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
replicas | The number of Pod replicas | int | Any | 1
env | The environment variables | object | Any | See env
resources.limits.cpu | The CPU limit | string | Any | 30m
resources.limits.memory | The memory limit | string | Any | 50Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 20Mi
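
For example, a minimal sketch that scales the connector and runs it in development mode, using only the parameters from the tables in this section:

tulipconnector:
  replicas: 2
  env:
    mode: dev
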
env

The env section contains the configuration of the environment variables to add to the Pod.

env section parameters
Parameter | Description | Type | Allowed values | Default
mode | In which mode to run the Tulip Connector | string | dev, prod | prod

3.2 - Microservices

This section gives an overview of the microservices that can be found in the United Manufacturing Hub.

There are several microservices that are part of the United Manufacturing Hub. Some of them compose the core of the platform, and are mainly developed by the UMH team, with the addition of some third-party software. Others are maintained by the community, and are used to extend the functionality of the platform.

3.2.1 - Core

This section contains the overview of the Core components of the United Manufacturing Hub.

The microservices in this section are part of the Core of the United Manufacturing Hub. They are mainly developed by the UMH team, with the addition of some third-party software. They are used to provide the core functionality of the platform.

3.2.1.1 - Cache

The technical documentation of the redis microservice, which is used as a cache for the other microservices.

The cache in the United Manufacturing Hub is Redis, a key-value store that is used as a cache for the other microservices.

How it works

Recently used data is stored in the cache to reduce the load on the database. All the microservices that need to access the database first check if the data is available in the cache. If it is, the cached value is used; otherwise, the microservice queries the database and stores the result in the cache.

By default, Redis is configured to run in standalone mode, which means that it will only have one master node.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-redis-master
  • Service:
    • Internal ClusterIP:
      • Redis: united-manufacturing-hub-redis-master at port 6379
      • Headless: united-manufacturing-hub-redis-headless at port 6379
      • Metrics: united-manufacturing-hub-redis-metrics at port 6379
  • ConfigMap:
    • Configuration: united-manufacturing-hub-redis-configuration
    • Health: united-manufacturing-hub-redis-health
    • Scripts: united-manufacturing-hub-redis-scripts
  • Secret: redis-secret
  • PersistentVolumeClaim: redis-data-united-manufacturing-hub-redis-master-0

Configuration

You shouldn’t need to configure the cache manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redis section of the Helm chart values file.

You can consult the Bitnami Redis chart for more information about the available configuration options.
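
As a sketch, assuming you want a larger cache volume, you could override keys from the upstream Bitnami chart in the redis section (the key names below are those of the Bitnami chart; verify them against its documentation before use):

redis:
  architecture: standalone
  master:
    persistence:
      size: 2Gi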

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
ALLOW_EMPTY_PASSWORD | Allow empty password | bool | true, false | false
BITNAMI_DEBUG | Specify if debug values should be set | bool | true, false | false
REDIS_PASSWORD | Redis password | string | Any | Random UUID
REDIS_PORT | Redis port number | int | Any | 6379
REDIS_REPLICATION_MODE | Redis replication mode | string | master, slave | master
REDIS_TLS_ENABLED | Enable TLS | bool | true, false | false

3.2.1.2 - Database

The technical documentation of the database microservice, which stores the data of the application.

The database microservice is the central component of the United Manufacturing Hub and is based on TimescaleDB, an open-source relational database built for handling time-series data. TimescaleDB is designed to provide scalable and efficient storage, processing, and analysis of time-series data.

You can find more information on the datamodel of the database in the Data Model section, and read about the choice to use TimescaleDB in the blog article.

How it works

When deployed, the database microservice will create two databases, with the related usernames and passwords:

  • grafana: This database is used by Grafana to store the dashboards and other data.
  • factoryinsight: This database is the main database of the United Manufacturing Hub. It contains all the data that is collected by the microservices.

Then, it creates the tables based on the database schema.

If you want to learn more about how TimescaleDB works, you can read the TimescaleDB documentation.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-timescaledb
  • Service:
    • Internal ClusterIP for the replicas: united-manufacturing-hub-replica at port 5432
    • Internal ClusterIP for the config: united-manufacturing-hub-config at port 8008
    • External LoadBalancer: united-manufacturing-hub at port 5432
  • ConfigMap:
    • Patroni: united-manufacturing-hub-timescaledb-patroni
    • Post init: timescale-post-init
    • Postgres BackRest: united-manufacturing-hub-timescaledb-pgbackrest
    • Scripts: united-manufacturing-hub-timescaledb-scripts
  • Secret:
    • Certificate: united-manufacturing-hub-certificate
    • Patroni credentials: united-manufacturing-hub-credentials
    • Users passwords: timescale-post-init-pw
  • PersistentVolumeClaim:
    • Data: storage-volume-united-manufacturing-hub-timescaledb-0
    • WAL-E: wal-volume-united-manufacturing-hub-timescaledb-0

Configuration

There is only one parameter that usually needs to be changed: the password used to connect to the database. To do so, set the value of the db_password key in the _000_commonConfig.datastorage section of the Helm chart values file.
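
For example (a minimal sketch; replace the value with your own secret):

_000_commonConfig:
  datastorage:
    db_password: changeme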

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
BOOTSTRAP_FROM_BACKUP | Whether to bootstrap the database from a backup or not. | int | 0, 1 | 0
PATRONI_KUBERNETES_LABELS | The labels to use to find the pods of the StatefulSet. | string | Any | {app: united-manufacturing-hub-timescaledb, cluster-name: united-manufacturing-hub, release: united-manufacturing-hub}
PATRONI_KUBERNETES_NAMESPACE | The namespace in which the StatefulSet is deployed. | string | Any | united-manufacturing-hub
PATRONI_KUBERNETES_POD_IP | The IP address of the pod. | string | Any | Random IP
PATRONI_KUBERNETES_PORTS | The ports to use to connect to the pods. | string | Any | [{"name": "postgresql", "port": 5432}]
PATRONI_NAME | The name of the pod. | string | Any | united-manufacturing-hub-timescaledb-0
PATRONI_POSTGRESQL_CONNECT_ADDRESS | The address to use to connect to the database. | string | Any | $(PATRONI_KUBERNETES_POD_IP):5432
PATRONI_POSTGRESQL_DATA_DIR | The directory where the database data is stored. | string | Any | /var/lib/postgresql/data
PATRONI_REPLICATION_PASSWORD | The password to use to connect to the database as a replica. | string | Any | Random 16 characters
PATRONI_REPLICATION_USERNAME | The username to use to connect to the database as a replica. | string | Any | standby
PATRONI_RESTAPI_CONNECT_ADDRESS | The address to use to connect to the REST API. | string | Any | $(PATRONI_KUBERNETES_POD_IP):8008
PATRONI_SCOPE | The name of the cluster. | string | Any | united-manufacturing-hub
PATRONI_SUPERUSER_PASSWORD | The password to use to connect to the database as the superuser. | string | Any | Random 16 characters
PATRONI_admin_OPTIONS | The options to use for the admin user. | string | Comma separated list of options | createrole,createdb
PATRONI_admin_PASSWORD | The password to use to connect to the database as the admin user. | string | Any | Random 16 characters
PGBACKREST_CONFIG | The path to the configuration file for Postgres BackRest. | string | Any | /etc/pgbackrest/pgbackrest.conf
PGDATA | The directory where the database data is stored. | string | Any | $(PATRONI_POSTGRESQL_DATA_DIR)
PGHOST | The directory of the running database | string | Any | /var/run/postgresql

3.2.1.3 - Factoryinsight

The technical documentation of the Factoryinsight microservice, which exposes a set of APIs to access the data from the database.

Factoryinsight is a microservice that provides a set of REST APIs to access the data from the database. It is particularly useful to calculate the Key Performance Indicators (KPIs) of the factories.

How it works

Factoryinsight exposes REST APIs to access the data from the database or calculate the KPIs. By default, it’s only accessible from the internal network of the cluster, but it can be configured to be accessible from the external network.

The APIs require authentication, which can be either Basic Auth or a Bearer token. Both of these can be found in the Secret factoryinsight-secret.

API documentation

Kubernetes resources

  • Deployment: united-manufacturing-hub-factoryinsight-deployment
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-factoryinsight-service at port 80
  • Secret: factoryinsight-secret

Configuration

You shouldn’t need to configure Factoryinsight manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the factoryinsight section of the Helm chart values file.

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | ""
CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | ""
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DRY_RUN | If enabled, data won't be stored in the database | bool | true, false | false
FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID
FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight
INSECURE_NO_AUTH | If enabled, no authentication is required for the REST API. Not recommended for production | bool | true, false | false
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinsight
POSTGRES_DATABASE | Specifies the database name to use | string | Any | factoryinsight
POSTGRES_HOST | Specifies the database DNS name or IP address | string | Any | united-manufacturing-hub
POSTGRES_PASSWORD | Specifies the database password to use | string | Any | changeme
POSTGRES_PORT | Specifies the database port | int | Valid port number | 5432
POSTGRES_USER | Specifies the database user to use | string | Any | factoryinsight
REDIS_PASSWORD | Password to access the redis sentinel | string | Any | Random UUID
REDIS_URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 2

3.2.1.4 - Grafana

The technical documentation of the grafana microservice, which is a web application that provides visualization and analytics capabilities.

The grafana microservice is a web application that provides visualization and analytics capabilities. Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored.

It has a rich ecosystem of plugins that allow you to extend its functionality beyond the core features.

How it works

Grafana is a web application that can be accessed through a web browser. It lets you create dashboards that can be used to visualize data from the database.

Thanks to some custom datasource plugins, Grafana can use the various APIs of the United Manufacturing Hub to query the database and display useful information.

Kubernetes resources

  • Deployment: united-manufacturing-hub-grafana
  • Service:
    • External LoadBalancer: united-manufacturing-hub-grafana at port 8080
  • ConfigMap: united-manufacturing-hub-grafana
  • Secret: grafana-secret
  • PersistentVolumeClaim: united-manufacturing-hub-grafana

Configuration

Grafana is configured through its user interface. The default credentials are found in the grafana-secret Secret.

The Grafana installation that is provided by the United Manufacturing Hub is shipped with a set of preinstalled plugins:

  • ACE.SVG by Andrew Rodgers
  • Button Panel by CloudSpout LLC
  • Button Panel by UMH Systems GmbH
  • Discrete by Natel Energy
  • Dynamic Text by Marcus Olsson
  • FlowCharting by agent
  • Pareto Chart by isaozler
  • Pie Chart (old) by Grafana Labs
  • Timepicker Buttons Panel by williamvenner
  • UMH Datasource by UMH Systems GmbH
  • UMH Datasource v2 by UMH Systems GmbH
  • Untimely by factry
  • Worldmap Panel by Grafana Labs

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | string | Any | Base64 encoded string
FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | string | Any | united-manufacturing-hub-factoryinsight-service
FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | string | Any | factoryinsight
FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | string | Any | Random UUID
GF_PATHS_DATA | The path where Grafana will store its data | string | Any | /var/lib/grafana/data
GF_PATHS_LOGS | The path where Grafana will store its logs | string | Any | /var/log/grafana
GF_PATHS_PLUGINS | The path where Grafana will store its plugins | string | Any | /var/lib/grafana/plugins
GF_PATHS_PROVISIONING | The path where Grafana will store its provisioning configuration | string | Any | /etc/grafana/provisioning
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource,umh-factoryinput-panel,umh-v2-datasource
GF_SECURITY_ADMIN_PASSWORD | The password of the admin user | string | Any | Random UUID
GF_SECURITY_ADMIN_USER | The username of the admin user | string | Any | admin

3.2.1.5 - Kafka Bridge

The technical documentation of the kafka-bridge microservice, which acts as a communication bridge between two Kafka brokers.

Kafka-bridge is a microservice that connects two Kafka brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.

How it works

This microservice has two modes of operation:

  • High Integrity: This mode is used for topics that are critical for the user. It is guaranteed that no messages are lost. This is achieved by committing the message only after it has been successfully inserted into the database. Usually all the topics are forwarded in this mode, except for processValue, processValueString and raw messages.
  • High Throughput: This mode is used for topics that are not critical for the user. They are forwarded as fast as possible, but it is possible that messages are lost, for example if the database struggles to keep up. Usually only the processValue, processValueString and raw messages are forwarded in this mode.

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkabridge
  • Secret:
    • Local broker: united-manufacturing-hub-kafkabridge-secrets-local
    • Remote broker: united-manufacturing-hub-kafkabridge-secrets-remote

Configuration

You can configure the kafka-bridge microservice by setting the following values in the _000_commonConfig.kafkaBridge section of the Helm chart values file.

  kafkaBridge:
    enabled: true
    remotebootstrapServer: ""
    topicmap:
      - bidirectional: false
        name: HighIntegrity
        send_direction: to_remote
        topic: ^ia\..+\..+\..+\.((addMaintenanceActivity)|(addOrder)|(addParentToChild)|(addProduct)|(addShift)|(count)|(deleteShiftByAssetIdAndBeginTimestamp)|(deleteShiftById)|(endOrder)|(modifyProducedPieces)|(modifyState)|(productTag)|(productTagString)|(recommendation)|(scrapCount)|(startOrder)|(state)|(uniqueProduct)|(scrapUniqueProduct))$
      - bidirectional: false
        name: HighThroughput
        send_direction: to_remote
        topic: ^ia\..+\..+\..+\.(processValue).*$

Topic Map schema

The topic map is a list of objects, each object represents a topic (or a set of topics) that should be forwarded. The following JSON schema describes the structure of the topic map:

{
    "$schema": "http://json-schema.org/draft-07/schema",
    "type": "array",
    "title": "Kafka Topic Map",
    "description": "This schema validates valid Kafka topic maps.",
    "default": [],
    "additionalItems": true,
    "items": {
        "$id": "#/items",
        "anyOf": [
            {
                "$id": "#/items/anyOf/0",
                "type": "object",
                "title": "Unidirectional Kafka Topic Map with send direction",
                "description": "This schema validates entries, that are unidirectional and have a send direction.",
                "default": {},
                "examples": [
                    {
                        "name": "HighIntegrity",
                        "topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
                        "bidirectional": false,
                        "send_direction": "to_remote"
                    }
                ],
                "required": [
                    "name",
                    "topic",
                    "bidirectional",
                    "send_direction"
                ],
                "properties": {
                    "name": {
                        "$id": "#/items/anyOf/0/properties/name",
                        "type": "string",
                        "title": "Entry Name",
                        "description": "Name of the map entry, only used for logging & tracing.",
                        "default": "",
                        "examples": [
                            "HighIntegrity"
                        ]
                    },
                    "topic": {
                        "$id": "#/items/anyOf/0/properties/topic",
                        "type": "string",
                        "title": "The topic to listen on",
                        "description": "The topic to listen on, this can be a regular expression.",
                        "default": "",
                        "examples": [
                            "^ia\\..+\\..+\\..+\\.(?!processValue).+$"
                        ]
                    },
                    "bidirectional": {
                        "$id": "#/items/anyOf/0/properties/bidirectional",
                        "type": "boolean",
                        "title": "Is the transfer bidirectional?",
                        "description": "When set to true, the bridge will consume and produce from both brokers",
                        "default": false,
                        "examples": [
                            false
                        ]
                    },
                    "send_direction": {
                        "$id": "#/items/anyOf/0/properties/send_direction",
                        "type": "string",
                        "title": "Send direction",
                        "description": "Can be either 'to_remote' or 'to_local'",
                        "default": "",
                        "examples": [
                            "to_remote",
                            "to_local"
                        ]
                    }
                },
                "additionalProperties": true
            },
            {
                "$id": "#/items/anyOf/1",
                "type": "object",
                "title": "Bi-directional Kafka Topic Map with send direction",
                "description": "This schema validates entries, that are bi-directional.",
                "default": {},
                "examples": [
                    {
                        "name": "HighIntegrity",
                        "topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
                        "bidirectional": true
                    }
                ],
                "required": [
                    "name",
                    "topic",
                    "bidirectional"
                ],
                "properties": {
                    "name": {
                        "$id": "#/items/anyOf/1/properties/name",
                        "type": "string",
                        "title": "Entry Name",
                        "description": "Name of the map entry, only used for logging & tracing.",
                        "default": "",
                        "examples": [
                            "HighIntegrity"
                        ]
                    },
                    "topic": {
                        "$id": "#/items/anyOf/1/properties/topic",
                        "type": "string",
                        "title": "The topic to listen on",
                        "description": "The topic to listen on, this can be a regular expression.",
                        "default": "",
                        "examples": [
                            "^ia\\..+\\..+\\..+\\.(?!processValue).+$"
                        ]
                    },
                    "bidirectional": {
                        "$id": "#/items/anyOf/1/properties/bidirectional",
                        "type": "boolean",
                        "title": "Is the transfer bidirectional?",
                        "description": "When set to true, the bridge will consume and produce from both brokers",
                        "default": false,
                        "examples": [
                            true
                        ]
                    }
                },
                "additionalProperties": true
            }
        ]
    },
    "examples": [
   {
      "name":"HighIntegrity",
      "topic":"^ia\\..+\\..+\\..+\\.(?!processValue).+$",
      "bidirectional":true
   },
   {
      "name":"HighThroughput",
      "topic":"^ia\\..+\\..+\\..+\\.(processValue).*$",
      "bidirectional":false,
      "send_direction":"to_remote"
   }
]
}

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library, do not enable in production | string | true, false | false
KAFKA_GROUP_ID_SUFFIX | Identifier appended to the Kafka group ID, usually a serial number | string | Any | default
KAFKA_SSL_KEY_PASSWORD_LOCAL | Password for the SSL key of the local broker | string | Any | ""
KAFKA_SSL_KEY_PASSWORD_REMOTE | Password for the SSL key of the remote broker | string | Any | ""
KAFKA_TOPIC_MAP | A JSON map of the Kafka topics that should be forwarded | JSON | See Topic Map schema | {}
KAKFA_USE_SSL | Enables the use of SSL for the Kafka connection | string | true, false | false
LOCAL_KAFKA_BOOTSTRAP_SERVER | URL of the local Kafka broker, port is required | string | Any valid URL | united-manufacturing-hub-kafka:9092
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafka-bridge
REMOTE_KAFKA_BOOTSTRAP_SERVER | URL of the remote Kafka broker | string | Any valid URL | ""
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

3.2.1.6 - Kafka Broker

The technical documentation of the kafka-broker microservice, which handles the communication between the microservices.

The Kafka broker in the United Manufacturing Hub is RedPanda, a Kafka-compatible event streaming platform. It’s used to store and process messages, in order to stream real-time data between the microservices.

How it works

RedPanda is a distributed system that is made up of a cluster of brokers, designed for maximum performance and reliability. It does not depend on external systems like ZooKeeper, as it’s shipped as a single binary.

Read more about RedPanda in the official documentation.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-kafka
  • Service:
    • Internal ClusterIP (headless): united-manufacturing-hub-kafka
    • External NodePort: united-manufacturing-hub-kafka-external at port 9094 for the Kafka API listener, port 9644 for the Admin API listener, port 8083 for the HTTP Proxy listener, and port 8081 for the Schema Registry listener.
  • ConfigMap: united-manufacturing-hub-kafka
  • Secret: united-manufacturing-hub-kafka-sts-lifecycle
  • PersistentVolumeClaim: datadir-united-manufacturing-hub-kafka-0

Configuration

You shouldn’t need to configure the Kafka broker manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redpanda section of the Helm chart values file.
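
For example, a hypothetical override that increases the broker's storage (the key layout follows the upstream Redpanda chart; consult its documentation for the full set of options):

redpanda:
  storage:
    persistentVolume:
      size: 10Gi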

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
HOST_IP | The IP address of the host machine. | string | Any | Random IP
POD_IP | The IP address of the pod. | string | Any | Random IP
SERVICE_NAME | The name of the service. | string | Any | united-manufacturing-hub-kafka

3.2.1.7 - Kafka Console

The technical documentation of the kafka-console microservice, which provides a GUI to interact with the Kafka broker.

Kafka-console uses Redpanda Console to help you manage and debug your Kafka workloads effortlessly.

With it, you can explore your Kafka topics, view messages, list the active consumers, and more.

How it works

You can access the Kafka console via its Service.

It’s automatically connected to the Kafka broker, so you can start using it right away. You can view the Kafka broker configuration in the Broker tab, and explore the topics in the Topics tab.

Kubernetes resources

  • Deployment: united-manufacturing-hub-console
  • Service:
    • External LoadBalancer: united-manufacturing-hub-console at port 8090
  • ConfigMap: united-manufacturing-hub-console
  • Secret: united-manufacturing-hub-console

Configuration

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
LOGIN_JWTSECRET | The secret used to authenticate the communication to the backend. | string | Any | Random string

3.2.1.8 - Kafka to Postgresql

The technical documentation of the kafka-to-postgresql microservice, which consumes messages from a Kafka broker and writes them in a PostgreSQL database.

Kafka-to-postgresql is a microservice responsible for consuming Kafka messages and inserting their payload into a PostgreSQL database. Take a look at the Datamodel to see how the data is structured.

This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.

How it works

By default, kafka-to-postgresql sets up two Kafka consumers, one for the High Integrity topics and one for the High Throughput topics.

The graphic below shows the program flow of the microservice.

Kafka-to-postgres-flow

High integrity

The High integrity topics are forwarded to the database in a synchronous way. This means that the microservice will wait for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is guaranteed to be inserted into the database, even though it might take a while.

Most of the topics are forwarded in this mode.

The picture below shows the program flow of the high integrity mode.

high-integrity-data-flow

High throughput

The High throughput topics are forwarded to the database in an asynchronous way. This means that the microservice will not wait for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is not guaranteed to be inserted into the database, but the microservice will try to insert it as soon as possible. This mode is used for the topics that are expected to have a high throughput.

The topics that are forwarded in this mode are processValue, processValueString and all the raw topics.

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkatopostgresql
  • Secret: united-manufacturing-hub-kafkatopostgresql-certificates

Configuration

You shouldn’t need to configure kafka-to-postgresql manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the kafkatopostgresql section of the Helm chart values file.

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DRY_RUN | If set to true, the microservice will not write to the database | bool | true, false | false
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | Any | ""
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MEMORY_REQUEST | Memory request for the message cache | string | Any | 50Mi
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafkatopostgresql
POSTGRES_DATABASE | The name of the PostgreSQL database | string | Any | factoryinsight
POSTGRES_HOST | Hostname of the PostgreSQL database | string | Any | united-manufacturing-hub
POSTGRES_PASSWORD | The password to use for PostgreSQL connections | string | Any | changeme
POSTGRES_SSLMODE | Sets the SSL mode for the PostgreSQL connection | string | Any | require
POSTGRES_USER | The username to use for PostgreSQL connections | string | Any | factoryinsight

3.2.1.9 - MQTT Bridge

The technical documentation of the mqtt-bridge microservice, which acts as a communication bridge between two MQTT brokers.

MQTT-bridge is a microservice that connects two MQTT brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.

How it works

This microservice subscribes to topics on the local broker and publishes the messages to the remote broker, while also subscribing to topics on the remote broker and publishing the messages to the local broker.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-mqttbridge
  • Secret: united-manufacturing-hub-mqttbridge-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-mqttbridge-claim

Configuration

You can configure the URL of the remote MQTT broker that MQTT-bridge should connect to by setting the value of the remoteBrokerUrl parameter in the _000_commonConfig.mqttBridge section of the Helm chart values file.
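
For example (a minimal sketch; the URL shown is the in-cluster default from the table below, so replace it with the address of your actual remote broker):

_000_commonConfig:
  mqttBridge:
    remoteBrokerUrl: ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883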

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
BRIDGE_ONE_WAY | Whether to enable one-way communication, from local to remote | bool | true, false | true
INSECURE_SKIP_VERIFY_LOCAL | Skip TLS certificate verification for the local broker | bool | true, false | true
INSECURE_SKIP_VERIFY_REMOTE | Skip TLS certificate verification for the remote broker | bool | true, false | true
LOCAL_BROKER_SSL_ENABLED | Whether to enable SSL for the local MQTT broker | bool | true, false | true
LOCAL_BROKER_URL | URL for the local MQTT broker | string | Any | ssl://united-manufacturing-hub-mqtt:8883
LOCAL_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
LOCAL_PUB_TOPIC | Local MQTT topic to publish to | string | Any | ia
LOCAL_SUB_TOPIC | Local MQTT topic to subscribe to | string | Any | ia/factoryinsight
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
REMOTE_BROKER_SSL_ENABLED | Whether to enable SSL for the remote MQTT broker | bool | true, false | true
REMOTE_BROKER_URL | URL for the remote MQTT broker | string | Any | ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883
REMOTE_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
REMOTE_PUB_TOPIC | Remote MQTT topic to publish to | string | Any | ia/factoryinsight
REMOTE_SUB_TOPIC | Remote MQTT topic to subscribe to | string | Any | ia

3.2.1.10 - MQTT Broker

The technical documentation of the mqtt-broker microservice, which forwards MQTT messages between the other microservices.

The MQTT broker in the United Manufacturing Hub is HiveMQ and is customized to fit the needs of the stack. It’s a core component of the stack and is used to communicate between the different microservices.

How it works

The MQTT broker is responsible for receiving MQTT messages from the different microservices and forwarding them to the MQTT Kafka bridge.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-hivemqce
  • Service:
    • Internal ClusterIP:
      • HiveMQ local: united-manufacturing-hub-hivemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
      • VerneMQ (for backwards compatibility): united-manufacturing-hub-vernemq at port 1883 (MQTT) and 8883 (MQTT over TLS)
      • VerneMQ local (for backwards compatibility): united-manufacturing-hub-vernemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
    • External LoadBalancer: united-manufacturing-hub-mqtt at port 1883 (MQTT) and 8883 (MQTT over TLS)
  • ConfigMap:
    • Configuration: united-manufacturing-hub-hivemqce-hive
    • Credentials: united-manufacturing-hub-hivemqce-extension
  • Secret: united-manufacturing-hub-hivemqce-secret-keystore
  • PersistentVolumeClaim:
    • Data: united-manufacturing-hub-hivemqce-claim-data
    • Extensions: united-manufacturing-hub-hivemqce-claim-extensions

Configuration

Most of the configuration is done through the XML files in the ConfigMaps. The default configuration should be sufficient for most use cases.

The HiveMQ installation of the United Manufacturing Hub comes with a set of preinstalled extensions.

If you want to add more extensions, or to change the configuration, visit the HiveMQ documentation.

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
HIVEMQ_ALLOW_ALL_CLIENTS | Whether to allow all clients to connect to the broker | bool | true, false | true

3.2.1.11 - MQTT Kafka Bridge

The technical documentation of the mqtt-kafka-bridge microservice, which transfers messages from MQTT brokers to Kafka Brokers and vice versa.

Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers and Kafka brokers, transferring messages from one to the other and vice versa.

This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.

Since version 0.9.10, it allows all raw messages, even if their content is not in a valid JSON format.

How it works

Mqtt-kafka-bridge consumes topics from a message broker, translates them to the proper format and publishes them to the other message broker.

Kubernetes resources

  • Deployment: united-manufacturing-hub-mqttkafkabridge
  • Secret:
    • Kafka: united-manufacturing-hub-mqttkafkabridge-kafka-secrets
    • MQTT: united-manufacturing-hub-mqttkafkabridge-mqtt-secrets

Configuration

You shouldn’t need to configure mqtt-kafka-bridge manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the mqttkafkabridge section of the Helm chart values file.

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
INSECURE_SKIP_VERIFY | Skip TLS certificate verification | bool | true, false | true
KAFKA_BASE_TOPIC | The Kafka base topic | string | Any | ia
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_LISTEN_TOPIC | Kafka topic to subscribe to. Accepts regex values | string | Any | ^ia.+
KAFKA_SENDER_THREADS | Number of threads used to send messages to Kafka | int | Any | 1
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MESSAGE_LRU_SIZE | Size of the LRU cache used to store messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-mqttkafkabridge
MQTT_BROKER_URL | The MQTT broker URL | string | Any | united-manufacturing-hub-mqtt:1883
MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
MQTT_SENDER_THREADS | Number of threads used to send messages to MQTT | int | Any | 1
MQTT_TOPIC | MQTT topic to subscribe to. Accepts regex values | string | Any | ia/#
POD_NAME | Name of the pod. Used for tracing | string | Any | united-manufacturing-hub-mqttkafkabridge-Random-ID
RAW_MESSSAGE_LRU_SIZE | Size of the LRU cache used to store raw messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

3.2.1.12 - Node-RED

The technical documentation of the nodered microservice, which wires together hardware devices, APIs and online services.

Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the Node-RED library.

How it works

Node-RED is a JavaScript-based tool that can be used to create flows that interact with the other microservices in the United Manufacturing Hub or external services.

See our guides for Node-RED to learn more about how to use it.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-nodered
  • Service:
    • External LoadBalancer: united-manufacturing-hub-nodered-service at port 1880
  • ConfigMap:
    • Configuration: united-manufacturing-hub-nodered-config
    • Flows: united-manufacturing-hub-nodered-flows
  • Secret: united-manufacturing-hub-nodered-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-nodered-claim

Configuration

You can enable the nodered microservice and decide if you want to use the default flows in the _000_commonConfig.dataprocessing.nodered section of the Helm chart values.

All the other values are set by default and you can find them in the Danger Zone section of the Helm chart values.
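
A minimal sketch of that section (assuming the toggles are named enabled and defaultFlows, as described above; check the Helm chart values file for the exact key names):

_000_commonConfig:
  dataprocessing:
    nodered:
      enabled: true
      defaultFlows: false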

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
NODE_RED_ENABLE_SAFE_MODE | Enable safe mode, useful in case of broken flows | boolean | true, false | false
TZ | The timezone used by Node-RED | string | Any | Europe/Berlin

3.2.1.13 - Sensorconnect

The technical documentation of the sensorconnect microservice, which reads data from sensors and sends them to the MQTT or Kafka broker.

Sensorconnect automatically detects ifm gateways connected to the network and reads data from the connected IO-Link sensors.

How it works

Sensorconnect continuously scans the given IP range for gateways, making it effectively a plug-and-play solution. Once a gateway is found, it automatically downloads the IODD files for the connected sensors and starts reading the data at the configured interval. It then processes the data and sends it to the MQTT or Kafka broker, to be consumed by the other microservices.

If you want to learn more about how to use sensors in your assets, check out the retrofitting section of the UMH Learn website.

IODD files

The IODD files are used to describe the sensors connected to the gateway. They contain information about the data type, the unit of measurement, the minimum and maximum values, etc. The IODD files are downloaded automatically from IODDFinder once a sensor is found, and are stored in a Persistent Volume. If downloading from the internet is not possible, for example in a closed network, you can download the IODD files manually and store them in the folder specified by the IODD_FILE_PATH environment variable.

If no IODD file is found for a sensor, the data will not be processed, but sent to the broker as-is.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-sensorconnect
  • Secret:
    • Kafka: united-manufacturing-hub-sensorconnect-kafka-secrets
    • MQTT: united-manufacturing-hub-sensorconnect-mqtt-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-sensorconnect-claim

Configuration

You can configure the IP range to scan for gateways, and which message broker to use, by setting the values of the parameters in the _000_commonConfig.datasources.sensorconnect section of the Helm chart values file.

The default values of the other parameters are usually good for most use cases, but you can change them in the Danger Zone section of the Helm chart values file.
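
A minimal sketch of that section (the iprange key name is an assumption; the value shown follows the IP_RANGE default from the table below):

_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      iprange: 192.168.10.1/24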

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
ADDITIONAL_SLEEP_TIME_PER_ACTIVE_PORT_MS | Additional sleep time between pollings for each active port | float | Any | 0.0
ADDITIONAL_SLOWDOWN_MAP | JSON map of values, allows to slow down and speed up the polling time of specific sensors | JSON | See below | []
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DEVICE_FINDER_TIMEOUT_SEC | HTTP timeout in seconds for finding new devices | int | Any | 1
DEVICE_FINDER_TIME_SEC | Time interval in seconds for finding new devices | int | Any | 20
IODD_FILE_PATH | Filesystem path where to store IODD files | string | Any valid Unix path | /ioddfiles
IP_RANGE | The IP range to scan for new sensors | string | Any valid IP in CIDR notation | 192.168.10.1/24
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker. Port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD | The encrypted password of the SSL key. If empty, no password is used | string | Any | ""
KAFKA_USE_SSL | Set to true to use SSL encryption for the connection to the Kafka broker | string | true, false | false
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
LOWER_POLLING_TIME_MS | Time in milliseconds to define the lower bound of time between sensor polling | int | Any | 20
MAX_SENSOR_ERROR_COUNT | Amount of errors before a sensor is temporarily disabled | int | Any | 50
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect
MQTT_BROKER_URL | URL of the MQTT broker. Port is required | string | Any | united-manufacturing-hub-mqtt:1883
MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
POD_NAME | Name of the pod (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect-0
POLLING_SPEED_STEP_DOWN_MS | Time in milliseconds subtracted from the polling interval after a successful polling | int | Any | 1
POLLING_SPEED_STEP_UP_MS | Time in milliseconds added to the polling interval after a failed polling | int | Any | 20
SENSOR_INITIAL_POLLING_TIME_MS | Amount of time in milliseconds before starting to request sensor data. Must be higher than LOWER_POLLING_TIME_MS | int | Any | 100
SUB_TWENTY_MS | Set to 1 to allow LOWER_POLLING_TIME_MS of under 20 ms. This is not recommended, as it might lead to the gateway becoming unresponsive until a manual reboot | int | 0, 1 | 0
TEST | If enabled, the microservice will use a test IODD file from the filesystem to use with a mocked sensor. Only useful for development. | string | true, false | false
TRANSMITTERID | Serial number of the cluster (used for tracing) | string | Any | default
UPPER_POLLING_TIME_MS | Time in milliseconds to define the upper bound of time between sensor polling | int | Any | 1000
USE_KAFKA | If enabled, uses Kafka as a message broker | string | true, false | true
USE_MQTT | If enabled, uses MQTT as a message broker | string | true, false | false

Slowdown map

The ADDITIONAL_SLOWDOWN_MAP environment variable allows you to slow down and speed up the polling time of specific sensors. It is a JSON array of values, with the following structure:

[
  {
    "serialnumber": "000200610104",
    "slowdown_ms": -10
  },
  {
    "url": "http://192.168.0.13",
    "slowdown_ms": 20
  },
  {
    "productcode": "AL13500",
    "slowdown_ms": 20.01
  }
]

3.2.2 - Community

This section contains the overview of the community-supported components of the United Manufacturing Hub used to extend the functionality of the platform.

The microservices in this section are not part of the Core of the United Manufacturing Hub, either because they are still in development, are deprecated, or are only supported by the community. They can be used to extend the functionality of the platform.

It is not recommended to use these microservices in production as they might be unstable or not supported anymore.

3.2.2.1 - Barcodereader

The technical documentation of the barcodereader microservice, which reads barcodes and sends the data to the Kafka broker.

This microservice is still in development and is not considered stable for production use.

Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.

How it works

Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.

Kubernetes resources

  • Deployment: united-manufacturing-hub-barcodereader
  • Secret: united-manufacturing-hub-barcodereader-secrets

Configuration

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
ASSET_ID | The asset ID, which is used for the topic structure | string | Any | barcodereader
CUSTOMER_ID | The customer ID, which is used for the topic structure | string | Any | raw
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
INPUT_DEVICE_NAME | The name of the USB device to use | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner
INPUT_DEVICE_PATH | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty | string | Valid Unix device path | ""
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
LOCATION | The location, which is used for the topic structure | string | Any | barcodereader
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-barcodereader
SCAN_ONLY | Prevent message broadcasting if enabled | bool | true, false | false
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

3.2.2.2 - Factoryinput

The technical documentation of the factoryinput microservice, which provides REST endpoints for MQTT messages via HTTP requests.

This microservice is still in development and is not considered stable for production use.

Factoryinput provides REST endpoints for MQTT messages via HTTP requests.

This microservice is typically accessed via grafana-proxy.

How it works

The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.

The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST request method. The customer, location, asset and value parameters are all strings and are used to build the MQTT topic. The body of the HTTP request is used as the MQTT payload.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-factoryinput
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-factoryinput-service at port 80
  • Secret: factoryinput-secret

Configuration

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
BROKER_URL | URL to the broker | string | Any | ssl://united-manufacturing-hub-mqtt:8883
CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | ""
CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | ""
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
FACTORYINPUT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID
FACTORYINPUT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MQTT_QUEUE_HANDLER | Number of queue workers to spawn | int | 0-65535 | 10
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
POD_NAME | Name of the pod. Used for tracing | string | Any | united-manufacturing-hub-factoryinput-0
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 1

3.2.2.3 - Grafana Proxy

The technical documentation of the grafana-proxy microservice, which proxies request from Grafana to the backend services.

This microservice is still in development and is not considered stable for production use.

How it works

The grafana-proxy microservice serves an HTTP REST endpoint located at /api/v1/{service}/{data}. The service parameter specifies the backend service to which the request should be proxied, like factoryinput or factoryinsight. The data parameter specifies the API endpoint to forward to the backend service. The body of the HTTP request is used as the payload for the proxied request.

Kubernetes resources

  • Deployment: united-manufacturing-hub-grafanaproxy
  • Service:
    • External LoadBalancer: united-manufacturing-hub-grafanaproxy-service at port 2096

Configuration

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
FACTORYINPUT_BASE_URL | URL of factoryinput | string | Any | http://united-manufacturing-hub-factoryinput-service
FACTORYINPUT_KEY | Specifies the password for the admin user for factoryinput | string | Any | Random UUID
FACTORYINPUT_USER | Specifies the admin user for factoryinput | string | Any | factoryinput
FACTORYINSIGHT_BASE_URL | URL of factoryinsight | string | Any | http://united-manufacturing-hub-factoryinsight-service
MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinput
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 1

3.2.2.4 - Kafka State Detector

The technical documentation of the kafka-state-detector microservice, which detects the state of the Kafka broker.
This microservice is still in development and is not considered stable for production use.

How it works

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkastatedetector
  • Secret: united-manufacturing-hub-kafkastatedetector-secrets

Configuration

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
ACTIVITY_ENABLED | Controls whether to check the activity of the Kafka broker | string | true, false | true
ANOMALY_ENABLED | Controls whether to check for anomalies in the Kafka broker | string | true, false | true
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | Any | ""
KAKFA_USE_SSL | Enables the use of SSL for the Kafka connection | string | true, false | false
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafkastatedetector
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default

3.2.2.5 - MQTT Simulator

The technical documentation of the iotsensorsmqtt microservice, which simulates sensors sending data to the MQTT broker.

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.

The IoTSensors MQTT Simulator is a microservice that simulates sensors sending data to the MQTT broker. You can read the full documentation on the GitHub repository.

How it works

The microservice publishes messages on the topic ia/raw/development/ioTSensors/, creating a subtopic for each simulation. The subtopics are the names of the simulations, which are Temperature, Humidity, and Pressure. The values are calculated using a normal distribution with a mean and standard deviation that can be configured.

Kubernetes resources

  • Deployment: united-manufacturing-hub-iotsensorsmqtt
  • ConfigMap: united-manufacturing-hub-iotsensors-mqtt

Configuration

You can change the configuration of the microservice by updating the config.json file in the ConfigMap.

3.2.2.6 - MQTT to Postgresql

The technical documentation of the mqtt-to-postgresql microservice, which consumes messages from an MQTT broker and writes them in a PostgreSQL database.

If you landed here from Google, you might want to check out either the architecture of the United Manufacturing Hub or our knowledge website for more information on the general topics of IT, OT and IIoT.

This microservice is deprecated and should not be used anymore in production. Please use kafka-to-postgresql instead.

How it works

The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves the values of the messages on the topic ia/# in the database.

3.2.2.7 - OPCUA Simulator

The technical documentation of the opcua-simulator microservice, which simulates OPCUA devices.

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.

How it works

The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read the full documentation on the GitHub repository.

You can then connect to the simulated OPCUA server via Node-RED and read the values of the simulated devices. Learn more about how to connect the OPCUA simulator to Node-RED in our guide.

Kubernetes resources

  • Deployment: united-manufacturing-hub-opcuasimulator-deployment
  • Service:
    • External LoadBalancer: united-manufacturing-hub-opcuasimulator-service at port 46010
  • ConfigMap: united-manufacturing-hub-opcuasimulator-config

Configuration

You can change the configuration of the microservice by updating the config.json file in the ConfigMap.

3.2.2.8 - PackML Simulator

The technical documentation of the packml-simulator microservice, which simulates a manufacturing line using PackML over MQTT.

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.

PackML MQTT Simulator is a virtual line that interfaces using PackML implemented over MQTT. It implements the PackML state model shown below and communicates over MQTT topics as defined by environment variables. The simulator can run with either a basic MQTT topic structure or SparkPlugB.

PackML StateModel

How it works

You can read the full documentation on the GitHub repository.

Kubernetes resources

  • Deployment: united-manufacturing-hub-packmlmqttsimulator

Configuration

You shouldn’t need to configure PackML Simulator manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the packmlmqttsimulator section of the Helm chart values file.

Environment variables

Environment variables
Variable name | Description | Type | Allowed values | Default
AREA | ISA-95 area name of the line | string | Any | DefaultArea
LINE | ISA-95 line name of the line | string | Any | DefaultProductionLine
MQTT_PASSWORD | Password for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | INSECURE_INSECURE_INSECURE
MQTT_URL | Server URL of the MQTT server | string | Any | mqtt://united-manufacturing-hub-mqtt:1883
MQTT_USERNAME | Name for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | PACKMLSIMULATOR
SITE | ISA-95 site name of the line | string | Any | testLocation

3.2.2.9 - Tulip Connector

The technical documentation of the tulip-connector microservice, which exposes internal APIs, such as factoryinsight, to the internet. Specifically designed to communicate with Tulip.

This microservice is still in development and is not considered stable for production use.

The tulip-connector microservice enables communication with the United Manufacturing Hub by exposing internal APIs, like factoryinsight, to the internet. With this REST endpoint, users can access data stored in the UMH and seamlessly integrate Tulip with a Unified Namespace and on-premise Historian. Furthermore, the tulip-connector can be customized to meet specific customer requirements, including integration with an on-premise MES system.

How it works

The tulip-connector acts as a proxy between the internet and the UMH. It exposes an endpoint to forward requests to the UMH and returns the response.

API documentation

Kubernetes resources

  • Deployment: united-manufacturing-hub-tulip-connector-deployment
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-tulip-connector-service at port 80
  • Ingress: united-manufacturing-hub-tulip-connector-ingress

Configuration

You can enable the tulip-connector and set the domain for the ingress by editing the values in the _000_commonConfig.tulipconnector section of the Helm chart values file.
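
A minimal sketch of that section (the domain shown is a placeholder, and the exact key names should be checked against the Helm chart values file):

_000_commonConfig:
  tulipconnector:
    enabled: true
    domain: tulip-connector.example.com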

Environment variables

| Variable name | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID |
| FACTORYINSIGHT_URL | Specifies the URL of the factoryinsight microservice | string | Any | http://united-manufacturing-hub-factoryinsight-service |
| FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight |
| MODE | Specifies the mode that the service will run in. Change only during development | string | dev, prod | prod |

3.2.3 - Grafana Plugins

This section contains the overview of the custom Grafana plugins that can be used to access the United Manufacturing Hub.

3.2.3.1 - Umh Datasource V2

This page contains the technical documentation of the umh-datasource-v2 plugin, which allows for easy data extraction from factoryinsight.

The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch resources from a database and build queries for your dashboard.

How it works

  1. When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.

    selectingDatasource

  2. Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders for the future implementation of the new data model.

    selectingDatasource

  3. Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.

    selectingDatasource

  4. Next, you can specify how to transform the data, depending on the value you selected. For example, all custom tags have aggregation options available. If you query a processValue, you can choose:

    • Time bucket: lets you group data in a time bucket
    • Aggregates: common statistical aggregations (maximum, minimum, sum or count)
    • Handling missing values: lets you choose how missing data should be handled

    selectingDatasource

Configuration

  1. In Grafana, navigate to the Data sources configuration panel.

    selectingConfiguration

  2. Select umh-v2-datasource to configure it.

    selectingConfiguration

  3. Configurations:

    • Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
    • Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
    • API Key: authenticates the API calls to factoryinsight. Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.

    selectingConfiguration
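    If you prefer the command line over UMHLens, the same key can be read with kubectl. This is a sketch assuming the default united-manufacturing-hub namespace; note that secret values are base64-encoded:

    kubectl get secret factoryinsight-secret -n united-manufacturing-hub -o jsonpath='{.data.apiKey}' | base64 --decode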


3.2.3.2 - Umh Datasource

This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.

We are no longer maintaining this plugin. Use our new plugin umh-datasource-v2 instead for data extraction from factoryinsight.

The umh datasource is a Grafana 8.x compatible plugin that allows you to fetch resources from a database and build queries for your dashboard.

How it works

  1. When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.

    selectingDatasource

  2. Select your query parameters Location, Asset and Value to build your query.

    selectingDatasource

Configuration

  1. In Grafana, navigate to the Data sources configuration panel.

    selectingConfiguration

  2. Select umh-datasource to configure it.

    selectingConfiguration

  3. Configurations:

    • Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
    • Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
    • API Key: authenticates the API calls to factoryinsight. Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.

    selectingConfiguration

3.2.3.3 - Factoryinput Panel

This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.

This plugin is still in development and is not considered stable for production use

Requirements

  • A United Manufacturing Hub stack
  • External IP or URL to the grafana-proxy
    • In most cases it is the same IP address as your Grafana dashboard.

Getting started

For development, the steps to build the plugin from source are described here.

  1. Go to united-manufacturing-hub/grafana-plugins/umh-factoryinput-panel
  2. Install the dependencies.
yarn install
  3. Build the plugin in development mode, or run it in watch mode.
yarn dev
  4. Build the plugin in production mode (not recommended due to Issue 32336).
yarn build
  5. Move the resulting dist folder into your Grafana plugins directory.
  • Windows: C:\Program Files\GrafanaLabs\grafana\data\plugins
  • Linux: /var/lib/grafana/plugins
  6. Rename the folder to umh-factoryinput-panel.

  7. Enable development mode to load unsigned plugins.

  8. Restart your Grafana service.

Technical Information

Below you will find a schematic of this flow through our stack.

3.3 - Datamodel

This page describes the data model of the UMH stack - from the message payloads up to database tables.

Raw Data

If you have events that you just want to send to the message broker / Unified Namespace without the need to store them, simply send them to the raw topic. This data will not be processed by the UMH stack, but you can use it to build your own data processing pipeline.

ProcessValue Data

If you have data that does not fit in the other topics (such as your PLC tags or sensor data), you can use the processValue topic. It will be saved in the database in the processValueTable or processValueStringTable and can be queried using factoryinsight or the umh-datasource Grafana plugin.

Production Data

In a production environment, you should first declare products using addProduct. This allows you to create an order using addOrder. Once you have created an order, send a state message to tell the database that the machine is working (or not working) on the order.

When the machine is ordered to produce a product, send a startOrder message. When the machine has finished producing the product, send an endOrder message.

Send count messages if the machine has produced a product, but it does not make sense to assign an ID to each product. This is especially useful for bottling or any other use case with a large amount of products, where not every single product is traced.

You can also add shifts using addShift.

All messages end up in different tables in the database and are accessible from factoryinsight or the umh-datasource Grafana plugin.

Recommendation: Start with addShift and state and continue from there on
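As a minimal sketch of this workflow (customer, location and asset in the topics are placeholders; the payloads use the keys documented in the message pages below), the sequence could look like this:

Topic: ia/factoryinsight/dccaachen/testMachine/addProduct
{ "product_id": "test", "time_per_unit_in_seconds": 0.2 }

Topic: ia/factoryinsight/dccaachen/testMachine/addOrder
{ "product_id": "test", "order_id": "test_order", "target_units": 100 }

Topic: ia/factoryinsight/dccaachen/testMachine/startOrder
{ "order_id": "test_order", "timestamp_ms": 1589788888888 }

Topic: ia/factoryinsight/dccaachen/testMachine/state
{ "timestamp_ms": 1589788888888, "state": 10000 }

Topic: ia/factoryinsight/dccaachen/testMachine/count
{ "timestamp_ms": 1589788888888, "count": 1 }

Topic: ia/factoryinsight/dccaachen/testMachine/endOrder
{ "order_id": "test_order", "timestamp_ms": 1589788893729 }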

Modifying Data

If you have accidentally sent the wrong state or if you want to modify a value, you can use the modifyState message.

Unique Product Tracking

You can use uniqueProduct to tell the database that a new instance of a product has been created. If the produced product is scrapped, you can use scrapUniqueProduct to change its state to scrapped.

3.3.1 - Messages

For each message topic you will find a short description of what the message is used for, which structure it has, and what structure the payload is expected to have.

Introduction

The United Manufacturing Hub provides a specific structure for messages/topics, each with its own unique purpose. By adhering to this structure, the UMH will automatically calculate KPIs for you, while also making it easier to maintain consistency in your topic structure.

3.3.1.1 - activity

activity messages are sent when a new order is added.

This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it must first be converted into a state. In the future, there will be a microservice that does this conversion automatically.
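For example, a Node-RED flow might map activity: true to state 10000 (ProducingAtFullSpeedState), and activity: false combined with a detectedAnomaly of maintenance to state 210000 (PreventiveMaintenanceStop). This mapping is only an illustrative assumption; the exact logic is up to your flow.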

Topic


ia/<customerID>/<location>/<AssetID>/activity


ia.<customerID>.<location>.<AssetID>.activity

Usage

A message is sent here each time the machine runs or stops.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int | unix timestamp of message creation |
| activity | bool | true if asset is currently active, false if asset is currently inactive |

JSON

Examples

The asset was active during the timestamp of the message:

{
  "timestamp_ms":1588879689394,
  "activity": true
}

Schema

Producers

  • Typically Node-RED

Consumers

  • Typically Node-RED

3.3.1.2 - addOrder

AddOrder messages are sent when a new order is added.

Topic


ia/<customerID>/<location>/<AssetID>/addOrder


ia.<customerID>.<location>.<AssetID>.addOrder

Usage

A message is sent here each time a new order is added.

Content

| key | data type | description |
| --- | --- | --- |
| product_id | string | current product name |
| order_id | string | current order name |
| target_units | int64 | amount of units to be produced |

  1. The product needs to be added before adding the order. Otherwise, this message will be discarded.
  2. One order is always specific to one asset and can, by definition, not be used across machines. In that case, one would need to create a separate order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100m batch gets split up into multiple pieces).

JSON

Examples

One order was started for 100 units of product “test”:

{
  "product_id":"test",
  "order_id":"test_order",
  "target_units":100
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/addOrder.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "product_id",
        "order_id",
        "target_units"
    ],
    "properties": {
        "product_id": {
            "type": "string",
            "default": "",
            "title": "The product id to be produced",
            "examples": [
                "test",
                "Beierlinger 30x15"
            ]
        },
        "order_id": {
            "type": "string",
            "default": "",
            "title": "The order id of the order",
            "examples": [
                "test_order",
                "HA16/4889"
            ]
        },
        "target_units": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The amount of units to be produced",
            "examples": [
                1,
                100
            ]
        }
    },
    "examples": [{
      "product_id": "Beierlinger 30x15",
      "order_id": "HA16/4889",
      "target_units": 1
    },{
      "product_id":"test",
      "order_id":"test_order",
      "target_units":100
    }]
}

Producers

  • Typically Node-RED

Consumers

3.3.1.3 - addParentToChild

AddParentToChild messages are sent when child products are added to a parent product.

Topic


ia/<customerID>/<location>/<AssetID>/addParentToChild


ia.<customerID>.<location>.<AssetID>.addParentToChild

Usage

This message can be emitted to add a child product to a parent product. It can be sent multiple times, if a parent product is split up into multiple children or if multiple parents are combined into one child. One example of this is when multiple parts are assembled into a single product.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp you want to go back from |
| childAID | string | the AID of the child product |
| parentAID | string | the AID of the parent product |

JSON

Examples

A parent is added to a child:

{
  "timestamp_ms":1589788888888,
  "childAID":"23948723489",
  "parentAID":"4329875"
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "childAID",
        "parentAID"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
              1589788888888
            ]
        },
        "childAID": {
            "type": "string",
            "default": "",
            "title": "The AID of the child product",
            "examples": [
              "23948723489"
            ]
        },
        "parentAID": {
            "type": "string",
            "default": "",
            "title": "The AID of the parent product",
            "examples": [
              "4329875"
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms":1589788888888,
            "childAID":"23948723489",
            "parentAID":"4329875"
        },
        {
            "timestamp_ms":1589788888888,
            "childAID":"TestChild",
            "parentAID":"TestParent"
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

3.3.1.4 - addProduct

AddProduct messages are sent when a new product is produced.

Topic


ia/<customerID>/<location>/<AssetID>/addProduct


ia.<customerID>.<location>.<AssetID>.addProduct

Usage

A message is sent each time a new product is produced.

Content

| key | data type | description |
| --- | --- | --- |
| product_id | string | current product name |
| time_per_unit_in_seconds | float64 | the time it takes to produce one unit of the product |

See also notes regarding adding products and orders in /addOrder

JSON

Examples

A new product “Beilinger 30x15” with a cycle time of 200ms is added to the asset.

{
  "product_id": "Beilinger 30x15",
  "time_per_unit_in_seconds": 0.2
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "product_id",
        "time_per_unit_in_seconds"
    ],
    "properties": {
        "product_id": {
          "type": "string",
          "default": "",
          "title": "The product id to be produced"
        },
        "time_per_unit_in_seconds": {
          "type": "number",
          "default": 0.0,
          "minimum": 0,
          "title": "The time it takes to produce one unit of the product"
        }
    },
    "examples": [
        {
            "product_id": "Beierlinger 30x15",
            "time_per_unit_in_seconds": "0.2"
        },
        {
            "product_id": "Test product",
            "time_per_unit_in_seconds": "10"
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

3.3.1.5 - addShift

AddShift messages are sent to add a shift with start and end timestamp.

Topic


ia/<customerID>/<location>/<AssetID>/addShift


ia.<customerID>.<location>.<AssetID>.addShift

Usage

This message is sent to indicate the start and end of a shift.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of the shift start |
| timestamp_ms_end | int64 | optional unix timestamp of the shift end |

JSON

Examples

A shift with start and end:

{
  "timestamp_ms":1589788888888,
  "timestamp_ms_end":1589788888888
}

A shift without an end:

{
  "timestamp_ms":1589788888888
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "description": "The unix timestamp, of shift start"
        },
        "timestamp_ms_end": {
            "type": "integer",
            "description": "The *optional* unix timestamp, of shift end"
        }
    },
    "examples": [
        {
            "timestamp_ms":1589788888888,
            "timestamp_ms_end":1589788888888
        },
        {
            "timestamp_ms":1589788888888
        }
    ]
}

Producers

Consumers

3.3.1.6 - count

Count messages are sent every time an asset has counted a new item.

Topic


ia/<customerID>/<location>/<AssetID>/count


ia.<customerID>.<location>.<AssetID>.count

Usage

A count message is sent every time an asset has counted a new item.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of message creation |
| count | int64 | amount of items counted |
| scrap | int64 | optional amount of defective items. If unset, 0 is assumed |

JSON

Examples

One item was counted and there was no scrap:

{
  "timestamp_ms":1589788888888,
  "count":1,
  "scrap":0
}

Ten items were counted, five of which were scrap:

{
  "timestamp_ms":1589788888888,
  "count":10,
  "scrap":5
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/count.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "count"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp of message creation",
            "examples": [
                1589788888888
            ]
        },
        "count": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The amount of items counted",
            "examples": [
                1
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The optional amount of defective items",
            "examples": [
                0
            ]
        }
    },
    "examples": [{
      "timestamp_ms": 1589788888888,
      "count": 1,
      "scrap": 0
    },{
      "timestamp_ms": 1589788888888,
      "count": 1
    }]
}

Producers

  • Typically Node-RED

Consumers

3.3.1.7 - deleteShift

DeleteShift messages are sent to delete a shift that starts at the designated timestamp.

Topic


ia/<customerID>/<location>/<AssetID>/deleteShift


ia.<customerID>.<location>.<AssetID>.deleteShift

Usage

deleteShift is generated to delete a shift that started at the designated timestamp.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int32 | unix timestamp of the shift start |

JSON

Example

The shift that started at the designated timestamp is deleted from the database.

{
    "begin_time_stamp": 1588879689394
}

Producers

  • Typically Node-RED

Consumers

3.3.1.8 - detectedAnomaly

detectedAnomaly messages are sent when an asset has stopped and the reason is identified.

This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it must first be converted into a state. In the future, there will be a microservice that does this conversion automatically.

Topic


ia/<customerID>/<location>/<AssetID>/detectedAnomaly


ia.<customerID>.<location>.<AssetID>.detectedAnomaly

Usage

A message is sent here each time a stop reason has been identified automatically or by input from the machine operator.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int | unix timestamp of message creation |
| detectedAnomaly | string | reason for the production stop of the asset |

JSON

Examples

The anomaly of the asset has been identified as maintenance:

{
  "timestamp_ms":1588879689394,
  "detectedAnomaly":"maintenance"
}

Producers

  • Typically Node-RED

Consumers

  • Typically Node-RED

3.3.1.9 - endOrder

EndOrder messages are sent whenever an order is finished.

Topic


ia/<customerID>/<location>/<AssetID>/endOrder


ia.<customerID>.<location>.<AssetID>.endOrder

Usage

A message is sent each time an order is finished.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of message creation |
| order_id | string | current order name |

See also notes regarding adding products and orders in /addOrder

JSON

Examples

The order “test_order” was finished at the shown timestamp.

{
  "order_id":"test_order",
  "timestamp_ms":1589788888888
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/endOrder.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "order_id",
        "timestamp_ms"
    ],
    "properties": {
        "timestamp_ms": {
          "type": "integer",
          "description": "The unix timestamp, of shift start"
        },
        "order_id": {
            "type": "string",
            "default": "",
            "title": "The order id of the order",
            "examples": [
                "test_order",
                "HA16/4889"
            ]
        }
    },
    "examples": [{
      "order_id": "HA16/4889",
      "timestamp_ms":1589788888888
    },{
      "product_id":"test",
      "timestamp_ms":1589788888888
    }]
}

Producers

  • Typically Node-RED

Consumers

3.3.1.10 - modifyProducedPieces

ModifyProducedPieces messages are sent whenever the count of produced and scrapped items needs to be modified.

Topic


ia/<customerID>/<location>/<AssetID>/modifyProducedPieces


ia.<customerID>.<location>.<AssetID>.modifyProducedPieces

Usage

modifyProducedPieces is generated to change the count of produced items and scrapped items at the named timestamp.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of the time point whose count is to be modified |
| count | int32 | number of produced items |
| scrap | int32 | number of scrapped items |

JSON

Example

The count and scrap at the given timestamp are both overwritten to 10.

{
    "timestamp_ms": 1588879689394,
    "count": 10,
    "scrap": 10
}

Producers

  • Typically Node-RED

Consumers

3.3.1.11 - modifyState

ModifyState messages are generated when a state of an asset during a certain timeframe needs to be modified.

Topic


ia/<customerID>/<location>/<AssetID>/modifyState


ia.<customerID>.<location>.<AssetID>.modifyState

Usage

modifyState is generated to modify the state from the starting timestamp to the end timestamp. You can find a list of all supported states here.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int32 | unix timestamp of the starting point of the timeframe to be modified |
| timestamp_ms_end | int32 | unix timestamp of the end point of the timeframe to be modified |
| new_state | int32 | new state code |

JSON

Example

The state of the timeframe between the two timestamps is modified to 150000: OperatorBreakState

{
    "timestamp_ms": 1588879689394,
    "timestamp_ms_end": 1588891381023,
    "new_state": 150000
}

Producers

  • Typically Node-RED

Consumers

3.3.1.12 - processValue

ProcessValue messages are sent whenever a custom process value with unique name has been prepared. The value is numerical.

Topic


ia/<customerID>/<location>/<AssetID>/processValue 
or: ia/<customerID>/<location>/<AssetID>/processValue/<tagName>


ia.<customerID>.<location>.<AssetID>.processValue
or: ia.<customerID>.<location>.<AssetID>.processValue.<tagName>

If you have a lot of process values, we’d recommend not using /processValue as the topic, but appending the tag name as well, e.g., /processValue/energyConsumption. This structures the data better for usage in MQTT Explorer or for processing only certain process values.

For automatic data storage in kafka-to-postgresql both will work fine as long as the payload is correct.

Please be aware that the values may only be int or float; other characters are not valid, so make sure no quotation marks or anything else sneak in. Also be cautious with the JavaScript toFixed() function, as it converts a float into a string.
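For example, a payload like the following sketch would not be stored as a number, because the value has been turned into a string:

{
    "timestamp_ms": 1588879689394,
    "energyConsumption": "123456"
}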

Usage

A message is sent each time a process value has been prepared. The key has a unique name.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of message creation |
| <valuename> | int64 or float64 | Represents a process value, e.g. temperature |

Pre 0.10.0: As <valuename> is either of type int64 or float64, you cannot use booleans. Convert them to integers as needed, e.g., true → 1, false → 0.

Post 0.10.0: <valuename> will be converted, even if it is a boolean value. Check integer literals and floating-point literals for other valid values.

JSON

Example

At the shown timestamp the custom process value “energyConsumption” had a readout of 123456.

{
    "timestamp_ms": 1588879689394, 
    "energyConsumption": 123456
}

Producers

  • Typically Node-RED

Consumers

3.3.1.13 - processValueString

ProcessValueString messages are sent whenever a custom process value is prepared. The value is a string.

This message type is not functional as of 0.9.5!

Topic


ia/<customerID>/<location>/<AssetID>/processValueString


ia.<customerID>.<location>.<AssetID>.processValueString

Usage

A message is sent each time a process value has been prepared. The key has a unique name. This message is used when the datatype of the process value is a string instead of a number.

Content

| key | data type | description |
| --- | --- | --- |
| timestamp_ms | int64 | unix timestamp of message creation |
| <valuename> | string | Represents a process value, e.g. temperature |

JSON

Example

At the shown timestamp the custom process value “customer” had a readout of “miller”.

{
    "timestamp_ms": 1588879689394, 
    "customer": "miller"
}

Producers

  • Typically Node-RED

Consumers

3.3.1.14 - productTag

ProductTag messages are sent to contextualize processValue messages.

Topic


ia/<customerID>/<location>/<AssetID>/productTag


ia.<customerID>.<location>.<AssetID>.productTag

Usage

productTag is usually generated by contextualizing a processValue.

Content

| key | data type | description |
| --- | --- | --- |
| AID | string | AID of the product |
| name | string | Name of the process value |
| value | float64 | value of the processValue |
| timestamp_ms | int64 | unix timestamp of message creation |

JSON

Example

At the shown timestamp the product with the shown AID had 5 blemishes recorded.

{
    "AID": "43298756", 
    "name": "blemishes",
    "value": 5, 
    "timestamp_ms": 1588879689394
}

Producers

  • Typically Node-RED

Consumers

3.3.1.15 - productTagString

ProductTagString messages are sent to contextualize processValueString messages.

Topic


ia/<customerID>/<location>/<AssetID>/productTagString


ia.<customerID>.<location>.<AssetID>.productTagString

Usage

ProductTagString is usually generated by contextualizing a processValueString.

Content

| key | data type | description |
| --- | --- | --- |
| AID | string | AID of the product |
| name | string | Key of the processValue |
| value | string | value of the processValue |
| timestamp_ms | int64 | unix timestamp of message creation |

JSON

Example

At the shown timestamp the product with the shown AID had the processValueString shirt_size with a value of “XL”.

{
    "AID": "43298756", 
    "name": "shirt_size",
    "value": "XL", 
    "timestamp_ms": 1588879689394
}

Producers

Consumers

3.3.1.16 - recommendation

Recommendation messages are sent whenever rapid actions would quickly improve efficiency on the shop floor.

Topic


ia/<customerID>/<location>/<AssetID>/recommendation


ia.<customerID>.<location>.<AssetID>.recommendation

Usage

Recommendations are action recommendations, which require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.

Content

| key | data type | description |
| --- | --- | --- |
| uid | string | unique ID of the recommendation |
| timestamp_ms | int64 | unix timestamp of message creation |
| customer | string | the customer ID in the data structure |
| location | string | the location in the data structure |
| asset | string | the asset ID in the data structure |
| recommendationType | int32 | type of the recommendation |
| enabled | bool | whether the recommendation is enabled |
| recommendationValues | map | Map of values based on which this recommendation is created |
| diagnoseTextDE | string | Diagnosis of the recommendation in German |
| diagnoseTextEN | string | Diagnosis of the recommendation in English |
| recommendationTextDE | string | Recommendation in German |
| recommendationTextEN | string | Recommendation in English |

JSON

Example

The demonstrator at the shown location has not been running for a while, so a recommendation is sent to either start the machine or specify a reason why it is not running.

{
    "UID": "43298756",
    "timestamp_ms": 15888796894,
    "customer": "united-manufacturing-hub",
    "location": "dccaachen",
    "asset": "DCCAachen-Demonstrator",
    "recommendationType": "1",
    "enabled": true,
    "recommendationValues": { "Treshold": 30, "StoppedForTime": 612685 },
    "diagnoseTextDE": "Maschine DCCAachen-Demonstrator steht seit 612685 Sekunden still (Status: 8, Schwellwert: 30)",
    "diagnoseTextEN": "Machine DCCAachen-Demonstrator is not running since 612685 seconds (status: 8, threshold: 30)",
    "recommendationTextDE": "Maschine DCCAachen-Demonstrator einschalten oder Stoppgrund auswählen.",
    "recommendationTextEN": "Start machine DCCAachen-Demonstrator or specify stop reason."
}

Producers

  • Typically Node-RED

Consumers

3.3.1.17 - scrapCount

ScrapCount messages are sent whenever a product is to be marked as scrap.

Topic


ia/<customerID>/<location>/<AssetID>/scrapCount


ia.<customerID>.<location>.<AssetID>.scrapCount

Usage

Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. Starting from the count directly before timestamp_ms, the existing counts are iterated backwards in time, step by step, and set to scrap until a total of scrap products have been scrapped.

Content

  • timestamp_ms is the unix timestamp you want to go back from
  • scrap is the number of items to be considered as scrap
  1. You can specify a maximum of 24h to be scrapped, to avoid accidents
  2. (NOT IMPLEMENTED YET) If a count does not exactly match the remaining scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, these 2 would be ignored. See also #125
  3. (NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap.
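As a worked example (with hypothetical counts): assume three count messages were recorded before timestamp_ms, with counts of 5, 3, and 4 (newest first), and a scrapCount message with scrap: 10 arrives. The count of 5 is marked as scrap (5 of 10), then the count of 3 (8 of 10). The remaining 2 would have to come from the count of 4, which is not implemented yet (see note 2 above), so that count is currently left untouched.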

JSON

Examples

Ten items were scrapped:

{
  "timestamp_ms":1589788888888,
  "scrap":10
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "scrap"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
              1589788888888
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "Number of items to be considered as scrap",
            "examples": [
                10
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms": 1589788888888,
            "scrap": 10
        },
        {
            "timestamp_ms": 1589788888888,
            "scrap": 5
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

3.3.1.18 - scrapUniqueProduct

ScrapUniqueProduct messages are sent whenever a unique product should be scrapped.

Topic


ia/<customerID>/<location>/<AssetID>/scrapUniqueProduct


ia.<customerID>.<location>.<AssetID>.scrapUniqueProduct

Usage

A message is sent here every time a unique product is scrapped.

Content

| key | data type | description |
| --- | --- | --- |
| UID | string | unique ID of the current product |

JSON

Example

The product with the unique ID 22 is scrapped.

{
    "UID": "22"
}

Producers

  • Typically Node-RED

Consumers

3.3.1.19 - startOrder

StartOrder messages are sent whenever a new order is started.

Topic


ia/<customerID>/<location>/<AssetID>/startOrder


ia.<customerID>.<location>.<AssetID>.startOrder

Usage

A message is sent here every time a new order is started.

Content

| key | data type | description |
| --- | --- | --- |
| order_id | string | name of the order |
| timestamp_ms | int64 | unix timestamp of message creation |

  1. See also the notes regarding adding products and orders in /addOrder.
  2. When startOrder is executed multiple times for an order, the last used timestamp is used.

JSON

Example

The order “test_order” is started at the shown timestamp.

{
  "order_id":"test_order",
  "timestamp_ms":1589788888888
}

Producers

  • Typically Node-RED

Consumers

3.3.1.20 - state

State messages are sent every time an asset changes status.

Topic


ia/<customerID>/<location>/<AssetID>/state


ia.<customerID>.<location>.<AssetID>.state

Usage

A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here.

Content

| key | data type | description |
| --- | --- | --- |
| state | uint32 | value of the state according to the link above |
| timestamp_ms | uint64 | unix timestamp of message creation |

JSON

Example

The asset has a state of 10000, which means it is actively producing.

{
  "timestamp_ms":1589788888888,
  "state":10000
}

Producers

  • Typically Node-RED

Consumers

3.3.1.21 - uniqueProduct

UniqueProduct messages are sent whenever a unique product was produced or modified.

Topic


ia/<customerID>/<location>/<AssetID>/uniqueProduct


ia.<customerID>.<location>.<AssetID>.uniqueProduct

Usage

A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.

There are two cases of when to send a message under the uniqueProduct topic:

  • The exact product doesn’t already have a UID (this is the case if it has not been produced at an asset incorporated in the digital shadow). Specify a placeholder asset = “storage” in the MQTT message for the uniqueProduct topic.
  • The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the “child” of the process. Products it was made out of are called the “parents”.

Content

| key | data type | description |
| --- | --- | --- |
| begin_timestamp_ms | int64 | unix timestamp of start time |
| end_timestamp_ms | int64 | unix timestamp of completion time |
| product_id | string | product ID of the currently produced product |
| isScrap | bool | optional information whether the current product is of poor quality and will be sorted out. Considered false if not specified |
| uniqueProductAlternativeID | string | alternative ID of the product |

JSON

Example

The processing of product “Beilinger 30x15” with the AID 216381 started and ended at the designated timestamps. It is of low quality and due to be scrapped.

{
  "begin_timestamp_ms":1589788888888,
  "end_timestamp_ms":1589788893729,
  "product_id":"Beilinger 30x15",
  "isScrap":true,
  "uniqueProductAlternativeID":"216381"
}

Producers

  • Typically Node-RED

Consumers

3.3.2 - Database

The database stores the messages in different tables.

Introduction

We are using the database TimescaleDB, which is based on PostgreSQL and supports standard relational SQL workloads while also natively handling time-series data. This allows the usage of regular SQL queries, while also being able to process and store time-series data efficiently. PostgreSQL has proven itself reliable over the last 25 years, so we are happy to build on it.

If you want to learn more about database paradigms, please refer to the knowledge article about that topic. It also includes a concise video summarizing what you need to know about different paradigms.

Our database model is designed to represent a physical manufacturing process. It keeps track of the following data:

  • The state of the machine
  • The products that are produced
  • The orders for the products
  • The workers’ shifts
  • Arbitrary process values (sensor data)
  • The producible products
  • Recommendations for the production

Please note that our database does not use a retention policy. This means that your database can grow quite fast if you save a lot of process values. Take a look at our guide on enabling data compression and retention in TimescaleDB to customize the database to your needs.
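As a sketch of what such a policy could look like (the table name and intervals below are examples; see the linked guide for the recommended settings), TimescaleDB lets you enable compression and a retention policy with plain SQL:

-- compress chunks of processValueTable older than 7 days
ALTER TABLE processValueTable SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id');
SELECT add_compression_policy('processValueTable', INTERVAL '7 days');

-- drop chunks older than 90 days
SELECT add_retention_policy('processValueTable', INTERVAL '90 days');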

A good method to check your database size is to use the following command inside the postgres shell:

SELECT pg_size_pretty(pg_database_size('factoryinsight'));
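To see which table takes up the most space, you can also check a single table, for example (any of the table names from the following subpages works here):

SELECT pg_size_pretty(pg_total_relation_size('processvaluetable'));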

3.3.2.1 - assetTable

assetTable contains all assets and their location.

Usage

Primary table for our data structure, it contains all the assets and their location.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| id | int | Auto incrementing id of the asset | 0 |
| assetID | text | Asset name | Printer-03 |
| location | text | Physical location of the asset | DCCAachen |
| customer | text | Customer name, in most cases “factoryinsight” | factoryinsight |

Relations

assetTable

DDL

 CREATE TABLE IF NOT EXISTS assetTable
 (
     id         SERIAL  PRIMARY KEY,
     assetID    TEXT    NOT NULL,
     location   TEXT    NOT NULL,
     customer   TEXT    NOT NULL,
     unique (assetID, location, customer)
 );

3.3.2.2 - configurationTable

configurationTable stores the configuration of the UMH system.

Usage

This table stores the configuration of the system

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| customer | text | Customer name | factoryinsight |
| MicrostopDurationInSeconds | integer | Stop counts as microstop if smaller than this value | 120 |
| IgnoreMicrostopUnderThisDurationInSeconds | integer | Ignore stops under this value | -1 |
| MinimumRunningTimeInSeconds | integer | Minimum runtime of the asset before tracking micro-stops | 0 |
| ThresholdForNoShiftsConsideredBreakInSeconds | integer | If no shift is shorter than this value, it is a break | 2100 |
| LowSpeedThresholdInPcsPerHour | integer | Threshold below which the machine goes into low speed state | -1 |
| AutomaticallyIdentifyChangeovers | boolean | Automatically identify changeovers in production | true |
| LanguageCode | integer | 0 is German, 1 is English | 1 |
| AvailabilityLossStates | integer[] | States to count as availability loss | {40000, 180000, 190000, 200000, 210000, 220000} |
| PerformanceLossStates | integer[] | States to count as performance loss | {20000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000} |

Relations

configurationTable

DDL

CREATE TABLE IF NOT EXISTS configurationTable
(
    customer TEXT PRIMARY KEY,
    MicrostopDurationInSeconds INTEGER DEFAULT 60*2,
    IgnoreMicrostopUnderThisDurationInSeconds INTEGER DEFAULT -1, --do not apply
    MinimumRunningTimeInSeconds INTEGER DEFAULT 0, --do not apply
    ThresholdForNoShiftsConsideredBreakInSeconds INTEGER DEFAULT 60*35,
    LowSpeedThresholdInPcsPerHour INTEGER DEFAULT -1, --do not apply
    AutomaticallyIdentifyChangeovers BOOLEAN DEFAULT true,
    LanguageCode INTEGER DEFAULT 1, -- english
    AvailabilityLossStates INTEGER[] DEFAULT '{40000, 180000, 190000, 200000, 210000, 220000}',
    PerformanceLossStates INTEGER[] DEFAULT '{20000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000}'
);

3.3.2.3 - countTable

countTable contains all reported counts of all assets.

Usage

This table contains all reported counts of the assets.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| timestamp | timestamptz | Entry timestamp | 0 |
| asset_id | serial | Asset id (see assetTable) | 1 |
| count | integer | A count greater than 0 | 1 |

Relations

countTable

DDL

CREATE TABLE IF NOT EXISTS countTable
(
    timestamp                TIMESTAMPTZ                         NOT NULL,
    asset_id            SERIAL REFERENCES assetTable (id),
    count INTEGER CHECK (count > 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('countTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON countTable (asset_id, timestamp DESC);
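Since countTable is a hypertable, you can aggregate it with TimescaleDB’s time_bucket function. For example, the following query (a sketch using the example asset_id from the table above) returns the produced units per hour:

SELECT time_bucket(INTERVAL '1 hour', timestamp) AS hour, SUM(count) AS units
FROM countTable
WHERE asset_id = 1
GROUP BY hour
ORDER BY hour;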

3.3.2.4 - orderTable

orderTable contains orders for production.

Usage

This table stores orders for product production

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| order_id | serial | Auto incrementing id | 0 |
| order_name | text | Name of the order | Scarjit-500-DaVinci-1-24062022 |
| product_id | serial | Product id to produce | 1 |
| begin_timestamp | timestamptz | Begin timestamp of the order | 0 |
| end_timestamp | timestamptz | End timestamp of the order | 10000 |
| target_units | integer | How many units to produce | 500 |
| asset_id | serial | Which asset to produce on (see assetTable) | 1 |

Relations

orderTable

DDL

CREATE TABLE IF NOT EXISTS orderTable
(
    order_id        SERIAL          PRIMARY KEY,
    order_name      TEXT            NOT NULL,
    product_id      SERIAL          REFERENCES productTable (product_id),
    begin_timestamp TIMESTAMPTZ,
    end_timestamp   TIMESTAMPTZ,
    target_units    INTEGER,
    asset_id        SERIAL          REFERENCES assetTable (id),
    unique (asset_id, order_name),
    CHECK (begin_timestamp < end_timestamp),
    CHECK (target_units > 0),
    EXCLUDE USING gist (asset_id WITH =, tstzrange(begin_timestamp, end_timestamp) WITH &&) WHERE (begin_timestamp IS NOT NULL AND end_timestamp IS NOT NULL)
);

3.3.2.5 - processValueStringTable

processValueStringTable contains process values.

Usage

This table stores process values, for example the toner level of a printer or the flow rate of a pump. This table has a closely related table for storing number values, processValueTable.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| timestamp | timestamptz | Entry timestamp | 0 |
| asset_id | serial | Asset id (see assetTable) | 1 |
| valueName | text | Name of the process value | toner-level |
| value | string | Value of the process value | 100 |

Relations

processValueTable

DDL

CREATE TABLE IF NOT EXISTS processValueStringTable
(
    timestamp               TIMESTAMPTZ                         NOT NULL,
    asset_id                SERIAL                              REFERENCES assetTable (id),
    valueName               TEXT                                NOT NULL,
    value                   TEXT                                NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueStringTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (asset_id, timestamp DESC);

-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (valuename);

3.3.2.6 - processValueTable

processValueTable contains process values.

Usage

This table stores process values, for example the toner level of a printer or the flow rate of a pump. This table has a closely related table for storing string values, processValueStringTable.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| timestamp | timestamptz | Entry timestamp | 0 |
| asset_id | serial | Asset id (see assetTable) | 1 |
| valueName | text | Name of the process value | toner-level |
| value | double | Value of the process value | 100 |

Relations

processValueTable

DDL

CREATE TABLE IF NOT EXISTS processValueTable
(
    timestamp               TIMESTAMPTZ                         NOT NULL,
    asset_id                SERIAL                              REFERENCES assetTable (id),
    valueName               TEXT                                NOT NULL,
    value                   DOUBLE PRECISION                    NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON processValueTable (asset_id, timestamp DESC);

-- creating an index to increase performance
CREATE INDEX ON processValueTable (valuename);
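For example, the following query (a sketch using the example values from the table above) returns the ten most recent readings of a given process value for one asset. This is exactly the access pattern the (asset_id, timestamp DESC) index created above is designed for:

SELECT timestamp, value
FROM processValueTable
WHERE asset_id = 1 AND valueName = 'toner-level'
ORDER BY timestamp DESC
LIMIT 10;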

3.3.2.7 - productTable

productTable contains products in production.

Usage

This table stores the products to be produced at the assets.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| product_id | serial | Auto incrementing id | 0 |
| product_name | text | Name of the product | Painting-DaVinci-1 |
| asset_id | serial | Asset producing this product (see assetTable) | 1 |
| time_per_unit_in_seconds | real | Time in seconds to produce this product | 600 |

Relations

productTable

DDL

CREATE TABLE IF NOT EXISTS productTable
(
    product_id                  SERIAL PRIMARY KEY,
    product_name                TEXT NOT NULL,
    asset_id                    SERIAL REFERENCES assetTable (id),
    time_per_unit_in_seconds    REAL NOT NULL,
    UNIQUE(product_name, asset_id),
    CHECK (time_per_unit_in_seconds > 0)
);

3.3.2.8 - recommendationTable

recommendationTable contains the recommendations given for the shop floor assets.

Usage

This table stores recommendations

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| uid | text | Id of the recommendation | refill_toner |
| timestamp | timestamptz | Timestamp of recommendation insertion | 1 |
| recommendationType | integer | Used to subscribe people to specific types only | 3 |
| enabled | bool | Recommendation can be outputted | true |
| recommendationValues | text | Values to change to resolve recommendation | { “toner-level”: 100 } |
| diagnoseTextDE | text | Diagnose text in German | “Der Toner ist leer” |
| diagnoseTextEN | text | Diagnose text in English | “The toner is empty” |
| recommendationTextDE | text | Recommendation text in German | “Bitte den Toner auffüllen” |
| recommendationTextEN | text | Recommendation text in English | “Please refill the toner” |

Relations

recommendationTable

DDL

CREATE TABLE IF NOT EXISTS recommendationTable
(
    uid                     TEXT                                PRIMARY KEY,
    timestamp               TIMESTAMPTZ                         NOT NULL,
    recommendationType      INTEGER                             NOT NULL,
    enabled                 BOOLEAN                             NOT NULL,
    recommendationValues    TEXT,
    diagnoseTextDE          TEXT,
    diagnoseTextEN          TEXT,
    recommendationTextDE    TEXT,
    recommendationTextEN    TEXT
);

3.3.2.9 - shiftTable

shiftTable contains shifts with their asset, start, and end timestamps.

Usage

This table stores shifts

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| id | serial | Auto incrementing id | 0 |
| type | integer | Shift type (1 for shift, 0 for no shift) | 1 |
| begin_timestamp | timestamptz | Begin of the shift | 3 |
| end_timestamp | timestamptz | End of the shift | 10 |
| asset_id | serial | Asset ID the shift is performed on (see assetTable) | 1 |

Relations

shiftTable

DDL

-- Using btree_gist to avoid overlapping shifts
-- Source: https://gist.github.com/fphilipe/0a2a3d50a9f3834683bf
CREATE EXTENSION btree_gist;
CREATE TABLE IF NOT EXISTS shiftTable
(
    id              SERIAL      PRIMARY KEY,
    type            INTEGER,
    begin_timestamp TIMESTAMPTZ NOT NULL,
    end_timestamp   TIMESTAMPTZ,
    asset_id        SERIAL      REFERENCES assetTable (id),
    unique (begin_timestamp, asset_id),
    CHECK (begin_timestamp < end_timestamp),
    EXCLUDE USING gist (asset_id WITH =, tstzrange(begin_timestamp, end_timestamp) WITH &&)
);

3.3.2.10 - stateTable

stateTable contains the states of all assets.

Usage

This table contains all state changes of the assets.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| timestamp | timestamptz | Entry timestamp | 0 |
| asset_id | serial | Asset ID (see assetTable) | 1 |
| state | integer | State ID (see states) | 40000 |

Relations

stateTable

DDL

CREATE TABLE IF NOT EXISTS stateTable
(
    timestamp   TIMESTAMPTZ NOT NULL,
    asset_id    SERIAL      REFERENCES assetTable (id),
    state       INTEGER     CHECK (state >= 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('stateTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON stateTable (asset_id, timestamp DESC);

3.3.2.11 - uniqueProductTable

uniqueProductTable contains unique products and their IDs.

Usage

This table stores unique products.

Structure

| key | data type | description | example |
| --- | --- | --- | --- |
| uid | text | ID of a unique product | 0 |
| asset_id | serial | Asset id (see assetTable) | 1 |
| begin_timestamp_ms | timestamptz | Time when the product entered the asset | 0 |
| end_timestamp_ms | timestamptz | Time when the product left the asset | 100 |
| product_id | text | ID of the product (see productTable) | 1 |
| is_scrap | boolean | True if the product is scrap | true |
| quality_class | text | Quality class of the product | A |
| station_id | text | ID of the station where the product was processed | Soldering Iron-1 |

Relations

uniqueProductTable

DDL

CREATE TABLE IF NOT EXISTS uniqueProductTable
(
    uid                 TEXT        NOT NULL,
    asset_id            SERIAL      REFERENCES assetTable (id),
    begin_timestamp_ms  TIMESTAMPTZ NOT NULL,
    end_timestamp_ms    TIMESTAMPTZ NOT NULL,
    product_id          TEXT        NOT NULL,
    is_scrap            BOOLEAN     NOT NULL,
    quality_class       TEXT        NOT NULL,
    station_id          TEXT        NOT NULL,
    UNIQUE(uid, asset_id, station_id),
    CHECK (begin_timestamp_ms < end_timestamp_ms)
);

-- creating an index to increase performance
CREATE INDEX ON uniqueProductTable (asset_id, uid, station_id);

3.3.3 - States

States are the core of the database model. They represent the state of the machine at a given point in time.

States Documentation Index

Introduction

This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.

State Categories

Glossary

  • OEE: Overall Equipment Effectiveness
  • KPI: Key Performance Indicator

Conclusion

This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.

3.3.3.1 - Active (10000-29999)

These states represent that the asset is actively producing

10000: ProducingAtFullSpeedState

This asset is running at full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

Asset is producing, but not at full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco: Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

Currently not included:

  • WS_Prog_Step: all

3.3.3.2 - Unknown (30000-59999)

These states represent that the asset is in an unspecified state

30000: UnknownState

Data for that particular asset is not available (e.g. connection to the PLC is disrupted)

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but the reason is unknown at the time.

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.

3.3.3.3 - Material (60000-99999)

These states represent that the asset has issues regarding materials.

60000: InletJamState

This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack in the inlet refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The lack in the infeed is an external fault, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palleting machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.

Examples for the CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information of which material is lacking)
  • PackML/Tobacco: Suspended

3.3.3.4 - Process(100000-139999)

These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently emptied, e.g. to prevent mold for food products over the long breaks, e.g. the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

This machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

3.3.3.5 - Operator (140000-159999)

These states represent that the asset is stopped because of operator related issues.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is taking a break.

This is different from a planned shift as it could contribute to performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

3.3.3.6 - Planning (160000-179999)

These states represent that the asset is stopped because it is planned to be stopped (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

3.3.3.7 - Technical (180000-229999)

These states represent that the asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defective, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE

4 - Production Guide

This section contains information about how to use the stack in a production environment.

4.1 - Installation

This section contains guides on how to install the United Manufacturing Hub.

Learn how to install the United Manufacturing Hub using completely Free and Open Source Software.

4.1.1 - Flatcar Installation (Bare Metal)

This page describes how to deploy the United Manufacturing Hub on Flatcar Linux on bare metal.

Here is a step-by-step guide on how to deploy the UMH stack on Flatcar Linux, a Linux distribution designed for container workloads, with high security and low maintenance.

This is a good option if you want to deploy the UMH stack on edge devices or IPCs.

Before you begin

Your system must meet the following requirements before you can install the United Manufacturing Hub:

  • CPU cores: 4
  • Memory size: 8 GB
  • Hard disk size: 32 GB

You need the latest version of our iPXE boot image.

The image needs to be written to a USB stick. If you want to know how to do this, follow our guide on how to flash an operating system onto a USB-stick.

You also need a computer with an SSH client (most modern operating systems already have it) and either UMHLens or OpenLens installed.

Additionally, this guide assumes a configuration similar to the following:

%%{ init: { 'flowchart': { 'curve': 'bumpY' } } }%%
flowchart LR
    A(Internet) -. WAN .- B[Router]
    subgraph Internal network
        B -- LAN --- C[Edge device]
        B -- LAN --- D[Your computer]
    end

For optimal functionality, we recommend assigning a static IP address to your edge device, either through a static lease in the DHCP server or by setting the IP address during installation. Changing the IP address of the edge device after installation may result in certificate issues, so we strongly advise against doing so.

Install Flatcar Linux on the edge device

  1. Connect the USB stick to the edge device and boot it. Each device has a different way of booting from USB, so you need to consult the documentation of your device.
  2. Accept the License.
  3. Select the correct network settings. If you are unsure, select DHCP, but keep in mind that a static IP address is strongly recommended.
  4. Select the correct drive to install Flatcar Linux on. If you are unsure, check the troubleshooting section.
  5. Check that the installation settings are correct and press Confirm to start the installation.

Now the installation will start. Soon after, you should see a green command prompt for the core user. Now remove the USB stick from the device. At this point the system is still installing. After a few minutes, depending on the speed of your network, the installation will finish and the system will reboot. You should then see a grey login prompt that says flatcar-1-umh login:, as well as the IP address of the device.

Please note that the installation may take some time. This largely depends on the available resources including network speed and system performance.

Connect to the edge device

Now you can leave the edge device and connect to it from your computer via SSH.

If you are on Windows 11, we recommend using the default Windows terminal, which you can find by typing terminal in the Windows search bar or Start menu. Next, connect to the edge device via SSH, using the IP address you saw on the login prompt:

ssh core@<ip-address>

If you are not on Windows 11, you can use MobaXTerm to connect to the edge device via SSH. Open MobaXTerm and click on Session in the top left corner. Then click on SSH and enter the IP address of the edge device in the Remote host field. Click on Advanced SSH settings and enter core in the Username field. Click on Save and then on Open.

The default password for the core user is umh.

Import the cluster configuration

  1. From your SSH session, run the following command to get the cluster configuration:

    sudo kubectl config view --raw
    

    The output should look similar to this:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <long string>
        server: https://127.0.0.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
        client-certificate-data: <long string>
        client-key-data: <long string>
    
  2. Copy the output.

  3. Open UMHLens / OpenLens, click on the three horizontal lines in the upper left corner and choose Files > Add Cluster.

  4. Paste the output.

  5. Update the server field to the IP address of the device, e.g.:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <long string>
        server: https://192.168.0.123:6443 # <- update this
    ...
    
  6. If you want, you can also update the name field to something more meaningful, e.g.:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <long string>
        server: https://192.168.0.123:6443
        name: my-edge-device # <- update this
    ...
    
  7. Click on Add clusters.
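
If you prefer to skip the copy-and-paste in steps 1 and 2, you can also dump the kubeconfig directly into a file on your computer and patch the server address in one go. A minimal sketch, assuming the SSH access described above and GNU sed (on macOS, use sed -i ''):

    # Fetch the kubeconfig from the edge device into a local file.
    ssh core@<ip-address> "sudo kubectl config view --raw" > umh-kubeconfig.yaml
    # Replace the loopback address with the device's real IP.
    sed -i 's/127.0.0.1/<ip-address>/' umh-kubeconfig.yaml

You can then paste the file's content into the Add Cluster dialog as described above.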

Access the UMH stack

  1. Open UMHLens / OpenLens on your device.

  2. From the homepage, click on Browse Clusters in Catalog. You should see all your clusters.

  3. Click on a cluster to connect to it.

  4. Navigate to Helm > Releases and change the namespace from default to united-manufacturing-hub in the upper right corner.


  5. Select the united-manufacturing-hub Release to inspect the release details, the installed resources, and the Helm values.

Troubleshooting

The installation stops at the green login prompt

To check the status of the installation, run the following command:

systemctl status installer

If the installation is still running, you should see something like this:

● installer.service - Flatcar Linux Installer
     Loaded: loaded (/usr/lib/systemd/system/installer.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2021-05-12 14:00:00 UTC; 1min 30s ago

Otherwise, the installation failed. You can check the logs to see what went wrong.

I don’t know which drive to select

You can check the drive type from the manual of your device.

  • For SATA drives (spinning hard disk or SSD), the drive type is SDA.
  • For NVMe drives, the drive type is NVMe.

If you are unsure, you can boot into the edge device with any Linux distribution and run the following command:

lsblk

The output should look similar to this:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0   512M  0 part /boot
└─sda2   8:2    0 223.1G  0 part /
sdb      8:16   0  31.8G  0 disk
└─sdb1   8:17   0  31.8G  0 part /mnt/usb

In this case, the drive type is SDA. Generally, the drive type is the name of the first drive in the list, or at least the drive that doesn’t match the size of the USB stick.

I can access the cluster but there are no resources

First completely shut down UMHLens / OpenLens (from the system tray). Then start it again and try to access the cluster.

If that doesn’t work, access the edge device via SSH and run the following command:

systemctl status k3s

If the output contains a status different from active (running), the cluster is not running. Otherwise, the UMH installation failed. You can check the logs with the following commands:

systemctl status umh-install
systemctl status helm-install

If any of the commands returns errors, it is probably easier to reinstall the system.

What’s next

  • You can follow the Getting Started guide to get familiar with the UMH stack.
  • If you already know your way around the United Manufacturing Hub, you can follow the Administration guides to configure the stack for production.

4.1.2 - Flatcar Installation (Virtual Machine)

This page describes how to deploy the United Manufacturing Hub on Flatcar Linux in a virtual machine.

Here is a step-by-step guide on how to deploy the UMH stack on Flatcar Linux, a Linux distribution designed for container workloads, with high security and low maintenance, in a virtual machine.

This is a good option if you want to deploy the UMH stack on a virtual machine to try out the installation process or to test the UMH stack.

Before you begin

You need the latest version of our iPXE boot image.

The image needs to be written to a USB stick. If you want to know how to do this, follow our guide on how to flash an operating system onto a USB-stick.

You also need to have a virtual machine software installed on your computer. We recommend VirtualBox, which is free and open source, but other solutions are also possible.

Additionally, you need to have either UMHLens or OpenLens installed.

Create a virtual machine

Create a new virtual machine in your virtual machine software. Make sure to use the following settings:

  • Operating System: Linux
  • Version: Other Linux (64-bit)
  • CPU cores: 4
  • Memory size: 8 GB
  • Hard disk size: 32 GB

Also, the network settings of the virtual machine must allow communication with the internet and the host machine. If you are using VirtualBox, you can check the network settings by clicking on the virtual machine in the VirtualBox manager and then on Settings. In the Network tab, make sure that the Adapter 1 is set to Bridged Adapter.

Install Flatcar Linux

  1. Start the virtual machine.
  2. Accept the License.
  3. Set a static IP address.
  4. Select sda as the disk.
  5. Select Confirm.

Now the installation will start. Soon after, you should see a green command prompt for the core user. At this point the system is still installing. After a few minutes, depending on the speed of your network, the installation will finish and the system will reboot.

By default, it will reboot into the installation environment. Just shut down the virtual machine and remove the ISO image from the CD drive, then boot the virtual machine again. This way, the installation process will continue, and at the end you will see a grey login prompt that says flatcar-1-umh login:, as well as the IP address of the device.

Please note that the installation may take some time. This largely depends on the available resources including network speed and system performance.

Connect to the virtual machine

You can leave the virtual machine running and connect to it using SSH, so that it is easier to work with.

Open a terminal on your computer and connect to the edge device via SSH, using the IP address you saw on the login prompt:

ssh core@<ip-address>

If you are on Windows, you can use MobaXTerm to connect to the edge device via SSH. Open MobaXTerm and click on Session in the top left corner. Then click on SSH and enter the IP address of the edge device in the Remote host field. Click on Advanced SSH settings and enter core in the Username field. Click on Save and then on Open.

The default password for the core user is umh.

Import the cluster configuration

  1. From your SSH session, run the following command to get the cluster configuration:

    sudo kubectl config view --raw
    

    The output should look similar to this:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <long string>
        server: https://127.0.0.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
        client-certificate-data: <long string>
        client-key-data: <long string>
    
  2. Copy the output.

  3. Open UMHLens / OpenLens, click on the three horizontal lines in the upper left corner and choose Files > Add Cluster.

  4. Paste the output.

  5. Update the server field to the IP address of the device, e.g.:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <long string>
        server: https://192.168.0.123:6443 # <- update this
    ...
    
  6. If you want, you can also update the name field to something more meaningful, e.g.:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <long string>
        server: https://192.168.0.123:6443
        name: my-edge-device # <- update this
    ...
    
  7. Click on Add clusters.

Access the UMH stack

  1. Open UMHLens / OpenLens on your device.

  2. From the homepage, click on Browse Clusters in Catalog. You should see all your clusters.

  3. Click on a cluster to connect to it.

  4. Navigate to Helm > Releases and change the namespace from default to united-manufacturing-hub in the upper right corner.


  5. Select the united-manufacturing-hub Release to inspect the release details, the installed resources, and the Helm values.

Troubleshooting

The installation stops at the green login prompt

To check the status of the installation, run the following command:

systemctl status installer

If the installation is still running, you should see something like this:

● installer.service - Flatcar Linux Installer
     Loaded: loaded (/usr/lib/systemd/system/installer.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2021-05-12 14:00:00 UTC; 1min 30s ago

Otherwise, the installation failed. You can check the logs to see what went wrong.

I can access the cluster but there are no resources

First completely shut down UMHLens / OpenLens (from the system tray). Then start it again and try to access the cluster.

If that doesn’t work, access the virtual machine via SSH and run the following command:

systemctl status k3s

If the output contains a status different from active (running), the cluster is not running. Otherwise, the UMH installation failed. You can check the logs with the following commands:

systemctl status umh-install
systemctl status helm-install

If any of the commands returns errors, it is probably easier to reinstall the system.

I can’t SSH into the virtual machine

If you can’t SSH into the virtual machine, make sure that the network settings for the virtual machine are correct. If you are using VirtualBox, you can check the network settings by clicking on the virtual machine in the VirtualBox manager and then on Settings. In the Network tab, make sure that the Adapter 1 is set to Bridged Adapter, as described in the setup above.

Disable any VPNs that you might be using.

What’s next

  • You can follow the Getting Started guide to get familiar with the UMH stack.
  • If you already know your way around the United Manufacturing Hub, you can follow the Administration guides to configure the stack for production.

4.1.3 - Local k3d Installation

This page describes how to deploy the United Manufacturing Hub locally using k3d.

This can now be done using the Community Edition of the Management Console. Go check it out!

Here is a step-by-step guide on how to deploy the UMH stack using k3d, a lightweight wrapper to run k3s in Docker. k3d makes it very easy to create single- and multi-node k3s clusters in Docker, e.g. for local development on Kubernetes.

Before you begin

Your system must meet the following requirements before you can install the United Manufacturing Hub:

  • CPU cores: 4
  • Memory size: 8 GB
  • Hard disk size: 32 GB

You also need to have Docker up and running and either UMHLens or OpenLens installed.

Install dependencies

  1. Install kubectl. Refer to the kubectl installation if you need help.

    choco install kubernetes-cli
    

    brew install kubectl
    

    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    
  2. Install Helm. Refer to the Helm installation if you need help.

    choco install kubernetes-helm
    

    brew install helm
    

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
    
  3. Install k3d. Refer to the k3d installation if you need help.

    choco install k3d
    

    brew install k3d
    

    curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
    

Create a cluster

  1. Create a cluster.

    k3d cluster create united-manufacturing-hub --api-port 127.0.0.1:6443 --port 8080:8080@server:0 --port 8090:8090@server:0 --port 1880:1880@server:0 --port 5432:5432@server:0 --port 1883:1883@server:0 --port 8883:8883@server:0 --port 9092:9092@server:0 --port 46010:46010@server:0
    

    The --api-port flag is used to expose the Kubernetes API server on the host machine. If the 6443 port is already in use, you can use any other port. The --port flag is used to expose the ports of the services running in the cluster on the host machine. If any of the ports on the left side of the : is already in use, you can use any other port.
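
For example, if ports 6443 and 8080 are already taken on your host, a variant like the following would work; this is just a sketch, where 16443 and 18080 are arbitrary free host ports and the cluster-side ports stay unchanged:

    k3d cluster create united-manufacturing-hub --api-port 127.0.0.1:16443 --port 18080:8080@server:0 --port 8090:8090@server:0 --port 1880:1880@server:0 --port 5432:5432@server:0 --port 1883:1883@server:0 --port 8883:8883@server:0 --port 9092:9092@server:0 --port 46010:46010@server:0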

  2. Verify that the cluster is up and running.

    kubectl get nodes
    

    The output should look like this:

    NAME                                            STATUS   ROLES                  AGE   VERSION
    k3d-united-manufacturing-hub-server-0           Ready    control-plane,master   10s   v1.24.4+k3s1
    

Install the UMH stack

  1. Add the UMH Helm repository.

    helm repo add united-manufacturing-hub https://repo.umh.app/
    
  2. Update the Helm repository.

    helm repo update
    
  3. Create the namespace.

    kubectl create namespace united-manufacturing-hub
    
  4. Install the UMH stack.

    helm install united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub
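
To verify that the release is coming up, you can optionally check the pods from the same terminal; it can take a few minutes until all of them are in the Running state:

    kubectl get pods -n united-manufacturing-hub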
    
Access the UMH stack

  1. Open UMHLens / OpenLens on your device.

  2. From the homepage, click on Browse Clusters in Catalog. You should see all your clusters.

  3. Click on a cluster to connect to it.

  4. Navigate to Helm > Releases and change the namespace from default to united-manufacturing-hub in the upper right corner.


  5. Select the united-manufacturing-hub Release to inspect the release details, the installed resources, and the Helm values.

Troubleshooting

I don’t see the cluster in UMHLens / OpenLens

If you don’t see the cluster in UMHLens / OpenLens, you might have to add the cluster manually. To do so, follow these steps:

  1. Open a terminal and run the following command to get the kubeconfig file:

    k3d kubeconfig get united-manufacturing-hub
    
  2. Copy the output of the command.

  3. Open UMHLens / OpenLens, click on the three horizontal lines in the upper left corner and choose Files > Add Cluster.

  4. Paste the kubeconfig and click Add clusters.

What’s next

  • You can follow the Getting Started guide to get familiar with the UMH stack.
  • If you already know your way around the United Manufacturing Hub, you can follow the Administration guides to configure the stack for production.

4.2 - Upgrading

This section contains the upgrading guides for the different versions of the United Manufacturing Hub.

The United Manufacturing Hub is a continuously evolving product. This means that new features and bug fixes are added to the product on a regular basis. This section contains the upgrading guides for the different versions of the United Manufacturing Hub.

The upgrading process is done by upgrading the Helm chart.
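
Each guide below starts with a reminder to back up the database, Node-RED flows, and your cluster configuration. For the database part, a minimal sketch using pg_dump through kubectl is shown here; it assumes the default TimescaleDB pod and database names used throughout this guide, while the dedicated backup guides also cover Node-RED flows and the cluster configuration:

    # Dump the factoryinsight database from the TimescaleDB pod to a local file.
    kubectl exec -n united-manufacturing-hub united-manufacturing-hub-timescaledb-0 -- \
      pg_dump -U postgres factoryinsight > factoryinsight-backup.sql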

4.2.1 - Upgrade to v0.9.14

This page describes how to upgrade the United Manufacturing Hub to version 0.9.14

This page describes how to upgrade the United Manufacturing Hub to version 0.9.14. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-opcuasimulator-deployment
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-grafanaproxy
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-kafka
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect
    • united-manufacturing-hub-mqttbridge
  4. Open the Network tab.
  5. From the Services section, delete the following services:
    • united-manufacturing-hub-kafka
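
If you prefer the command line over the UMHLens / OpenLens UI, the same cleanup can be sketched with kubectl; the resource names match the lists above, and --ignore-not-found skips anything that is not enabled in your cluster (adapt the names analogously for the other upgrade guides below):

    kubectl -n united-manufacturing-hub delete deployment --ignore-not-found \
      united-manufacturing-hub-factoryinsight-deployment \
      united-manufacturing-hub-opcuasimulator-deployment \
      united-manufacturing-hub-iotsensorsmqtt \
      united-manufacturing-hub-grafanaproxy
    kubectl -n united-manufacturing-hub delete statefulset --ignore-not-found \
      united-manufacturing-hub-hivemqce \
      united-manufacturing-hub-kafka \
      united-manufacturing-hub-nodered \
      united-manufacturing-hub-sensorconnect \
      united-manufacturing-hub-mqttbridge
    kubectl -n united-manufacturing-hub delete service --ignore-not-found united-manufacturing-hub-kafka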

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed. For example, if you want to apply the new tweaks to the resources in order to avoid the Out Of Memory crash of the MQTT Broker, you can change the following values:

    iotsensorsmqtt:
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
        limits:
          cpu: 30m
          memory: 50Mi
    grafanaproxy:
      resources:
        requests:
          cpu: 100m
        limits:
          cpu: 300m
    kafkatopostgresql:
      resources:
        requests:
          memory: 150Mi
        limits:
          memory: 300Mi
    opcuasimulator:
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
        limits:
          cpu: 30m
          memory: 50Mi
    packmlmqttsimulator:
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
        limits:
          cpu: 30m
          memory: 50Mi
    tulipconnector:
      resources:
        limits:
          cpu: 30m
          memory: 50Mi
        requests:
          cpu: 10m
          memory: 20Mi
    redis:
      master:
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
    mqtt_broker:
      resources:
        limits:
          cpu: 700m
          memory: 1700Mi
        requests:
          cpu: 300m
          memory: 1000Mi
    

    You can also enable the new container registry by changing the values in the image or image.repository fields from unitedmanufacturinghub/<image-name> to ghcr.io/united-manufacturing-hub/<image-name>.

  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.2 - Upgrade to v0.9.13

This page describes how to upgrade the United Manufacturing Hub to version 0.9.13

This page describes how to upgrade the United Manufacturing Hub to version 0.9.13. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.3 - Upgrade to v0.9.12

This page describes how to upgrade the United Manufacturing Hub to version 0.9.12

This page describes how to upgrade the United Manufacturing Hub to version 0.9.12. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Backup RBAC configuration for MQTT Broker

This step is only needed if you enabled RBAC for the MQTT Broker and changed the default password. If you did not change the default password, you can skip this step.

  1. Navigate to Config > ConfigMaps.
  2. Select the united-manufacturing-hub-hivemqce-extension ConfigMap.
  3. Copy the content of credentials.xml and save it in a safe place.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Remove MQTT Broker extension PVC

In this version we reduced the size of the MQTT Broker extension PVC. To do so, we need to delete the old PVC and create a new one. This process will set the credentials of the MQTT Broker to the default ones. If you changed the default password, you can restore them after the upgrade.

  1. Navigate to Storage > Persistent Volume Claims.
  2. Select the united-manufacturing-hub-hivemqce-claim-extensions PVC and click Delete.

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. There are some incompatible changes in this version. To avoid errors, you need to change the following values:

    • Remove property console.console.config.kafka.tls.passphrase:

      console:
        console:
          config:
            kafka:
              tls:
                passphrase: "" # <- remove this line
      
    • console.extraContainers: remove the property and its content.

      console:
        extraContainers: {} # <- remove this line
      
    • console.extraEnv: remove the property and its content.

      console:
        extraEnv: "" # <- remove this line
      
    • console.extraEnvFrom: remove the property and its content.

      console:
        extraEnvFrom: ""  # <- remove this line
      
    • console.extraVolumeMounts: remove the |- characters right after the property name. It should look like this:

      console:
        extraVolumeMounts: # <- remove the `|-` characters in this line
          - name: united-manufacturing-hub-kowl-certificates
            mountPath: /SSL_certs/kafka
            readOnly: true
      
    • console.extraVolumes: remove the |- characters right after the property name. It should look like this:

      console:
        extraVolumes: # <- remove the `|-` characters in this line
          - name: united-manufacturing-hub-kowl-certificates
            secret:
              secretName: united-manufacturing-hub-kowl-secrets
      
    • Change the console.service property to the following:

      console:
        service:
          type: LoadBalancer
          port: 8090
          targetPort: 8080
      
    • Change the Redis URI in factoryinsight.redis:

      factoryinsight:
        redis:
          URI: united-manufacturing-hub-redis-headless:6379
      
    • Set the following values in the kafka section to true, or add them if they are missing:

      kafka:
        externalAccess:
          autoDiscovery:
            enabled: true
          enabled: true
        rbac:
          create: true
      
    • Change redis.architecture to standalone:

      redis:
        architecture: standalone
      
    • redis.sentinel: remove the property and its content.

      redis:
        sentinel: {} # <- remove all the content of this section
      
    • Remove the property redis.master.command:

      redis:
        master:
          command: /run.sh # <- remove this line
      
    • timescaledb-single.fullWalPrevention: remove the property and its content.

      timescaledb-single:
        fullWalPrevention:              # <- remove this line
          checkFrequency: 30            # <- remove this line
          enabled: false                # <- remove this line
          thresholds:                   # <- remove this line
            readOnlyFreeMB: 64          # <- remove this line
            readOnlyFreePercent: 5      # <- remove this line
            readWriteFreeMB: 128        # <- remove this line
            readWriteFreePercent: 8     # <- remove this line
      
    • timescaledb-single.loadBalancer: remove the property and its content.

      timescaledb-single:
        loadBalancer:          # <- remove this line
          annotations:         # <- remove this line
            service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
          enabled: true        # <- remove this line
          port: 5432           # <- remove this line
      
    • timescaledb-single.replicaLoadBalancer: remove the property and its content.

      timescaledb-single:
        replicaLoadBalancer:
          annotations:         # <- remove this line
            service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
          enabled: false       # <- remove this line
          port: 5432           # <- remove this line
      
    • timescaledb-single.secretNames: remove the property and its content.

      timescaledb-single:
        secretNames: {} # <- remove this line 
      
    • timescaledb-single.unsafe: remove the property and its content.

      timescaledb-single:
        unsafe: false # <- remove this line
      
    • Change the value of the timescaledb-single.service.primary.type property to LoadBalancer:

      timescaledb-single:
        service:
          primary:
            type: LoadBalancer
      
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.4 - Upgrade to v0.9.11

This page describes how to upgrade the United Manufacturing Hub to version 0.9.11

This page describes how to upgrade the United Manufacturing Hub to version 0.9.11. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.5 - Upgrade to v0.9.10

This page describes how to upgrade the United Manufacturing Hub to version 0.9.10

This page describes how to upgrade the United Manufacturing Hub to version 0.9.10. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Grafana plugins

In this release, the Grafana version has been updated from 8.5.9 to 9.3.1. Check the release notes for further information about the changes.

Additionally, the way default plugins are installed has changed. Unfortunately, it is necessary to manually install all the plugins that were previously installed.

If you didn’t install any plugin other than the default ones, you can skip this section.

Follow these steps to see the list of plugins installed in your cluster:

  1. Open the browser and go to the Grafana dashboard.

  2. Navigate to the Configuration > Plugins tab.

  3. Select the Installed filter.

    (Screenshot: installed Grafana plugins)

  4. Write down all the plugins that you manually installed. You can recognize them by not having the Core tag.

    (Screenshot: core and signed plugins)

    The following ones are installed by default, therefore you can skip them:

    • ACE.SVG by Andrew Rodgers
    • Button Panel by UMH Systems Gmbh
    • Button Panel by CloudSpout LLC
    • Discrete by Natel Energy
    • Dynamic Text by Marcus Olsson
    • FlowCharting by agent
    • Pareto Chart by isaozler
    • Pie Chart (old) by Grafana Labs
    • Timepicker Buttons Panel by williamvenner
    • UMH Datasource by UMH Systems Gmbh
    • Untimely by factry
    • Worldmap Panel by Grafana Labs

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-grafana
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed.

    • In the grafana section, find the extraInitContainers field and change its value to the following:

          - image: unitedmanufacturinghub/grafana-umh:1.1.2
            name: init-plugins
            imagePullPolicy: IfNotPresent
            command: ['sh', '-c', 'cp -r /plugins /var/lib/grafana/']
            volumeMounts:
              - name: storage
                mountPath: /var/lib/grafana
      
    • Make these changes in the kafka section:

      • Set the value of the heapOpts field to -Xmx2048m -Xms2048m.

      • Replace the content of the resources section with the following:

            limits:
              cpu: 1000m
              memory: 4Gi
            requests:
              cpu: 100m
              memory: 2560Mi
        
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

Afterwards, you can reinstall the additional Grafana plugins.
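
If you prefer not to click through the Grafana UI for this, the plugins can also be reinstalled from the command line; a sketch, where <plugin-id> is a placeholder for each plugin you wrote down earlier, and the Grafana pod may need a restart afterwards so that the plugin is loaded:

    # Install one plugin inside the running Grafana container.
    kubectl -n united-manufacturing-hub exec deploy/united-manufacturing-hub-grafana -- \
      grafana-cli plugins install <plugin-id>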

Replace VerneMQ with HiveMQ

In this upgrade we switched from using VerneMQ to HiveMQ as our MQTT Broker (you can read the blog article about it).

While this process is fully backwards compatible, we suggest updating Node-RED flows and any other additional service that uses MQTT to use the new service broker called united-manufacturing-hub-mqtt. The old united-manufacturing-hub-vernemq service is still functional and, despite the name, also points to HiveMQ, but it will be removed in a future upgrade.

Additionally, for production environments, we recommend enabling RBAC for the MQTT Broker.

Please double-check that all of your services can connect to the new MQTT broker. They might need to be restarted so that they can resolve the DNS name and get the new IP address. Also, with tools like ChirpStack, you may need to specify the client-id explicitly, as the automatically generated ID worked with VerneMQ but is now declined by HiveMQ.
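
To quickly check that the new broker is reachable from inside the cluster, you can run a throwaway subscriber; a sketch, assuming the default service name and port and the public eclipse-mosquitto image:

    # Exits after receiving a single message on any ia/ topic.
    kubectl run mqtt-test --rm -it --restart=Never --image=eclipse-mosquitto -- \
      mosquitto_sub -h united-manufacturing-hub-mqtt.united-manufacturing-hub -p 1883 -t 'ia/#' -C 1

If nothing arrives, check the service name, the RBAC settings, and the client-id notes above.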

Troubleshooting

Some microservices can’t connect to the new MQTT broker

If you are using the united-manufacturing-hub-mqtt service, but some microservice can’t connect to it, restarting the microservice might solve the issue. To do so, you can delete the Pod of the microservice and let Kubernetes recreate it.

ChirpStack can’t connect to the new MQTT broker

ChirpStack uses a generated client-id to connect to the MQTT broker. This client-id is not accepted by HiveMQ. To solve this issue, you can set the client_id field in the integration.mqtt section of the ChirpStack configuration file to a fixed value:

[integration]
...
  [integration.mqtt]
  client_id="chirpstack"

4.2.6 - Upgrade to v0.9.9

This page describes how to upgrade the United Manufacturing Hub to version 0.9.9

This page describes how to upgrade the United Manufacturing Hub to version 0.9.9. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed. In the grafana section, find the extraInitContainers field and change the value of the image field to unitedmanufacturinghub/grafana-plugin-extractor:0.1.4.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.7 - Upgrade to v0.9.8

This page describes how to upgrade the United Manufacturing Hub to version 0.9.8

This page describes how to upgrade the United Manufacturing Hub to version 0.9.8. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.8 - Upgrade to v0.9.7

This page describes how to upgrade the United Manufacturing Hub to version 0.9.7

This page describes how to upgrade the United Manufacturing Hub to version 0.9.7. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
    • Make these changes in the grafana section:

      • Replace the content of datasources with the following:

            datasources.yaml:
              apiVersion: 1
              datasources:
              - access: proxy
                editable: false
                isDefault: true
                jsonData:
                  apiKey: $FACTORYINSIGHT_PASSWORD
                  apiKeyConfigured: true
                  customerId: $FACTORYINSIGHT_CUSTOMERID
                  serverURL: http://united-manufacturing-hub-factoryinsight-service/
                name: umh-datasource
                orgId: 1
                type: umh-datasource
                url: http://united-manufacturing-hub-factoryinsight-service/
                version: 1
              - access: proxy
                editable: false
                isDefault: false
                jsonData:
                  apiKey: $FACTORYINSIGHT_PASSWORD
                  apiKeyConfigured: true
                  baseURL: http://united-manufacturing-hub-factoryinsight-service/
                  customerID: $FACTORYINSIGHT_CUSTOMERID
                name: umh-v2-datasource
                orgId: 1
                type: umh-v2-datasource
                url: http://united-manufacturing-hub-factoryinsight-service/
                version: 1
        
      • Replace the content of env with the following:

            GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS: umh-datasource,umh-factoryinput-panel,umh-v2-datasource
        
      • Replace the content of extraInitContainers with the following:

          - name: init-umh-datasource
            image: unitedmanufacturinghub/grafana-plugin-extractor:0.1.3
            volumeMounts:
            - name: storage
              mountPath: /var/lib/grafana
            imagePullPolicy: IfNotPresent
        
    • In the timescaledb-single section, make sure that the image.tag field is set to pg13.8-ts2.8.0-p1.

  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

Change Factoryinsight API version

The Factoryinsight API version has changed from v1 to v2. To make sure that you are using the new version, click on any Factoryinsight Pod and check that the VERSION environment variable is set to 2.

If it’s not, follow these steps:

  1. Navigate to the Workloads > Deployments tab.
  2. Select the united-manufacturing-hub-factoryinsight-deployment deployment.
  3. Click the Edit button to open the deployment’s configuration.

    (Screenshot: the deployment Edit button in Lens)

  4. Find the spec.template.spec.containers[0].env field.
  5. Set the value field of the VERSION variable to 2.
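
If you prefer the command line, the same change can be applied with kubectl; a minimal sketch, assuming your kubeconfig points at the cluster:

    kubectl -n united-manufacturing-hub set env \
      deployment/united-manufacturing-hub-factoryinsight-deployment VERSION=2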

4.2.9 - Upgrade to v0.9.6

This page describes how to upgrade the United Manufacturing Hub to version 0.9.6

This page describes how to upgrade the United Manufacturing Hub to version 0.9.6. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Add new index to the database

In this version, a new index has been added to the processvaluetable table, speeding up the queries.

Open a shell in the database

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    (Screenshot: the Pod Shell button in Lens)

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Create the index

Execute the following query:

CREATE INDEX ON processvaluetable(valuename, asset_id) WITH (timescaledb.transaction_per_chunk);
REINDEX TABLE processvaluetable;

This command could take a while to complete, especially on larger tables.
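
To confirm that the index exists, you can run an optional check: leave psql with \q, then run the following from the pod shell:

    psql -d factoryinsight -c "SELECT indexname FROM pg_indexes WHERE tablename = 'processvaluetable';"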

Type exit to close the shell.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

4.2.10 - Upgrade to v0.9.5

This page describes how to upgrade the United Manufacturing Hub to version 0.9.5

This page describes how to upgrade the United Manufacturing Hub to version 0.9.5. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub
  • URL: https://repo.umh.app

Then click Add.

Alter ordertable constraint

In this version, one of the constraints of the ordertable table has been modified.

Make sure to back up the database before executing the following steps.

Open a shell in the database

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    (Screenshot: the Pod Shell button in Lens)

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Alter the table

  1. Check for possible conflicts in the ordertable table:

    SELECT order_name, asset_id, count(*) FROM ordertable GROUP BY order_name, asset_id HAVING count(*) > 1;
    

    If the result is empty, you can skip the next step.

  2. Delete the duplicates:

    DELETE FROM ordertable ox USING (
         SELECT MIN(CTID) as ctid, order_name, asset_id
         FROM ordertable
         GROUP BY order_name, asset_id HAVING count(*) > 1
         ) b
    WHERE ox.order_name = b.order_name AND ox.asset_id = b.asset_id
    AND ox.CTID <> b.ctid;
    

    If the data cannot be deleted, you have to manually update each duplicate order_name to a unique value.

  3. Get the name of the constraint:

    SELECT conname FROM pg_constraint WHERE conrelid = 'ordertable'::regclass AND contype = 'u';
    
  4. Drop the constraint:

    ALTER TABLE ordertable DROP CONSTRAINT ordertable_asset_id_order_id_key;
    
  5. Add the new constraint:

    ALTER TABLE ordertable ADD CONSTRAINT ordertable_asset_id_order_name_key UNIQUE (asset_id, order_name);
    

Now you can close the shell by typing exit and continue with the upgrade process.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed.

    • Enable the startup probe for the Kafka Broker by adding the following into the kafka section:

      startupProbe:
        enabled: true
        failureThreshold: 600
        periodSeconds: 10
        timeoutSeconds: 10
      
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

Changes to the messages

Some messages have been modified in this version. You need to update some payloads in your Node-RED flows.

  • modifyState:
    • start_time_stamp has been renamed to timestamp_ms
    • end_time_stamp has been renamed to timestamp_ms_end
  • modifyProducedPieces:
    • start_time_stamp has been renamed to timestamp_ms
    • end_time_stamp has been renamed to timestamp_ms_end
  • deleteShiftByAssetIdAndBeginTimestamp and deleteShiftById have been removed. Use the deleteShift message instead.

4.2.11 - Upgrade to v0.9.4

This page describes how to upgrade the United Manufacturing Hub to version 0.9.4

This page describes how to upgrade the United Manufacturing Hub to version 0.9.4. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

  • Helm repo name: united-manufacturing-hub (any name works)
  • URL: https://repo.umh.app

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed.

    • If you have enabled the Kafka Bridge, find the section _000_commonConfig.kafkaBridge.topicmap and set the value to the following:

      - bidirectional: false
        name: HighIntegrity
        send_direction: to_remote
        topic: ^ia\.(([^r.](\d|-|\w)*)|(r[b-z](\d|-|\w)*)|(ra[^w]))\.(\d|-|\w|_)+\.(\d|-|\w|_)+\.((addMaintenanceActivity)|(addOrder)|(addParentToChild)|(addProduct)|(addShift)|(count)|(deleteShiftByAssetIdAndBeginTimestamp)|(deleteShiftById)|(endOrder)|(modifyProducedPieces)|(modifyState)|(productTag)|(productTagString)|(recommendation)|(scrapCount)|(startOrder)|(state)|(uniqueProduct)|(scrapUniqueProduct))$
      - bidirectional: false
        name: HighThroughput
        send_direction: to_remote
        topic: ^ia\.(([^r.](\d|-|\w)*)|(r[b-z](\d|-|\w)*)|(ra[^w]))\.(\d|-|\w|_)+\.(\d|-|\w|_)+\.(process[V|v]alue).*$
      

      For more information, see the Kafka Bridge configuration

    • If you have enabled Barcodereader, find the barcodereader section and set the following values, adding the missing ones and updating the already existing ones:

      enabled: false
      image:
        pullPolicy: IfNotPresent
      resources:
        requests:
          cpu: "2m"
          memory: "30Mi"
        limits:
          cpu: "10m"
          memory: "60Mi"
      scanOnly: false # Debug mode, will not send data to kafka
      
  5. Click Upgrade.

The upgrade process can take a few minutes. The process is complete when the Status field of the release is Deployed.

4.3 - Administration

This section describes how to manage and configure the United Manufacturing Hub cluster.

In this section, you will find information about how to manage and configure the United Manufacturing Hub cluster, from customizing the cluster to access the different services.

4.3.1 - Access the Database

This page describes how to access the United Manufacturing Hub database to perform SQL operations using a database client, the CLI or Grafana.

There are multiple ways to access the database. If you just want to visualize data, then Grafana or a database client is the easiest way. If you also need to perform SQL commands, then a database client or the CLI is the best option.

Generally, using a database client gives you the most flexibility, since you can both visualize the data and manipulate the database. However, it requires you to install a database client on your machine.

Using the CLI gives you more control over the database, but it requires you to have a good understanding of SQL.

Grafana, on the other hand, is for visualizing data. It is a good option if you just want to see the data in a dashboard and don’t need to manipulate it.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Get the database credentials

If you are not using the CLI, you need to know the database credentials. You can find them in the timescale-post-init-pw Secret. By default, the username is factoryinsight and the password is changeme.

...
ALTER USER factoryinsight WITH PASSWORD 'changeme';
...
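
If you prefer the command line, you can also read the Secret with kubectl; a minimal sketch, assuming the default united-manufacturing-hub namespace:

kubectl get secret timescale-post-init-pw -n united-manufacturing-hub \
  -o go-template='{{range $k, $v := .data}}{{$k}}: {{base64decode $v}}{{"\n"}}{{end}}'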

Access the database using a database client

There are many database clients that you can use to access the database. Here’s a list of some of the most popular database clients:

Database clients

  • pgAdmin: free; Windows, macOS, Linux
  • DataGrip: paid; Windows, macOS, Linux
  • DBeaver: free and paid editions; Windows, macOS, Linux

For the sake of this tutorial, pgAdmin will be used as an example, but other clients have similar functionality. Refer to the specific client documentation for more information.

Forward the database port to your local machine

  1. From the Pods section in UMHLens / OpenLens, find the united-manufacturing-hub-timescaledb-0 Pod.
  2. In the Pod Details window, click the Forward button next to the postgresql:5432/TCP port.
  3. Enter a port number, such as 5432, and click Start. You can disable the Open in browser option if you don’t want to open the port in your browser.
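
The same port-forward can also be started from a terminal with kubectl; a minimal sketch, assuming the default united-manufacturing-hub namespace:

kubectl port-forward -n united-manufacturing-hub pod/united-manufacturing-hub-timescaledb-0 5432:5432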

Using pgAdmin

You can use pgAdmin to access the database. To do so, you need to install the pgAdmin client on your machine. For more information, see the pgAdmin documentation.

  1. Once you have installed the client, you can add a new server from the main window.

    pgAdmin main window
    pgAdmin main window

  2. In the General tab, give the server a meaningful name. In the Connection tab, enter the database credentials:

    • The Host name/address is localhost.
    • The Port is the port you forwarded.
    • The Maintenance database is postgres.
    • The Username and Password are the ones you found in the Secret.
  3. Click Save to save the server.

    pgAdmin connection window
    pgAdmin connection window

You can now connect to the database by double-clicking the server.

Use the side menu to navigate through the server. The tables are listed under the Schemas > public > Tables section of the factoryinsight database.

Refer to the pgAdmin documentation for more information on how to use the client to perform database operations.
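
If you have the psql client installed locally, you can also verify the forwarded connection from a terminal; a minimal sketch (it prompts for the password from the Secret):

psql -h localhost -p 5432 -U factoryinsight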

Access the database using the command line interface

You can access the database from the command line using the psql command directly from the united-manufacturing-hub-timescaledb-0 Pod.

You will not need credentials to access the database from the Pod’s CLI.

Open a shell in the database Pod

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Perform SQL commands

Once you have a shell in the database, you can perform SQL commands.

  1. For example, to create an index on the processValueTable:

    CREATE INDEX ON processvaluetable (valuename);
    
  2. When you are done, exit the postgres shell:

     exit
    

Access the database using Grafana

You can use Grafana to visualize data from the database.

Add PostgreSQL as a data source

  1. Open the Grafana dashboard in your browser.

  2. From the Configuration (gear) icon, select Data Sources.

  3. Click Add data source and select PostgreSQL.

  4. Configure the connection to the database:

    • The Host is united-manufacturing-hub.united-manufacturing-hub.svc.cluster.local:5432.
    • The Database is factoryinsight.
    • The User and Password are the ones you found in the Secret.
    • Set TLS/SSL Mode to require.
    • Enable TimescaleDB.

    Everything else can be left as the default.

    Grafana PostgreSQL data source
    Grafana PostgreSQL data source

  5. Click Save & Test to save the data source.

  6. Now click on Explore to start querying the database.

  7. You can also create dashboards using the newly created data source.
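
As a starting point for Explore, here is a minimal query sketch, assuming the default processvaluetable schema with timestamp, valuename and value columns and using 'temperature' as a placeholder value name:

SELECT timestamp AS "time", value
FROM processvaluetable
WHERE valuename = 'temperature'
ORDER BY timestamp DESC
LIMIT 100;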

What’s next

4.3.2 - Access Services From Within the Cluster

This page describes how to access services from within the cluster.

All the services deployed in the cluster are visible to each other. That makes it easy to connect them together.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Connect to a service from another service

To connect to a service from another service, you can use the service name as the host name.

To get a list of available services and related ports you can open UMHLens / OpenLens and go to Network > Services.

All of them are available from within the cluster. The ones of type LoadBalancer are also available from outside the cluster using the node IP.

Example

The most common use case is to connect to the MQTT Broker from Node-RED.

To do that, when you create the MQTT node, you can use the service name united-manufacturing-hub-mqtt as the host name and one of the ports listed in the Ports column.

The MQTT service name has changed since version 0.9.10. If you are using an older version, use united-manufacturing-hub-vernemq instead of united-manufacturing-hub-mqtt.
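
Cluster DNS resolves both the short service name and its fully qualified form, so the following host values are equivalent from inside the cluster; a minimal sketch, assuming the default united-manufacturing-hub namespace and the standard MQTT port:

united-manufacturing-hub-mqtt:1883
united-manufacturing-hub-mqtt.united-manufacturing-hub.svc.cluster.local:1883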

What’s next

4.3.3 - Access Services Outside the Cluster

This page describes how to access services from outside the cluster.

Some of the microservices in the United Manufacturing Hub are exposed outside the cluster with a LoadBalancer service. A LoadBalancer is a service that exposes a set of Pods on the same network as the cluster, but not necessarily to the entire internet. The LoadBalancer service provides a single IP address that can be used to access the Pods.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Accessing the services

The LoadBalancer service provides a single IP address that can be used to access the Pods. To find the IP address, open UMHLens / OpenLens and navigate to Network > Services. The IP address is listed in the External IP column.

To access the services, use the IP address and the port number of the service, e.g. http://192.168.1.100:8080.

If you installed the United Manufacturing Hub on your local machine, either using the Management Console or the command line, the services are accessible at localhost:<port-number>.

Services with LoadBalancer by default

The following services are exposed outside the cluster with a LoadBalancer service by default:

To access Node-RED, you need to use the /node-red path, e.g. http://192.168.1.100:1880/node-red.

Services without a LoadBalancer

Some of the microservices in the United Manufacturing Hub are exposed via a ClusterIP service. That means that they are only accessible from within the cluster itself. To access them from outside the cluster, you need to create a LoadBalancer service.

Create a LoadBalancer service

If you are looking to expose the Kafka broker, follow the instructions in the Access Kafka outside the cluster page.

For any other microservice, follow these steps to enable the LoadBalancer service:

  1. Open UMHLens / OpenLens and navigate to Network > Services.

  2. Select the service and click the Edit button.

  3. Scroll down to the status.loadBalancer section and change it to the following:

    status:
      loadBalancer:
        ingress:
        - ip: <external-ip>
    

    Replace <external-ip> with the external IP address of the node.

  4. Scroll to the spec.type section and change the value from ClusterIP to LoadBalancer.

  5. Click Save to apply the changes.
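
The same change can also be made from a terminal; a minimal kubectl sketch, assuming the default united-manufacturing-hub namespace, where <service-name> is the service you want to expose:

kubectl patch service <service-name> -n united-manufacturing-hub -p '{"spec": {"type": "LoadBalancer"}}'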

If you installed the United Manufacturing Hub on your local machine, either using the Management Console or the command line, you also need to map the port exposed by the k3d cluster to a port on your local machine. To do that, run the following command:

k3d cluster edit united-manufacturing-hub --port-add "<local-port>:<cluster-port>@server:0"

Replace <local-port> with a free port number on your local machine, and <cluster-port> with the port number of the service.

Port forwarding in UMHLens / OpenLens

If you don’t want to create a LoadBalancer service, effectively exposing the microservice to anyone that has access to the host IP address, you can use UMHLens / OpenLens to forward the port to your local machine.

  1. Open UMHLens / OpenLens and navigate to Network > Services.
  2. Select the service that you want to access.
  3. Scroll down to the Connection section and click the Forward… button.
  4. From the dialog, you can choose a port on your local machine to forward the cluster port to, or you can leave it empty to use a random port.
  5. Click Forward to apply the changes.
  6. If you left the checkbox Open in browser checked, then the service will open in your default browser.

You can see and manage the forwarded ports of your cluster in the Network > Port Forwarding section.

Port forwarding can be unstable, especially if the connection to the cluster is slow. If you are experiencing issues, try to create a LoadBalancer service instead.

Security considerations

MQTT broker

There are some security considerations to keep in mind when exposing the MQTT broker.

By default, the MQTT broker is configured to allow anonymous connections. This means that anyone can connect to the broker without providing any credentials. This is not recommended for production environments.

To secure the MQTT broker, you can configure it to require authentication. For that, you can either enable RBAC or set up HiveMQ PKI (recommended for production environments).

If you are using a version of the United Manufacturing Hub older than 0.9.10, then you need to change the ACL configuration to allow your MQTT client to connect to the broker.

Troubleshooting

LoadBalancer service stuck in Pending state

If the LoadBalancer service is stuck in the Pending state, it probably means that the host port is already in use. To fix this, edit the service and change the spec.ports.port field to a different port number.

What’s next

4.3.4 - Access Kafka Outside the Cluster

This page describes how to access Kafka from outside the cluster.

By default the Kafka broker is only available from within the cluster, therefore you cannot access it from external applications.

You can enable external access from the Kafka configuration.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Enable external access from Kafka configuration

  1. From UMHLens / OpenLens, go to Helm > Releases.

  2. Click on the Upgrade button.

  3. Search for the kafka section and edit the following values:

    ...
    kafka:
    ...
      externalAccess:
        autoDiscovery:
          enabled: true
        ...
        enabled: true
      ...
      rbac:
        create: true
    ...
    
  4. Click Upgrade.

To verify that the LoadBalancer service is created, go to Network > Services and search for united-manufacturing-hub-kafka-external.

Now you can connect to Kafka from external applications using the node IP and the port 9094.
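
To quickly check that the broker is reachable, you can list the cluster metadata with a Kafka client from an external machine; a minimal sketch using kcat (formerly kafkacat), which is not part of the United Manufacturing Hub and must be installed separately:

kcat -b <node-ip>:9094 -L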

What’s next

4.3.5 - Expose Grafana to the Internet

This page describes how to expose Grafana to the Internet.

This page describes how to expose Grafana to the Internet so that you can access it from outside the Kubernetes cluster.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Enable the ingress

To expose Grafana to the Internet, you need to enable the ingress.

  1. Open UMHLens / OpenLens and go to the Helm > Releases page.
  2. Click the Upgrade button and search for Grafana.
  3. Scroll down to the ingress section.
  4. Set the enabled field to true.
  5. Add your domain name to the hosts field (see the sketch below).
  6. Click Upgrade to apply the changes.
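
The relevant part of the Helm values could then look like the following; a minimal sketch, assuming the standard Grafana chart layout and using grafana.example.com as a placeholder for your domain:

grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.example.com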

Remember to add a DNS record for your domain name that points to the external IP address of the Kubernetes host. You can find the external IP address of the Kubernetes host on the Nodes page in UMHLens / OpenLens.

What’s next

4.3.6 - Install Custom Drivers in Node-RED

This page describes how to install custom drivers in Node-RED.

Node-RED runs on Alpine Linux as a non-root user, which means you can’t install packages with apk. This tutorial shows you how to install packages with proper security measures.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Change the security context

  1. From the StatefulSets section in UMHLens / OpenLens, click on united-manufacturing-hub-nodered to open the details page.

  2. Click the Edit button to open the StatefulSet’s configuration.

    Lens StatefulSet Edit
    Lens StatefulSet Edit

  3. Press Ctrl+F and search for securityContext.

  4. Set the values of the runAsUser field to 0, of fsGroup to 0, and of runAsNonRoot to false.

    ...
           securityContext:
             runAsUser: 0
       restartPolicy: Always
       terminationGracePeriodSeconds: 30
       dnsPolicy: ClusterFirst
       securityContext:
         runAsUser: 0
         runAsNonRoot: false
         fsGroup: 0
    ...
    
  5. Click Save.

Install the packages

  1. From the Pods section in UMHLens / OpenLens, click on united-manufacturing-hub-nodered-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Install the packages with apk:

    apk add <package>
    

    For example, to install unixodbc:

    apk add unixodbc
    

    You can find the list of available packages here.

  4. Exit the shell.

Revert the security context

For security reasons, you should revert the security context after you install the packages.

  1. From the StatefulSets section in UMHLens / OpenLens, click on united-manufacturing-hub-nodered to open the details page.

  2. Click the Edit button to open the StatefulSet’s configuration.

    Lens StatefulSet Edit
    Lens StatefulSet Edit

  3. Set the values of the runAsUser field to 1000, of fsGroup to 1000, and of runAsNonRoot to true.

    ...
           securityContext:
             runAsUser: 1000
       restartPolicy: Always
       terminationGracePeriodSeconds: 30
       dnsPolicy: ClusterFirst
       securityContext:
         runAsUser: 1000
         runAsNonRoot: true
         fsGroup: 1000
    ...
    
  4. Click Save.

What’s next

4.3.7 - Execute Kafka Shell Scripts

This page describes how to execute Kafka shell scripts.

When working with Kafka, you may need to execute shell scripts to perform administrative tasks. This page describes how to execute Kafka shell scripts.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Open a shell in the Kafka container

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-kafka-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Navigate to the Kafka bin directory:

    cd /opt/bitnami/kafka/bin
    
  4. Execute any Kafka shell scripts. For example, to list all topics:

    ./kafka-topics.sh --list --bootstrap-server localhost:9092
    
  5. To exit the shell:

    exit
    

What’s next

4.3.8 - Reduce database size

This page describes how to reduce the size of the United Manufacturing Hub database.

Over time, time-series data can consume a large amount of disk space. To reduce the amount of disk space used by time-series data, there are three options:

  • Enable data compression. This reduces the required disk space by applying mathematical compression to the data. This compression is lossless, so the data is not changed in any way. However, it will take more time to compress and decompress the data. For more information, see how TimescaleDB compression works.
  • Enable data retention. This deletes old data that is no longer needed, by setting policies that automatically delete data older than a specified time. This can be beneficial for managing the size of the database, as well as adhering to data retention regulations. However, by definition, data loss will occur. For more information, see how TimescaleDB data retention works.
  • Downsampling. This is a method of reducing the amount of data stored by aggregating data points over a period of time. For example, you can aggregate data points over a 30-minute period, instead of storing each data point. If exact data is not required, downsampling can be useful to reduce database size. However, data may be less accurate. See the sketch at the end of this page.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Open the database shell

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Enable data compression

To enable data compression, you need to execute the following SQL command from the database shell:

SELECT add_compression_policy('processvaluetable', INTERVAL '7 days');

This command will set a compression policy on the processvaluetable table, which will compress data older than 7 days.
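
Note that TimescaleDB only accepts a compression policy on a hypertable that has compression enabled, so you may need to enable it first; a minimal sketch, assuming asset_id is a sensible segment-by column for your workload:

ALTER TABLE processvaluetable SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id');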

Enable data retention

To enable data retention, you need to execute the following SQL command from the database shell:

SELECT add_retention_policy('processvaluetable', INTERVAL '7 days');

This command will set a retention policy on the processvaluetable table, which will delete data older than 7 days.
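
For downsampling, a TimescaleDB continuous aggregate is one common approach; a minimal sketch that averages values over 30-minute buckets, assuming processvaluetable has timestamp, asset_id, valuename and value columns (verify against your schema before using it):

CREATE MATERIALIZED VIEW processvalue_30min
WITH (timescaledb.continuous) AS
SELECT time_bucket('30 minutes', timestamp) AS bucket,
       asset_id,
       valuename,
       avg(value) AS avg_value
FROM processvaluetable
GROUP BY time_bucket('30 minutes', timestamp), asset_id, valuename;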

What’s next

4.3.9 - Delete Assets from the Database

This task shows you how to delete assets from the database.

This is useful if you have created assets by mistake, or to delete the ones that are no longer needed.

This task deletes data from the database. Make sure you have a backup of the database before you proceed.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Also, make sure to backup the database before you proceed. For more information, see Backing Up and Restoring the Database.

Open the database shell

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Choose the assets to delete

You have multiple options for deleting assets: deleting a single asset, all assets in a location, or all assets with a specific name.

To do so, you can customize the SQL command using different filters. Specifically, a combination of the following filters:

  • assetname
  • location
  • customer

To filter an SQL command, you can use the WHERE clause. For example, using all of the filters:

WHERE assetname = '<my-asset>' AND location = '<my-location>' AND customer = '<my-customer>'

You can use any combination of the filters, even just one of them.

Here are some examples:

  • Delete all assets with the same name from any location and any customer:

    WHERE assetname = '<asset-name>'
    
  • Delete all assets in a specific location:

     WHERE location = '<location-name>'
    
  • Delete all assets with the same name in a specific location:

    WHERE assetname = '<asset-name>' AND location = '<location-name>'
    
  • Delete all assets with the same name in a specific location for a single customer:

    WHERE assetname = 'my-asset' AND location = 'my-location' AND customer = 'customer'
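
Before running any DELETE, it is worth previewing which assets match your filter; a minimal sketch, where <filter> stands for the WHERE clause you chose above:

SELECT * FROM assettable <filter>;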
    

Delete the assets

Once you know the filters you want to use, you can use the following SQL commands to delete assets:

BEGIN;

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM shifttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM counttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM ordertable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM processvaluestringtable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM processvaluetable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM producttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM statetable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM assettable WHERE id IN (SELECT id FROM assets_to_be_deleted);

COMMIT;

Optionally, you can add the following code before the last WITH statement if you used the track&trace feature:

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>), uniqueproducts_to_be_deleted AS (SELECT uniqueproductid FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted))
DELETE FROM producttagtable WHERE product_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>), uniqueproducts_to_be_deleted AS (SELECT uniqueproductid FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted))
DELETE FROM producttagstringtable WHERE product_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>), uniqueproducts_to_be_deleted AS (SELECT uniqueproductid FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted))
DELETE FROM productinheritancetable WHERE parent_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted) OR child_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

What’s next

4.3.10 - Change the Language in Factoryinsight

This page describes how to change the language in Factoryinsight, in order to display the returned text in a different language.

You can change the language in Factoryinsight if you want to localize the returned text, like stop codes, to a different language.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Access the database shell

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Change the language

Execute the following command to change the language:

INSERT INTO configurationtable (customer, languagecode) VALUES ('factoryinsight', <code>) ON CONFLICT(customer) DO UPDATE SET languagecode=<code>;

where <code> is the language code. For example, to change the language to German, use 0.

Supported languages

Factoryinsight supports the following languages:

Supported languages

  • German: 0
  • English: 1
  • Turkish: 2

What’s next

4.3.11 - Explore Cached Data

This page shows how to explore cached data in the United Manufacturing Hub.

When working with the United Manufacturing Hub, you might want to visualize information about the cached data. This page shows how you can access the cache and explore the data.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Open a shell in the cache Pod

  1. Open UMHLens / OpenLens and navigate to the Config > Secrets page.

  2. Get the cache password from the Secret redis-secret.

  3. From the Pods section click on united-manufacturing-hub-redis-master-0 to open the details page.

    If you have multiple cache Pods, you can select any of them.
  4. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  5. Enter the shell:

    redis-cli -a <cache-password>
    
  6. Now you can execute any command. For example, to list all the keys in the cache, run:

    KEYS *
    

    Or, to get the number of keys, run:

    DBSIZE
    

For more information about Redis commands, see the Redis documentation.

What’s next

4.3.12 - Optimize Time Consuming Queries

This page shows how to optimize the database in order to reduce the time needed to execute queries.

When you have a large database, it is possible that some queries take a long time to execute. This is especially noticeable in Grafana, when the dropdown menu of the datasource takes a long time to load, or does not load at all.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Your United Manufacturing Hub must be version 0.9.4 or later. To check the United Manufacturing Hub version, open UMHLens / OpenLens and go to Helm > Releases. The version is listed in the Version column.

Open a shell in the database container

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Create an index

Indexes are used to speed up queries. Run this query to create an index on the processvaluetable table:

CREATE INDEX ON processvaluetable(valuename, asset_id) WITH (timescaledb.transaction_per_chunk);

Rollback factoryinsight

If you have already created an index, you can roll back the factoryinsight deployment to version 0.9.4. This way it will use a less optimized but faster query, significantly reducing the execution time.

  1. From the Deployments section in UMHLens / OpenLens, click on united-manufacturing-hub-factoryinsight-deployment to open the details page.
  2. Click the Edit button to open the deployment’s configuration.

    Lens deployment Edit
    Lens deployment Edit

  3. Scroll down to the spec.containers section and change the image value to unitedmanufacturinghub/factoryinsight:0.9.4.
  4. Click Save.

What’s next

4.3.13 - Optimize Database Datatypes

This page describes how to change the datatype of some columns in the database in order to optimize the performance.

In version 0.9.5 and earlier, some tables in the database were created with the varchar data type, which is not optimal for storing large amounts of data. In version 0.9.6, those columns use the text data type instead. This migration optimizes an existing database by changing the affected columns from varchar to text.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by using the Management Console.

Open the database shell

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Alter the tables

Execute the following SQL statements:

ALTER TABLE assettable ALTER COLUMN assetid TYPE text;
ALTER TABLE assettable ALTER COLUMN location TYPE text;
ALTER TABLE assettable ALTER COLUMN customer TYPE text;
ALTER TABLE producttable ALTER COLUMN product_name TYPE text;
ALTER TABLE ordertable ALTER COLUMN order_name TYPE text;
ALTER TABLE configurationtable ALTER COLUMN customer TYPE text;
ALTER TABLE componenttable ALTER COLUMN componentname TYPE text;

Then confirm the changes by using the following SQL statements:

SELECT COLUMN_NAME, DATA_TYPE FROM information_schema.columns WHERE TABLE_NAME = 'assettable';
SELECT COLUMN_NAME, DATA_TYPE FROM information_schema.columns WHERE TABLE_NAME = 'producttable';
SELECT COLUMN_NAME, DATA_TYPE FROM information_schema.columns WHERE TABLE_NAME = 'ordertable';
SELECT COLUMN_NAME, DATA_TYPE FROM information_schema.columns WHERE TABLE_NAME = 'configurationtable';
SELECT COLUMN_NAME, DATA_TYPE FROM information_schema.columns WHERE TABLE_NAME = 'componenttable';

4.4 - Backup & Recovery

This section contains information about how to backup and recover various components of the United Manufacturing Hub.

4.4.1 - Backup and Restore the United Manufacturing Hub

This page describes how to backup and restore the entire United Manufacturing Hub.

This page describes how to back up the following:

  • All Node-RED flows
  • All Grafana dashboards
  • The Helm values used for installing the united-manufacturing-hub release
  • All the contents of the United Manufacturing Hub database

It does not back up:

  • Additional databases other than the United Manufacturing Hub default database
  • TimescaleDB continuous aggregates: Follow the official documentation to learn how.
  • TimescaleDB policies: Follow the official documentation to learn how.
  • Everything else not included in the previous list

This procedure only works on Windows.

Before you begin

Download the backup scripts and extract the content in a folder of your choice.

For this task, you need to have PostgreSQL installed on your machine.

You also need to have enough space on your machine to store the backup. To check the size of the database, follow the steps below:

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    
  5. Run the following command to get the size of the database:

    SELECT pg_size_pretty(pg_database_size('factoryinsight'));
    

Backup

Generate Grafana API Key

Create a Grafana API Token for an admin user by following these steps:

  1. Open the Grafana UI in your browser and log in with an admin user.
  2. Click on the Settings icon in the left sidebar and select API Keys.
  3. Give the API key a name and change its role to Admin.
  4. Optionally set an expiration date.
  5. Click Add.
  6. Copy the generated API key and save it for later.

Stop workloads

To prevent data inconsistencies, you need to temporarily stop the MQTT and Kafka Brokers.

  1. In UMHLens / OpenLens go to the Workloads > StatefulSets tab.
  2. Select the united-manufacturing-hub-kafka StatefulSet.
  3. Click the Scale button to select the number of replicas for the StatefulSet.

    Lens StatefulSet Scale
    Lens StatefulSet Scale

  4. Set the number of replicas to 0 and click Scale.
  5. Repeat the process for the united-manufacturing-hub-hivemqce StatefulSet.

Backup using the script

The backup script is located inside the folder you downloaded earlier.

  1. Open a terminal and navigate inside the folder.

    cd <FOLDER_PATH>
    
  2. Run the script:

    .\backup.ps1 -IP <IP_OF_THE_SERVER> -GrafanaToken <GRAFANA_API_KEY> -KubeconfigPath <PATH_TO_KUBECONFIG>
    

    You can find a list of all available parameters down below.

    If OutputPath is not set, the backup will be stored in the current folder.

This script might take a while to finish, depending on the size of your database and your connection speed.

If the connection is interrupted, there is currently no option to resume the process, therefore you will need to start again.

Here is a list of all available parameters:

Available parameters

  • GrafanaToken: Grafana API key. Required.
  • IP: IP of the cluster to backup. Required.
  • KubeconfigPath: Path to the kubeconfig file. Required.
  • DatabaseDatabase: Name of the database to backup. Default: factoryinsight.
  • DatabasePassword: Password of the database user. Default: changeme.
  • DatabasePort: Port of the database. Default: 5432.
  • DatabaseUser: Database user. Default: factoryinsight.
  • DaysPerJob: Number of days worth of data to backup in each parallel job. Default: 31.
  • EnableGpgEncryption: Set to true if you want to encrypt the backup. Default: false.
  • EnableGpgSigning: Set to true if you want to sign the backup. Default: false.
  • GpgEncryptionKeyId: ID of the GPG key used for encryption. Optional.
  • GpgSigningKeyId: ID of the GPG key used for signing. Optional.
  • GrafanaPort: External port of the Grafana service. Default: 8080.
  • OutputPath: Path to the folder where the backup will be stored. Default: current folder.
  • ParallelJobs: Number of parallel backup jobs to run. Default: 4.
  • SkipDiskSpaceCheck: Skip checking available disk space. Default: false.
  • SkipGpgQuestions: Set to true if you want to sign or encrypt the backup. Default: false.

Restore

Each component of the United Manufacturing Hub can be restored separately, in order to allow for more flexibility and to reduce the damage in case of a failure.

Cluster configuration

To restore the Kubernetes cluster, execute the .\restore-helm.ps1 script with the following parameters:

.\restore-helm.ps1 -KubeconfigPath <PATH_TO_KUBECONFIG> -BackupPath <PATH_TO_BACKUP_FOLDER>

Verify that the cluster is up and running by opening UMHLens / OpenLens and checking if the workloads are running.

Grafana dashboards

To restore the Grafana dashboards, you first need to create a Grafana API Key for an admin user in the new cluster by following these steps:

  1. Open the Grafana UI in your browser and log in with an admin user.
  2. Click on the Settings icon in the left sidebar and select API Keys.
  3. Give the API key a name and change its role to Admin.
  4. Optionally set an expiration date.
  5. Click Add.
  6. Copy the generated API key and save it for later.

Then, on your local machine, execute the .\restore-grafana.ps1 script with the following parameters:

.\restore-grafana.ps1 -FullUrl http://<IP_OF_THE_SERVER>:8080 -Token <GRAFANA_API_KEY> -BackupPath <PATH_TO_BACKUP_FOLDER>

Restore Node-RED flows

To restore the Node-RED flows, execute the .\restore-nodered.ps1 script with the following parameters:

.\restore-nodered.ps1 -KubeconfigPath <PATH_TO_KUBECONFIG> -BackupPath <PATH_TO_BACKUP_FOLDER>

Restore the database

To restore the database, execute the .\restore-timescale.ps1 script with the following parameters:

.\restore-timescale.ps1 -Ip <IP_OF_THE_SERVER> -BackupPath <PATH_TO_BACKUP_FOLDER> -PatroniSuperUserPassword <DATABASE_PASSWORD>

What’s next

4.4.2 - Backup and Restore Database

This page describes how to backup and restore the database.

Before you begin

For this task, you need to have PostgreSQL installed on your machine.

You also need to have enough space on your machine to store the backup. To check the size of the database, follow the steps below:

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    
  5. Run the following command to get the size of the database:

    SELECT pg_size_pretty(pg_database_size('factoryinsight'));
    

Backing up the database

Follow these steps to create a backup of the factoryinsight database on your machine:

  1. Open a terminal, and using the cd command, navigate to the folder where you want to store the backup. For example:

    On Windows:

       cd C:\Users\user\backups
       

    On macOS:

       cd /Users/user/backups
       

    On Linux:

       cd /home/user/backups
       

    If the folder does not exist, you can create it using the mkdir command or your file manager.

  2. Run the following command:

    pg_dump -h <REMOTE_HOST> -p 5432 -U factoryinsight -Fc -f <BACKUP_NAME>.bak factoryinsight
    
    • <REMOTE_HOST> is the IP of the server where the database is running. Use localhost if you installed the United Manufacturing Hub using k3d.
    • <BACKUP_NAME> is the name of the backup file.

Grafana database

If you want to backup the Grafana database, you can follow the same steps as above, but you need to replace any occurrence of factoryinsight with grafana.

Additionally, you also need to write down the credentials in the grafana-secret Secret, as they will be needed to access the dashboard after restoring the database.

Restoring the database

This section is untested. Please report any issues you encounter.

For this section, we assume that you are restoring the data to a fresh United Manufacturing Hub installation with an empty database.

Copy the backup file to the database pod

  1. Open UMHLens / OpenLens.

  2. Launch a new terminal session by clicking on the + button in the bottom-left corner of the window.

  3. Run the following command to copy the backup file to the database pod:

    kubectl cp /path/to/local/backup.bak united-manufacturing-hub-timescaledb-0:/tmp/backup.bak
    

    Replace /path/to/local/backup.bak with the path to the backup file on your machine.

This step could take a while depending on the size of the backup file.

Temporarily disable kafkatopostgresql

  1. Navigate to Workloads > Deployments.
  2. Select the united-manufacturing-hub-kafkatopostgresql Deployment.
  3. Click the Scale button to select the number of replicas for the Deployment.

    Lens Deployment Scale
    Lens Deployment Scale

  4. Scale the number of replicas to 0.

Open a shell in the database pod

  1. From the Pod section in UMHLens / OpenLens, click on united-manufacturing-hub-timescaledb-0 to open the details page.

  2. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  3. Enter the postgres shell:

    psql
    
  4. Connect to the database:

    \c factoryinsight
    

Restore the database

  1. Switch to the postgres maintenance database (you cannot drop a database while connected to it), then drop the existing database:

    \c postgres
    DROP DATABASE factoryinsight;
    
  2. Create a new database:

    CREATE DATABASE factoryinsight;
    \c factoryinsight
    CREATE EXTENSION IF NOT EXISTS timescaledb;
    
  3. Put the database in maintenance mode:

    SELECT timescaledb_pre_restore();
    
  4. Restore the database:

    \! pg_restore -Fc -d factoryinsight /tmp/backup.bak
    
  5. Take the database out of maintenance mode:

    SELECT timescaledb_post_restore();
    

Enable kafkatopostgresql

  1. Navigate to Workloads > Deployments.
  2. Select the united-manufacturing-hub-kafkatopostgresql Deployment.
  3. Click the Scale button to select the number of replicas for the Deployment.

    Lens Deployment Scale
    Lens Deployment Scale

  4. Scale the number of replicas to the original value, usually 1.

What’s next

4.4.3 - Import and Export Node-RED Flows

This page describes how to import and export Node-RED flows.

Export Node-RED Flows

To export Node-RED flows, please follow the steps below:

  1. Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your browser. Replace <CLUSTER-IP> with the IP address of your cluster, or localhost if you are running the cluster locally.

  2. From the top-right menu, select Export.

  3. From the Export dialog, select which nodes or flows you want to export.

  4. Click Download to download the exported flows, or Copy to clipboard to copy the exported flows to the clipboard.

    ExportWindow
    ExportWindow

The credentials of the connector nodes are not exported. You will need to re-enter them after importing the flows.

Import Node-RED Flows

To import Node-RED flows, please follow the steps below:

  1. Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your browser. Replace <CLUSTER-IP> with the IP address of your cluster, or localhost if you are running the cluster locally.

  2. From the top-right menu, select Import.

  3. From the Import dialog, select the file containing the exported flows, or paste the exported flows from the clipboard.

  4. Click Import to import the flows.

    ImportWindow
    ImportWindow

4.5 - Security

This section contains information about how to secure the United Manufacturing Hub.

4.5.1 - Change VerneMQ ACL Configuration

This page describes how to change the ACL configuration to allow more users to publish to the MQTT broker

Change VerneMQ ACL configuration

  1. Open UMHLens / OpenLens

  2. Navigate to Helm > Releases.

  3. Select the united-manufacturing-hub release and click Upgrade.

  4. Find the _000_commonConfig.infrastructure.mqtt section.

  5. Update the AclConfig value to allow unrestricted access, for example:

    AclConfig: |
      pattern # allow all  
    
  6. Click Upgrade to apply the changes.

What’s next

4.5.2 - Enable RBAC for the MQTT Broker

This page describes how to enable Role-Based Access Control (RBAC) for the MQTT broker.

Enable RBAC

  1. Open UMHLens / OpenLens
  2. Navigate to Helm > Releases.
  3. Select the united-manufacturing-hub release and click Upgrade.
  4. Find the mqtt_broker section.
  5. Locate the rbacEnabled parameter and change its value from false to true.
  6. Click Upgrade.

Now all MQTT connections require password authentication with the following defaults:

  • Username: node-red
  • Password: INSECURE_INSECURE_INSECURE

Change default credentials

  1. Open UMHLens / OpenLens

  2. Navigate to Workloads > Pods.

  3. Select the united-manufacturing-hub-hivemqce-0 Pod.

  4. Click the Pod Shell button to open a shell in the container.

    Lens Pod Shell
    Lens Pod Shell

  5. Navigate to the installation directory of the RBAC extension.

    cd extensions/hivemq-file-rbac-extension/
    
  6. Generate a password hash with this command.

    java -jar hivemq-file-rbac-extension-<version>.jar -p <password>
    
    • Replace <version> with the version of the HiveMQ CE extension. If you are not sure which version is installed, you can press Tab after typing java -jar hivemq-file-rbac-extension- to autocomplete the version.
    • Replace <password> with your desired password. Do not use any whitespaces.
  7. Copy the output of the command. It should look similar to this:

    $2a$10$Q8ZQ8ZQ8ZQ8ZQ8ZQ8ZQ8Zu
    
  8. Navigate to Config > ConfigMaps.

  9. Select the united-manufacturing-hub-hivemqce-extension ConfigMap.

  10. Click the Edit button to open the ConfigMap editor.

  11. In the data.credentials.xml section, replace the strings between the <password> tags with the password hash generated in step 7.

    You can use a different password for each different microservice. Just remember that you will need to update the configuration in each one to use the new password.
  12. Click Save to apply the changes.

  13. Go back to Workloads > Pods and select the united-manufacturing-hub-hivemqce-0 Pod.

  14. Click the Delete button to delete the Pod.

    Lens Pod Delete
    Lens Pod Delete

What’s next

4.5.3 - Setup PKI for the MQTT Broker

This page describes how to set up the Public Key Infrastructure (PKI) for the MQTT broker.

If you want to use MQTT over TLS (MQTTS) or Secure Web Socket (WSS) you need to set up a Public Key Infrastructure (PKI).

Read the blog article about secure communication in IoT to learn more about encryption and certificates.

Structure overview

The Public Key Infrastructure for HiveMQ consists of two Java Key Stores (JKS):

  • Keystore: The Keystore contains the HiveMQ certificate and private keys. This store must be confidential, since anyone with access to it could generate valid client certificates and read or send messages in your MQTT infrastructure.
  • Truststore: The Truststore contains all the clients’ public certificates. HiveMQ uses it to verify the authenticity of the connections.

Before you begin

You need to have the following tools installed:

  • OpenSSL. If you are using Windows, you can install it with Chocolatey.
  • Java

Create a Keystore

Open a terminal and run the following command:

keytool -genkey -keyalg RSA -alias hivemq -keystore hivemq.jks -storepass <password> -validity <days> -keysize 4096 -dname "CN=united-manufacturing-hub-mqtt" -ext "SAN=IP:127.0.0.1"

Replace the following placeholders:

  • <password>: The password for the keystore. You can use any password you want.
  • <days>: The number of days the certificate should be valid.

The command runs for a few minutes and generates a file named hivemq.jks in the current directory, which contains the HiveMQ certificate and private key.

If you want to explore the contents of the keystore, you can use Keystore Explorer.

Generate client certificates

Open a terminal and create a directory for the client certificates:

mkdir pki

Follow these steps for each client you want to generate a certificate for.

  1. Create a new key pair:

    openssl req -new -x509 -newkey rsa:4096 -keyout "pki/<servicename>-key.pem" -out "pki/<servicename>-cert.pem" -nodes -days <days> -subj "/CN=<servicename>"
    
  2. Convert the certificate to the correct format:

    openssl x509 -outform der -in "pki/<servicename>-cert.pem" -out "pki/<servicename>.crt"
    
  3. Import the certificate into the Truststore:

    keytool -import -file "pki/<servicename>.crt" -alias "<servicename>" -keystore hivemq-trust-store.jks -storepass <password>
    

Replace the following placeholders:

  • <servicename> with the name of the client. Use the service name from the Network > Services tab in UMHLens / OpenLens.
  • <days> with the number of days the certificate should be valid.
  • <password> with the password for the Truststore. You can use any password you want.
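
To double-check that all client certificates were imported, keytool can list the contents of the Truststore; a minimal sketch:

keytool -list -keystore hivemq-trust-store.jks -storepass <password>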

Import the PKI into the United Manufacturing Hub

First you need to encode in base64 the Keystore, the Truststore and all the PEM files. Use one of the following scripts to encode everything automatically:

On Windows (PowerShell):

Get-ChildItem .\ -Recurse -Include *.jks,*.pem | ForEach-Object {
    # Read raw bytes: .jks keystores are binary and would be corrupted
    # by a text (UTF8) round-trip
    $fileContentInBytes = [System.IO.File]::ReadAllBytes($_.FullName)
    $fileContentEncoded = [System.Convert]::ToBase64String($fileContentInBytes)
    $fileContentEncoded | Set-Content -NoNewline "$($_.FullName).b64"
    Write-Host "$($_.FullName).b64 File Encoded Successfully!"
}

On Linux or macOS:

find ./ -regex '.*\.jks\|.*\.pem' -exec openssl base64 -A -in {} -out {}.b64 \;

You could also do it manually with the following command:

openssl base64 -A -in <filename> -out <filename>.b64

Now you can import the PKI into the United Manufacturing Hub. To do so:

  1. Open UMHLens / OpenLens.
  2. Navigate to Helm > Releases.
  3. Select the united-manufacturing-hub release.
  4. Click the Upgrade button.
  5. Find the _000_commonConfig.infrastructure.mqtt.tls section.
  6. Update the value of the keystoreBase64 field with the content of the hivemq.jks.b64 file and the value of the keystorePassword field with the password you used for the keystore.
  7. Update the value of the truststoreBase64 field with the content of the hivemq-trust-store.jks.b64 file and the value of the truststorePassword field with the password you used for the truststore.
  8. Update the value of the <servicename>.cert field with the content of the <servicename>-cert.pem.b64 file and the value of the <servicename>.key field with the content of the <servicename>-key.pem.b64 file.
  9. Click the Upgrade button to apply the changes.

What’s next

5 - What's new

This section contains information about the new features and changes in the United Manufacturing Hub.

For release highlights, deprecations, and breaking changes in the United Manufacturing Hub, refer to these “What’s new” pages for each version.

For a complete list of every change, with links to pull requests and related issues, refer to the release notes.

United Manufacturing Hub 0.9

5.1 - What's New in Version 0.9.14

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.14.

Welcome to United Manufacturing Hub version 0.9.14! In this release we changed the Kafka broker from Apache Kafka to RedPanda, which is a Kafka-compatible event streaming platform. We also started migrating to a different Kafka library in our microservices, which will allow full ARM support in the future. Finally, we tweaked the overall resource usage of the United Manufacturing Hub to improve performance and efficiency, along with some bug fixes.

For a complete list of changes, refer to the release notes.

RedPanda

RedPanda is a Kafka-compatible event streaming platform. It is built with modern hardware in mind and utilizes multi-core CPUs efficiently, which can result in better performance compared to Kafka. RedPanda also offers lower latency and higher throughput, making it a better fit for real-time use cases in IIoT applications. Additionally, RedPanda has a simpler setup and management process compared to Kafka, which can save time and resources for development teams. Finally, RedPanda is fully compatible with Kafka’s API, allowing for a seamless transition for existing Kafka users.

Overall, Redpanda can provide improved performance and efficiency for IIoT applications that require real-time data processing and management with a lower setup and management cost.

Sarama Kafka Library

We started migrating our microservices to use the Sarama Kafka library. This library is written in Go and is fully compatible with RedPanda. This change will allow us to support ARM-based devices in the future, which will be useful for edge computing use cases. An added bonus is that Sarama is faster and requires less memory than the previous library.

For now we only migrated the following microservices:

  • barcodereader
  • kafka-init (used as an init container for components that communicate with Kafka)
  • mqtt-kafka-bridge

Resources tweaking

With this release we tweaked the resource requests of each default component of the United Manufacturing Hub to respect the minimum requirements of 4 cores and 8GB of RAM. This allowed us to increase the memory allocated for the MQTT broker, which solves the common Out Of Memory issue that caused the broker to restart.

Be sure to follow the upgrade guide to adjust your resources accordingly.

The following table shows the new resource requests and limits when deploying the United Manufacturing Hub with the default configuration or with all the components enabled. CPU values are expressed in millicores and memory values are expressed in mebibytes.

Resource                   Requests        Limits
CPU (default values)       1080m (27%)     1890m (47%)
Memory (default values)    1650Mi (21%)    2770Mi (35%)
CPU (all components)       2002m (50%)     2730m (68%)
Memory (all components)    2873Mi (36%)    3578Mi (45%)

The requested resources are the ones immediately allocated to the container when it starts, and the limits are the maximum amount of resources that the container can (but is not forced to) use. For more information about Kubernetes resources, refer to the official documentation.
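As a generic illustration of this mechanism (the numbers below are placeholders, not the values used by the chart), requests and limits are declared per container like this:

resources:
  requests:
    cpu: 100m       # reserved for the container at scheduling time
    memory: 128Mi
  limits:
    cpu: 500m       # the container may use up to, but never more than, this
    memory: 512Mi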

Container registry

We moved our container registry from Docker Hub to GitHub Container Registry. This change won’t affect the way you deploy the United Manufacturing Hub, but it will allow us to better manage our container images and provide a better experience for our developers. For the time being, we will continue to publish our images to Docker Hub, but we will eventually deprecate the old registry.

Others

  • Implemented a new test build to detect race conditions in the codebase. This will help us improve the stability of the United Manufacturing Hub.
  • All our custom images now run as non-root by default, except for the ones that require root privileges.
  • The custom microservices now allow changing the type of Service used to expose them by setting the serviceType field (see the sketch after this list).
  • Added an SQL trigger function that deletes duplicate records from the statetable table after insertion.
  • Enhanced the validation of environment variables in the codebase.
  • Added the ability to set the aggregation interval when calculating the throughput of an asset.
  • Various dependencies have been updated to their latest versions.
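As an example of the serviceType setting mentioned above, a values override could look like the following sketch, where <servicename> stands for the custom microservice you want to reconfigure:

<servicename>:
  serviceType: LoadBalancer   # e.g. ClusterIP, NodePort or LoadBalancer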

5.2 - What's New in Version 0.9.13

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.13.

Welcome to United Manufacturing Hub version 0.9.13! This is a minor release that only updates the new metrics feature.

For a complete list of changes, refer to the release notes.

5.3 - What's New in Version 0.9.12

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.12.

Welcome to United Manufacturing Hub version 0.9.12! Read on to learn about the new features of the UMH Datasource V2 plugin for Grafana, Redis running in standalone mode, and more.

For a complete list of changes, refer to the release notes.

Grafana

New Grafana version

Grafana has been upgraded to version 9.4.3. This introduces new search and navigation features, a redesigned details section of the logs, and a new data source connection page.

Head over to the Grafana release notes to learn more about the new features.

New Node-RED version

We have upgraded Node-RED to version 3.0.2. Check out the Node-RED release notes for more information.

UMH Datasource V2 plugin

The latest update to the datasource incorporates type-safe JSON parsing, significantly enhancing the performance and dependability of the plugin. Parsing now strictly adheres to predefined data types, eliminating the unexpected errors and data corruption that can occur with loosely typed JSON parsing.

Redis in standalone mode

Redis, the service used for caching, is now deployed in standalone mode. This change introduces these benefits:

  • Simplicity: Running Redis in standalone mode is simpler than running a master-replica topology with Sentinel. There is only one Redis instance to manage instead of multiple instances plus the Sentinel process, which reduces operational complexity.
  • Lower overhead: In a master-replica topology there is communication overhead between the master and the replicas, and Sentinel adds further overhead for monitoring and failover management. Standalone mode has none of this.
  • Better performance: Without that overhead, standalone mode provides faster response times and can handle more requests per second than a master-replica topology with Sentinel.

That being said, it’s important to note that a master-replica topology with Sentinel provides higher availability and failover capabilities than standalone mode.
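For reference, the topology is typically selected through the Redis Helm chart’s architecture value; assuming the subchart is exposed under a redis key (check your values.yaml to confirm, as this key is an assumption), the standalone setting would look like:

redis:
  architecture: standalone   # a replicated setup with Sentinel would use "replication"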

All basic services are now exposed by a LoadBalancer Service

The MQTT Broker, Kafka Broker, and Kafka Console are now exposed by a LoadBalancer Service, along with the Database, Grafana and Node-RED. This change makes it easier to access these services from outside the cluster, as they are now accessible via the IP address of the cluster.

When installing the United Manufacturing Hub locally, the cluster ports are automatically mapped to the host ports. This means that you can access the services from your browser by using localhost and the port number.

Read more about connecting to the services from outside the cluster in the related documentation.

Metrics

We introduced an optional microservice that can be used to collect metrics about the system, like OS, CPU, memory, hostname and average load. These metrics are then sent to our server for analysis, and are completely anonymous. This microservice is enabled by default, but can be disabled by setting the _000_commonConfig.metrics.enabled value to false in the values.yaml file.
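For example, to opt out of metrics collection:

_000_commonConfig:
  metrics:
    enabled: false   # disables the anonymous metrics microservice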

The following is an example of a collected metric:
{
   "OS":"linux",
   "Arch":"amd64",
   "Memory":{
      "total":16435666944,
      "available":11555106816,
      "used":4404510720,
      "usedPercent":26.798490958761544,
      "free":574394368,
      "active":3613691904,
      "inactive":10843209728,
      "wired":0,
      "laundry":0,
      "buffers":588361728,
      "cached":10868400128,
      "writeback":0,
      "dirty":122880,
      "writebacktmp":0,
      "shared":155168768,
      "slab":978030592,
      "sreclaimable":766824448,
      "sunreclaim":211206144,
      "pagetables":32157696,
      "swapcached":17887232,
      "commitlimit":12512800768,
      "committedas":16789483520,
      "hightotal":0,
      "highfree":0,
      "lowtotal":0,
      "lowfree":0,
      "swaptotal":4294967296,
      "swapfree":4165865472,
      "mapped":1214676992,
      "vmalloctotal":35184372087808,
      "vmallocused":60112896,
      "vmallocchunk":0,
      "hugepagestotal":0,
      "hugepagesfree":0,
      "hugepagesize":2097152
   },
   "CPUInfo":[
      {
         "cpu":0,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"0",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      ... (entries for cpu 1 through cpu 6 omitted; they are identical to cpu 0 except for the "cpu" and "coreId" values) ...
      {
         "cpu":7,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"3",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",