
The OSS blueprint for the Industrial IoT

The United Manufacturing Hub is an Open-Source Helm Chart for Kubernetes, which combines state-of-the-art IT / OT tools & technologies and brings them into the hands of the engineer.

Bringing the world's best IT and OT tools into the hands of the engineer

Why start from scratch when you can leverage a proven open-source blueprint? Kafka, MQTT, Node-RED, TimescaleDB and Grafana at the press of a button - tailored for manufacturing and ready to go.



What can you do with it?


Everything You Need to Generate Value on the Shop Floor

Prevent Vendor Lock-In and Customize to Your Needs

  • The only requirement is Kubernetes, which is available in various flavors, including k3s, bare-metal k8s, and Kubernetes-as-a-service offerings like AWS EKS or Azure AKS
  • Swap components with other options at any time. Not a fan of Node-RED? Replace it with Kepware. Prefer a different MQTT broker? Use it!
  • Leverage existing systems and add only what you need.

Get Started Immediately

Connect with Like-Minded People

  • Tap into our community of experts and ask anything. No need to depend on external consultants or system integrators.
  • Leverage community content, from tutorials and Node-RED flows to Grafana dashboards. Although not all content is enterprise-supported, starting with a working solution saves you time and resources.
  • Get honest answers in a world where many companies spend millions on advertising.

How does it work?

The only requirement is a Kubernetes cluster (and we'll even help you with that!). Simply install the United Manufacturing Hub Helm Chart on that cluster and configure it.

The United Manufacturing Hub will then generate all the required files for Kubernetes, including auto-generated secrets, various microservices like bridges between MQTT / Kafka, data models and configurations. From there on, Kubernetes takes care of all the container management.
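For reference, here is a minimal sketch of what this looks like on the command line if you manage the Helm Chart yourself. The repository URL is an assumption - check the installation guide for the current one; the guided installation described later does all of this for you.

# Add the UMH Helm repository and install the chart into its own namespace.
# Helm then renders all required Kubernetes manifests, including the auto-generated
# secrets and microservices such as the MQTT / Kafka bridges.
helm repo add united-manufacturing-hub https://repo.umh.app
helm repo update
helm install united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub \
  --namespace united-manufacturing-hub --create-namespace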



FAQ

Yes - the United Manufacturing Hub specifically targets people and companies who do not have the budget and/or knowledge to develop everything from scratch on their own.

With our extensive documentation, guides and knowledge sections you can learn everything that you need.

The United Manufacturing Hub abstracts these tools and technologies so that you can leverage all advantages, but still focus on what really matters: digitizing your production.

With our commercial Management Console you can manage your entire IT / OT infrastructure and work with Grafana / Node-RED without the need to ever touch or understand Kubernetes, Docker, Firewalls, Networking or similar.

Additionally, you can get support licenses providing unlimited support during development and maintenance of the system. Take a look at our website if you want to get more information on this.

Because very often these solutions do not target the actual pains of an engineer: implementation and maintenance. Companies then struggle to roll out IIoT as the projects take much longer and cost far more than originally proposed.

In the United Manufacturing Hub, implementation and maintenance of the system are the first priority. We've had these pains too often ourselves and therefore incorporated and developed tools & technologies to avoid them.

For example, with sensorconnect we can retrofit production machines from which it is currently impossible to extract data. With our modular architecture we can fit the security needs of all IT departments - from integration into a demilitarized zone to on-premise and private cloud. And with Apache Kafka we solve the pain of corrupted or missing messages when scaling out the system.

How to proceed?

1 - Get Started!

You want to get started right away? Go ahead and jump into the action!

Great to see you’re ready to start! This guide has 5 steps: Installation, Managing the System, Data Acquisition & Manipulation, Data Visualization, and Moving to Production.

Contact Us!

Do you still have questions on how to get started? Message us on our Discord Server.

1.1 - 1. Installation

Install the United Manufacturing Hub together with all required tools on a Linux Operating System.

The United Manufacturing Hub (UMH) can be deployed on various external devices, including edge devices and virtual machines (VMs). For initial installations or for development purposes, it is recommended to use a VM.

Software Requirements

The UMH installation requires one of the following operating systems on your server:

  • Flatcar version current-2023 or higher (3510.3.1). It is recommended that you have full control over the operating system. To install Flatcar on your server, follow this guide.
  • Red Hat Enterprise Linux (RHEL) 9.0 and higher. Recommended when your large enterprise only allows you to choose from a small set of potential operating systems.
  • Community Supported: Ubuntu 22.04.4 LTS. This approach is useful when you are, for example, trying to install the UMH on a cloud instance like AWS EC2 and struggle to install Flatcar or RHEL there.

While UMH is optimized for RHEL and Flatcar, it can theoretically run on other Linux distributions. However, support is not guaranteed. For Windows, you could try running one of the operating systems described above in a VM (e.g., Hyper-V). If you experiment with other systems, we encourage sharing your experiences on our Discord channel.

Hardware Requirements

  • CPU: Minimum 4 cores
  • Memory: 16 GB RAM
  • Disk Space: 32 GB available

Note: Systems at the edge of these requirements may experience longer installation times. Close other programs during installation for optimal performance.

Network Requirements

Before proceeding with the installation, ensure your system meets the necessary network requirements.

To learn about configuring firewall and network rules for your UMH instances, please refer to our dedicated Firewall Rules page.
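As a rough example of checking these rules later on: once the instance is installed, you can verify from another machine on the same network that the service ports used in this guide (Node-RED 1880, Grafana 8080, RedPanda Console 8090, MQTT 1883) are not blocked by a firewall. This sketch assumes a netcat binary is available on your workstation:

# Check that the UMH service ports are reachable from your workstation
for port in 1880 8080 8090 1883; do
  nc -vz <instance-ip-address> "$port"
done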

Installation Steps

  1. Open the Management Console in the browser.

  2. If you already have an account, enter your information and click on SIGN IN.

    Sign in page

  3. If you are not a member, continue with sign up. Register your information and click on SIGN UP.

    Sign up page

  4. Click on +Add Instance button.

    Add Instance button

  5. Select Install UMH Only.

    Install UMH Only button

  6. Enter your instance name and then click on Create my command.

    Instance name input

  7. You should now see a create command. Copy it and paste it into your server’s terminal (via SSH).

    Create command button

  8. The installation script runs a number of checks and setup steps. For example, it checks your operating system, the installation of required tools, and the internet connection. After the check phase, kubectl and Helm will be installed. The script will show you which actions will be performed on your system in the next step. If you want to proceed, type Y and press the Enter key.

    Installation checks

  9. In this step, k3s will be installed. Then, the UMH Helm Chart is installed in Kubernetes. After that, the Management Companion is installed into Kubernetes. It can take a while until everything is set up.

    Installation logs

  10. After successful installation, you should be able to see messages like in the picture below.

    Installation success message

  11. Go back to the Management Console and click on Let’s Go!

    Lets Go button

  12. Now, you should be able to see your instance on the dashboard.

    Instance Overview

Do you need more technical background information?

Here are some links to get you started:

What’s next?

Once you have installed the UMH, continue with the next page to learn how to manage the system, for example how to access the microservices.

1.2 - 2. Managing the System

Learn how to manage your UMH instance with the Management Console.

In this chapter, you will learn how to monitor, manage and configure your UMH instance with the Management Console.

At this stage, you should have already installed the UMH on your device. If you have not done so, please follow the steps in the Installation chapter first.

A Few Words About the Communication

Now that you have connected a UMH instance to the Management Console, you might be curious about how the Management Console communicates with the instance.

The Management Companion, serving as an agent within each UMH instance, provides a secure link to the Management Console. It enables comprehensive and secure monitoring and management of the UMH, ensuring system health and streamlined configuration, all while acting as a vigilant watchdog over system components and connected devices.

The diagram below illustrates the communication flow between the Management Console and the instance:

Communication between the Management Console and the instance

For more information, visit the architecture documentation.

Overview of Your Instances

Instance overview

On the left side of the Management Console, you can view the list of your instances. If you have just installed the UMH, you should see only one instance in the list.

The status of your instance is indicated by color: green means everything is working properly, while yellow indicates that there may be a connection issue.

The Messages Received statistic shows the number of messages received from the instance since you opened the Management Console. It is usually a good indicator of the health of the connection to the Companion. If the number does not increase for 10 seconds, the instance is considered disconnected.

Monitoring the Instance’s Status

From the instance dashboard, in the overview tab, you can view the status of your instance. There are multiple interfaces that display the status of each component of the system.

Modules

A Module refers to a group of workloads in the United Manufacturing Hub responsible for specific tasks. For example, the Historian & Analytics module represents the microservices, storage, and connections that are responsible for storing and analyzing data.

In the Modules tab, you can view the status of each module. If a module is not healthy, it means that one or more of its components are not functioning properly.

System

In the System tab, you can view the resource usage of your device, as well as some system information.

If there is an overload on the device, you can view it here. An overloaded device is unable to handle the workload, and you should consider upgrading the device.

Connection Management

In the Connection Management tab, you can view a brief overview of the data infrastructure, which includes the rate of messages going through the message broker and the database. Any unhealthy data sources or connections are also listed here.

Kubernetes

In the Kubernetes tab, you can check for any error events in the Kubernetes cluster. Any errors suggest that the cluster is not operating correctly.

Additionally, this tab displays the version of the United Manufacturing Hub and the Management Companion currently installed on your device.

Manage the Instance

Before you begin, ensure that you are connected to the same network as the instance for accessing the various services and features discussed below.

While a graphical user interface for managing the instance is not yet available, you can still manage it via the command line.

Access the Command Line

Access your device’s shell either directly or via SSH. Note: Root user access is required for the following commands.

In UMH’s current version, add --kubeconfig /etc/rancher/k3s/k3s.yaml to each kubectl command. Root privileges are needed to access it. The installation path of kubectl might vary (e.g., /usr/local/bin/kubectl on RHEL and other Linux systems, /opt/bin/kubectl on Flatcar). These paths may not be in the root user’s PATH, so the commands below might appear complex.

Interact with the Instance

First, set this environment variable:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

You can bypass this by adding --kubeconfig /etc/rancher/k3s/k3s.yaml to your commands. All instructions in this chapter include this flag.

Then, to get a list of pods, run:

sudo $(which kubectl) get pods -n united-manufacturing-hub  --kubeconfig /etc/rancher/k3s/k3s.yaml

For a comprehensive list of commands, refer to the Kubernetes documentation.

Always specify the namespace when running a command by adding -n united-manufacturing-hub.
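For example, to see which services are exposed and on which ports, the following command follows the same pattern:

# List all services in the UMH namespace together with their exposed ports
sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml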

Access Node-RED

Node-RED is used in UMH for creating data flows. Access it via:

http://<instance-ip-address>:1880/nodered

Access Grafana

UMH uses Grafana for dashboard displays. Get your credentials:

sudo $(which kubectl) get secret grafana-secret --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -o jsonpath="{.data.adminuser}" | base64 --decode; echo
sudo $(which kubectl) get secret grafana-secret --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -o jsonpath="{.data.adminpassword}" | base64 --decode; echo

Then, access Grafana here:

http://<instance-ip-address>:8080

Use the retrieved credentials to log in.

Access the RedPanda Console

Manage the Kafka broker via the RedPanda Console:

http://<instance-ip-address>:8090

Interact with the Database

UMH uses TimescaleDB. Open a psql session:

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Run SQL queries as needed. For an overview of the database schema, refer to the Data Model documentation.
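You can also run one-off commands instead of an interactive session. As a minimal sketch following the same pattern as above, \l lists all databases and \dt lists the tables of the database you are connected to:

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres -c "\l"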

Connect MQTT to MQTT Explorer

Use MQTT Explorer for a structured overview of MQTT topics. Connect using the instance’s IP and port 1883.
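If you prefer the command line over a graphical client, a minimal sketch using the mosquitto_sub CLI achieves the same thing, assuming the Mosquitto clients are installed on your workstation:

# Subscribe to all topics on the UMH MQTT broker and print topic and payload
mosquitto_sub -h <instance-ip-address> -p 1883 -t '#' -v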

Troubleshooting

Error: You must be logged in to the server while using the kubectl Command

If you encounter the error below while using the kubectl command:

E1121 13:05:52.772843  218533 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)

This issue can be resolved by setting the KUBECONFIG environment variable. Run the following command:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Alternatively, use the --kubeconfig flag to specify the configuration file path:

sudo $(which kubectl) --kubeconfig /etc/rancher/k3s/k3s.yaml get pods -n united-manufacturing-hub

“Permission Denied” Error with kubectl Command

Encountering the error below while using the kubectl command:

error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

Indicates the need for root access. Run the command with sudo, or log in as the root user.

kubectl: command not found error

If you encounter the error below while using the kubectl command:

kubectl: command not found

The solution is to use the full path to the kubectl binary. You can do this by prefixing the command with /usr/local/bin/ (for RHEL and other Linux systems) or /opt/bin/ (for Flatcar), or by adding it to your PATH environment variable:

/usr/local/bin/kubectl get pods -n united-manufacturing-hub

# or

export PATH=$PATH:/usr/local/bin

Viewing Pod Logs for Troubleshooting

Logs are essential for diagnosing and understanding the behavior of your applications and infrastructure. Here’s how to view logs for key components:

  • Management Companion Logs: To view the real-time logs of the Management Companion, use the following command. This can be helpful for monitoring the Companion’s activities or troubleshooting issues.

    sudo $(which kubectl) logs -f mgmtcompanion-0 -n mgmtcompanion --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  • TimescaleDB Logs: For real-time logging of the TimescaleDB, execute this command. It’s useful for tracking database operations and identifying potential issues.

    sudo $(which kubectl) logs -f united-manufacturing-hub-timescaledb-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    

Restarting a Pod for Troubleshooting

Sometimes, the most straightforward troubleshooting method is to restart a problematic pod. Here’s how to restart specific pods:

  • Restart Management Companion: If you encounter issues with the Management Companion, restart it with this command:

    sudo $(which kubectl) delete pod mgmtcompanion-0 -n mgmtcompanion --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  • Restart TimescaleDB: Should TimescaleDB exhibit unexpected behavior, use the following command to restart it:

    sudo $(which kubectl) delete pod united-manufacturing-hub-timescaledb-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    

Troubleshooting Redpanda / Kafka

For insights into your Kafka streams managed by Redpanda, these commands are invaluable:

  • List All Topics: To get an overview of all topics in your Redpanda cluster:

    sudo $(which kubectl) exec -it --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub  united-manufacturing-hub-kafka-0 -- rpk topic list
    
  • Describe a Specific Topic: For detailed information about a specific topic, such as umh.v1.e2e-enterprise.aachen.packaging, use:

    sudo $(which kubectl) exec -it --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub united-manufacturing-hub-kafka-0 -- rpk topic describe umh.v1.e2e-enterprise.aachen.packaging
    
  • Consume Messages from a Topic: To view messages from a topic like umh.v1.e2e-enterprise.aachen.packaging, this command is useful for real-time data observation:

    sudo $(which kubectl) exec -it --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub united-manufacturing-hub-kafka-0 -- rpk topic consume umh.v1.e2e-enterprise.aachen.packaging
    

What’s next?

Now that you have learned how to monitor, manage and configure your UMH instance with the Management Console, you can start creating your first data flow. To learn how to do this, proceed to the Data Acquisition and Manipulation chapter.

1.3 - 3. Data Acquisition and Manipulation

Learn how to connect various data sources to the UMH and format data into the UMH data model.

The United Manufacturing Hub excels in its ability to integrate diverse data sources and standardize data into a unified model, enabling seamless integration of existing data infrastructure for analysis and processing.

Currently, data sources can be connected to the UMH through Benthos for OPC UA and Node-RED for other types.

The UMH includes 3 pre-configured data simulators for testing connections:

  • an OPC UA simulator
  • an MQTT simulator (simulated IoT sensors)
  • a PackML simulator

Connect OPC UA Data Sources

OPC UA, often complex, can be streamlined using our Benthos-based OPC UA connector accessible from the Management Console.

Create a Connection with the Management Console

After logging into the Management Console and selecting your instance, navigate to the Connection Management tab, where you’ll find all your connections alongside their status.

Connection Management

Uninitialized Connections are established but not yet configured as data sources, while Initialized Connections are fully configured.

The health status reflects the UMH-data source connection, not data transmission status.

To add a new connection, click Add Connection. Currently, we only provide two types of connections:

  • OPC-UA Server: represents a connection to an OPC-UA server.
  • n/a: represents a generic asset (useful for connections we don’t support yet).

Enter the required server details, which include a unique name and the address in the format ip:port. Optionally, you can also attach some notes to the connection, which can be useful for documentation purposes.

For testing with the OPC UA simulator, select the OPC-UA Server type and use the following address:

united-manufacturing-hub-opcuasimulator-service:46010

Connection Details

Test the connection, and if successful, click Add Connection to save and deploy it.

Initialize the Connection

Back in Connection Management, your new connection should be listed in the table, and you will notice that its health is reported as Not configured.

At this point, it’s worth discussing what initializing a connection means and why it’s important.

New connections are created in an “uninitialized” state, meaning they are not yet configured as data sources, hence the Not configured health status. For them to be actually useful, they need to be initialized, which fully configures them as data sources and creates a new Benthos deployment that publishes data to the UMH Kafka broker.

Initialize the connection by pressing the “play” button under the Actions column.

Initialize Connection

Enter authentication details (use Anonymous for no authentication, as with the OPC UA simulator).

Specify OPC UA nodes to subscribe to in a yaml file, following the ISA95 standard:

  nodes:
    - opcuaID: ns=2;s=Pressure
      enterprise: pharma-genix
      site: aachen
      area: packaging
      line: packaging_1
      workcell: blister
      originID: PLC13
      tagName: machineState
      schema: _historian

Mandatory fields are opcuaID, enterprise, tagName and schema.

Learn more about Data Modeling in the Unified Namespace in the Learning Hub.

Review and confirm the nodes, then proceed with initialization. Successful initialization will be indicated by a green message.

The connection’s health status should now be marked as Healthy and display the current message rate. You can also check the tooltip for more details.

Connection Management
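If you want to double-check on the instance itself that the data source publishes to the Kafka broker, you can reuse the rpk commands from the Managing the System chapter. This is only a sketch: the exact topic layout depends on the data model, but the enterprise name from the YAML above will appear in the topic either way:

# List the Kafka topics created for the pharma-genix enterprise
sudo $(which kubectl) exec -it --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub united-manufacturing-hub-kafka-0 -- rpk topic list | grep pharma-genix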

Connect MQTT Servers

There are a lot of options to connect an MQTT server to the UMH. For this guide, we’ll use Node-RED to connect to the MQTT simulator and format data into the UMH data model.

To access Node-RED’s web interface, navigate to:

http://<instance-ip-address>:1880/nodered

Replace <instance-ip-address> with your UMH instance’s IP. Ensure you’re on the same network for access.

Add the MQTT Connection

In Node-RED, find the mqtt-in node from the node palette and drag it into your flow. Double-click to configure and click the pencil button next to the Server field.

Enter your MQTT broker’s details:

  • Server: united-manufacturing-hub-mqtt
  • Port: 1883

For the purpose of this guide, we’ll use the UMH MQTT broker, even though the data coming from it is already bridged to Kafka by the MQTT Kafka Bridge. Since the simulated data uses the old data model, we’ll use Node-RED to convert it to the new data model.

Click Add to save.

Connect MQTT to Node-RED

Define the subscription topic. For example, ia/raw/development/ioTSensors/Temperature is used by the MQTT Simulator.

To test, link a debug node to the mqtt-in node and deploy. Open the debug pane by clicking on the bug icon on the top right of the screen to view messages from the broker.

MQTT Debug Connection

Explore Unified Namespace for details on topic structuring.

Format Incoming Messages

Use a function node to format raw data. Connect it to the mqtt-in node and paste this script:

msg.payload = {
  timestamp_ms: Date.now(), // timestamp in milliseconds, as expected by the UMH data model
  temperature: Number(msg.payload), // parse the raw string payload into a number
};
return msg;

Finalize with Done.

Then, connect a JSON node to the function node to serialize the object into a JSON string.

This function transforms the payload into the correct format for the UMH data model.

Send Formatted Data to Kafka

For this guide, we’ll send data to the UMH Kafka broker.

Ensure you have node-red-contrib-kafkajs installed. If not, see How to Get Missing Plugins in Node-RED.

Add a kafka-producer node, connecting it to the JSON node. Configure as follows:

  1. Open the configuration menu by double-clicking on the kafka-producer node. After that, click on the edit button.

    Node-RED Kafka Producer

  2. Change the fields of Brokers and Client ID as follows:

    • Brokers: united-manufacturing-hub-kafka:9092
    • Client ID: nodered

    Node-RED Kafka Broker Configuration

    Click on Update to save.

  3. Structure Kafka topics according to UMH data model, following the ISA95 standard:

    umh.v1.<enterprise>.<site>.<area>.<line>.<workcell>.<originID>.<schema>.<tagName>
    
    • umh.v1: obligatory versioning prefix
    • enterprise: The company’s name
    • site: The facility’s location
    • area: The specific production area
    • line: The production line
    • workcell: The workcell in the production line
    • originID: The data source ID
    • schema: The schema of your data
    • tagName: An arbitrary tag name, depending on the context

    The enterprise and schema fields are required. To learn more about the UMH data-model, read the documentation.

    For example, if you want to structure a topic for the temperature in Celsius from a PLC, which

    • is running in a factory of Pharma-Genix in Aachen.
    • is running in blister workcell in the packaging line 1 in the packaging area.
    • has the ID PLC13.

    and you want to use the _historian schema, then the topic should look like

    msg.topic = "umh.v1.pharma-genix.aachen.packaging.packaging_1.blister.PLC13._historian.temperatureCelsius";
    

    Add this topic to the script in the function node, which was created in the Format Incoming Messages section.

    Node-RED Kafka Topic

    Alternatively, you can set the topic to the kafka-producer node directly.

  4. Click Done and deploy.

Optional: Add a debug node for output visualization.

Node-RED MQTT to Kafka

Connect Kafka Data Sources

Kafka data sources can be integrated with UMH exclusively through Node-RED.

To access Node-RED’s web interface, navigate to:

http://<instance-ip-address>:1880/nodered

Replace <instance-ip-address> with your UMH instance’s IP, ensuring you’re on the same network for access.

Before proceeding, make sure the node-red-contrib-kafkajs plugin is installed. For installation guidance, see How to Get Missing Plugins in Node-RED.

Add the Kafka Connection

In Node-RED, locate the kafka-consumer node and drag it into your flow. Double-click to configure and click the pencil button beside the Server field.

If you have followed the guide, the kafka client should already be configured and automatically selected.

Enter your Kafka broker’s details:

  • Brokers: united-manufacturing-hub-kafka:9092
  • Client ID: nodered

Click Add to save.

Connect Kafka to Node-RED

Set the subscription topic. For demonstration, we’ll use the topic created earlier:

umh.v1.pharma-genix.aachen.packaging.packaging_1.blister.PLC13._historian.temperatureCelsius

Link a debug node to the kafka-consumer node, deploy, and observe messages in the debug pane.

Kafka Debug Connection

For topic structuring guidelines, refer to Unified Namespace.

Format Incoming Messages

Since the data is already processed from the previous step, use a function node to convert the temperature from Celsius to Fahrenheit. Connect it to the kafka-consumer node and paste the following script:

// The kafka-consumer node wraps the record; the message itself is a JSON string in msg.payload.value
const payloadObj = JSON.parse(msg.payload.value);
const celsius = payloadObj.temperature;
const fahrenheit = (celsius * 9) / 5 + 32;

// Re-emit the value in the same payload format, now in Fahrenheit
msg.payload = {
  timestamp_ms: Date.now(),
  temperature: fahrenheit,
};

return msg;

Finalize with Done.

Then, connect a JSON node to the function node to serialize the object into a JSON string.

Send Formatted Data Back to Kafka

Now, we’ll route the transformed data back to the Kafka broker, in a different topic.

Add a kafka-producer node, connecting it to the JSON node. Use the same Kafka client as earlier, and set the following topic for output:

umh.v1.pharma-genix.aachen.packaging.packaging_1.blister.PLC13._historian.temperatureFahrenheit

For more on UMH data modeling, consult the documentation.

Press Done and deploy.

Consider adding a debug node for visualizing output data.

Node-RED Kafka to Kafka
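Besides the debug node, you can also verify the converted values directly on the broker. This is a minimal sketch that reuses the rpk consume command from the Managing the System chapter with the output topic defined above:

sudo $(which kubectl) exec -it --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub united-manufacturing-hub-kafka-0 -- rpk topic consume umh.v1.pharma-genix.aachen.packaging.packaging_1.blister.PLC13._historian.temperatureFahrenheit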

Tag Browser

Returning to the Management Console, you’ll find the output data displayed within the Tag Browser, which offers a user-friendly tree structure for browsing the tag hierarchy we defined in previous steps.

Tag Browser

What’s next

Next, we’ll dive into Data Visualization, where you’ll learn to create Grafana dashboards using your newly configured data sources. This next chapter will help you visualize and interpret your data effectively.

1.4 - 4. Data Visualization

Build a simple Grafana dashboard with the gathered data.

In the following step, we will delve into the process of visualizing the data. This chapter focuses on the construction of dashboards using Grafana. The dashboard will be crafted around the OPC-UA data source and the Node-RED flow, both of which were established in the previous chapter.

Creating a Grafana dashboard

  1. If you haven’t done so already, open and log in to Grafana by following the instructions given in the Access Grafana section of chapter 2.

  2. Once logged in, hover over the fourth icon in the left menu, dashboards, and click on + New dashboard.

    Untitled

  3. Click on Add a new panel, which will redirect you to the edit panel view.

  4. Next, we’ll retrieve OPC-UA data from TimescaleDB. Before moving forward, ensure that the UMH TimescaleDB data source is selected; it should be the default choice.

    Untitled

  5. We’ll show you how to run queries with both the Builder mode (a graphical query builder) and the Code mode (a code editor to write RAW SQL). Let’s begin with the graphical approach.

  6. Let’s query the value, timestamp and name columns from the tag table. For some guidance, refer to the image below.

    Untitled

  7. Click on the Run Query button located next to the Builder/Code modes switcher.

  8. You should now see a time series graph based on the query you just ran. The Builder mode is a great way to get started, but it has its limitations. For more complex use cases, we recommend using the Code mode, which we’ll cover in the next steps.

  9. Open the code editor by switching from Builder to Code.

  10. Now we’ll run a slightly more complex query. We’ll retrieve the same columns as before, but this time only for a specific asset.

    SELECT name, value, time_bucket('$__interval', timestamp) AS time
    FROM tag
    WHERE asset_id = get_asset_id(
    	'pharma-genix',
    	'aachen',
    	'packaging',
    	'packaging_1',
    	'blister',
    	'PLC13'
    )
    AND $__timeFilter(timestamp)
    GROUP BY time, name, value
    ORDER BY time DESC;
    

    There are a few things to unpack here, so let’s break it down:

    • time_bucket is a TimescaleDB function that groups data into time intervals. The first argument is the interval, which is set to $__interval to match the time range selected in the Grafana dashboard (1m, 6h, 7d, etc). The second argument is the column to group by, which is timestamp, as defined in our data model.
    • The table we’re querying is tag; this varies depending on the tag’s data type. Find more information in the data model linked above.
    • The asset_id is retrieved using the get_asset_id function, which is a custom plpgsql function we provide to simplify the process of querying tag data from a specific asset. Click here for more examples.
    • $__timeFilter is a Grafana function that filters the data based on a given time range. It receives one argument, which is the column to filter by, in our case timestamp.
    • Finally, we group the data by time (timestamp alias), name, and value, and order it by time in descending order to display the most recent data first.

You can also select the desired tag in the Tag Browser of the Management Console, and directly copy the provided SQL query from there.

  11. Same as before, click on the Run Query button to execute the query. If you’ve been following along, you won’t see any noticeable changes, since we only have one asset in our database.

  12. Feel free to experiment with different queries to get a better feel for the data model.

  13. Next, you can customize your dashboard. On the right side, you’ll find various options, such as specifying units or setting thresholds. Play around until it suits your needs.

  14. Once you’re done making adjustments, click on the blue Apply button in the top right-hand corner to save the panel and return to the overview.

  15. Congratulations, you have created your first Grafana dashboard, and for now it should look similar to the one below.

    Untitled

What’s next?

The next topic is Moving to Production, where we will explain what it means to move the UMH to a manufacturing environment. Click here to proceed.

1.5 - 5. Moving to Production

Move the United Manufacturing Hub to production.

This chapter involves deploying the United Manufacturing Hub (UMH) on a virtual machine or an edge device, allowing you to connect with your production assets. However, we recognize the importance of familiarizing yourself with the United Manufacturing Hub beforehand. Feel free to delve deeper into our product, explore the specifics of local installation, or proceed with the production deployment. The guide below will kickstart your UMH journey in a production setting.

Check out our community

We are quite active on GitHub and Discord.

Feel free to join our community, introduce yourself and share your best-practices and experiences.

Learn more about the United Manufacturing Hub

If you like reading more about its features and architecture, check out the following chapters:

  • Features to understand the capabilities of the United Manufacturing Hub and learn how to use them.
  • Architecture to learn what is behind the United Manufacturing Hub and how everything works together.

Ready to transition to production? Continue reading to discover how to install UMH and seamlessly connect multiple machines to your instance.

Set up your first instance and connect to a few machines

If you want to get a first impression of the UMH in a production environment, connecting to machines on your shop floor, follow these steps:

1. Choose Your Hardware

Before starting the installation process, decide whether to use a virtual machine (VM), a generic server, or an edge device. For ease of setup, we recommend using a VM. Ensure that the selected device has network access to the machines.

2. Select Machines with OPC UA for Testing

For testing purposes, it’s recommended to use machines with OPC UA. If your machines use other protocols, consider Node-RED as an alternative for data connection. Check the list of supported protocols and how to connect them to Node-RED.

3. Installation Process

The installation is well documented in the first chapter. But here’s a quick overview:

  • Click on the + Add Instance button in the instance dashboard of the Management Console, redirecting you to the installation process page.

  • Select the Install UMH Only option, redirecting you to the install command generation page.

  • Finally, follow the provided instructions to set up your instance. If everything went well, there should be a button at the bottom right corner of the page, redirecting you back to the instance dashboard.

4. Network Configuration

Once your UMH instance is up and running, ensure it is placed in the same network as your machines. Additionally, verify that the device running the Management Console is also within the same network.

While basic management and monitoring with the Management Console don’t necessarily demand extensive network configuration, it’s important to note that various open-source tools integrated into UMH do. Therefore, to take full advantage of the UMH, ensure that you can reach the IP of the server to access Grafana or Node-RED, and that the connection is not blocked by a firewall.
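As a rough check from your workstation, you can verify that the web interfaces respond, assuming curl is available and the ports from the Managing the System chapter are in use:

# Check that Grafana and Node-RED are reachable from your workstation
curl -sSf -o /dev/null http://<instance-ip-address>:8080 && echo "Grafana reachable"
curl -sSf -o /dev/null http://<instance-ip-address>:1880/nodered && echo "Node-RED reachable"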

5. Configure Connections and Data Sources

The connections and data sources setup is documented in detail in the third chapter. But here’s a quick overview for OPC UA:

  • Assuming you have a selected instance running, navigate to the Data Connections tab and click on the + Add Connection button.
  • Click on the OPC UA Server option, redirecting you to the OPC UA connection setup page.
  • Follow the provided instructions, test the connection, and if successful you’ll be able to deploy it.
  • Once successfully deployed and back to the Data Connections tab, you’ll see the new connection under Uninitialized Connections.
  • To initialize it, navigate to Data Sources > Uninitialized Connections, redirecting you to the data source setup page.
  • Again, follow the provided instructions. If you’re having trouble, refer to the more detailed guide.
  • Once successfully deployed and back to the Data Sources tab, you’ll see it under Data Sources.
  • Congratulations! Your OPC UA connection and data source are now configured, and your UMH instance is ready to gather valuable insights from your machines.

6. Alternative Protocols

The previous step exclusively covered OPC UA. If your machines use protocols other than OPC UA, we recommend exploring Node-RED as a versatile solution for connecting and gathering data.

You can use it to generate a new dataflow by using your machine’s data as input. This collected data can even be used in other tools, such as building a dashboard in Grafana.

If you encounter any issues, feel free to ask for help on our Discord channel.

Play around with it locally

If you want to get a first impression of the UMH in a local environment, we recommend checking out the following topics:

Grafana Canvas

If you’re interested in creating visually appealing Grafana dashboards, you might want to try Grafana-Canvas. In our previous blog article, we explained why Grafana-Canvas is a valuable addition to your standard Grafana dashboard. If you’d like to learn how to build one, check out our tutorial.

Untitled

OPC/UA-Simulator

If you want to get a good overview of how the OPC/UA protocol works and how to connect it to the UMH, the OPC/UA-simulator is a useful tool. Detailed instructions can be found in this guide.

Untitled

PackML-Simulator

For those looking to get started with PackML, the PackML Simulator is another helpful simulator. Check out our tutorial on how to create a Node-RED flow with PackML data.

Untitled

Benthos

Benthos is a highly scalable data manipulation and IT connection tool. If you’re interested in learning more about it, check out our tutorial.

Untitled

Kepware

At times, you may need to connect different, older protocols. In such cases, KepwareServerEx can help bridge the gap between these older protocols and the UMH. If you’re interested in learning more, check out our tutorial.

Deployment to production

Ready to go to production? Go install it!

Follow our step-by-step tutorial on how to install the UMH on an edge device or a virtual machine using Flatcar. We’ve also written a blog article explaining why we use Flatcar as the operating system for the industrial IoT, which you can find here.

Make sure to check out our advanced production guides, which include detailed instructions on how to secure your setup and how to best integrate with your infrastructure.

2 - Features

Do you want to understand the capabilities of the United Manufacturing Hub, but do not want to get lost in technical architecture diagrams? Here you can find all the features explained on a few pages.

2.1 - Connectivity

Introduction to IIoT Connections and Data Sources Management in the United Manufacturing Hub.

In IIoT infrastructures, it can sometimes be challenging to extract and contextualize data from various systems into the Unified Namespace, because there is no universal solution. It usually requires many different tools, each one tailored to the specific infrastructure, making them hard to manage and maintain.

With the United Manufacturing Hub and the Management Console, we aim to solve this problem by providing a simple and easy to use tool to manage all the assets in your factory.

For lack of a better term, when talking about a system that can be connected to and that provides data, we will use the term asset.

When should I use it?

Contextualizing data can present a variety of challenges, both technical and at the organizational level. The Connection Management functionality aims to reduce the complexity that comes with these challenges.

Here are some common issues that can be solved with the Connection Management:

  • It is hard to get an overview of all the data sources and their connections' status, as the concepts of “connection” and “data source” are often decoupled. This often leads to tracking the connections’ information in long spreadsheets, which are hard to maintain and troubleshoot.
  • Handling uncommon communication protocols.
  • Dealing with non-standard connections, like a 4-20 mA sensor or a USB-connected barcode reader.
  • Advanced IT tools like Apache Spark or Apache Flink may be challenging for OT personnel who have crucial domain knowledge.
  • Traditional OT tools often struggle in modern IT environments, lacking features like Docker compatibility, monitoring, automated backups, or high availability.

What can I do with it?

Connection Management

The Connection Management functionality in the Management Console aims to address those challenges by providing a simple and easy to use tool to manage all the assets in your factory.

You can add, delete, and most importantly, visualize the status of all your connections in a single place. For example, a periodic check is performed to measure the latency of each connection, and the status of the connection is displayed in the Management Console.

You can also add notes to each connection, so that you can keep all the documentation in a single place.

Connection Notes

You can then configure a data source for each connection, to start extracting data from your assets. Once the data source is configured, specific information about its status is displayed, prompting you in case of misconfigurations, data not being received, or any other error that may occur.

How can I use it?

Add new connections from the Connection Management page of the Management Console. Then, configure a data source for each of them by choosing one of the available tools, depending on the type of connection.

The following tools come with the United Manufacturing Hub and are recommended for extracting data from your assets:

Node-RED

Node-RED is a leading open-source tool for IIoT connectivity. We recommend this tool for prototyping and integrating parts of the shop floor that demand high levels of customization and domain knowledge.

Even though it may be unreliable in high-throughput scenarios, it has a vast global community that provides a wide range of connectors for different protocols and data sources, while remaining very user-friendly with its visual programming approach.

Benthos UMH

Benthos UMH is a custom extension of the Benthos project. It allows you to connect assets that communicate via the OPC UA protocol, and it is recommended for scenarios involving the extraction of large data volumes in a standardized format.

It is a lightweight, open-source tool that is easy to deploy and manage. It is ideal for moving medium-sized data volumes more reliably than Node-RED, but it requires some technical knowledge.

Other Tools

The United Manufacturing Hub also provides tools for connecting data sources that use other types of connections. For example, you can easily connect ifm IO-Link sensors or USB barcode readers.

Third-Party Tools

Any existing connectivity solution can be integrated with the United Manufacturing Hub, assuming it can send data to either MQTT or Kafka. Additionally, if you want to deploy those tools on the Device & Container Infrastructure, they must be available as a Docker container (developed following best practices). Therefore, we recommend using the tools mentioned above, as they are the most tested and reliable.

What are the limitations?

Some of the tools still require some technical knowledge to be used. We are working on improving the user experience and documentation to make them more accessible.

Where to get more information?

2.1.1 - Node-RED

Connect devices on the shop floor using Node-RED with United Manufacturing Hub’s Unified Namespace. Simplify data integration across PLCs, Quality Stations, and MES/ERP systems with a user-friendly UI.

One feature of the United Manufacturing Hub is connecting devices on the shop floor, such as PLCs, Quality Stations or MES / ERP systems, with the Unified Namespace using Node-RED. Node-RED has a large library of nodes that lets you connect various protocols. It also has a user-friendly, low-code UI, making it easy to configure the desired nodes.

When should I use it?

Sometimes it is necessary to connect a lot of different protocols (e.g., Siemens S7, OPC UA, Serial, …) and Node-RED can be a maintainable solution to connect all these protocols without the need for other data connectivity tools. Node-RED is widely known in the IT/OT community, making it a familiar tool for a lot of users.

What can I do with it?

By default, there are connector nodes for common protocols:

  • connect to MQTT using the MQTT node
  • connect to HTTP using the HTTP node
  • connect to TCP using the TCP node
  • connect to IP using the UDP node

Furthermore, you can install packages to support more connection protocols. For example:

You can additionally contextualize the data, using function or other nodes to manipulate the received data.

How can I use it?

Node-RED comes preinstalled as a microservice with the United Manufacturing Hub.

  1. To access Node-RED, simply open the following URL in your browser:
http://<instance-ip-address>:1880/nodered
  2. Begin exploring right away! If you require inspiration on where to start, we provide a variety of guides to help you become familiar with various Node-RED workflows, including how to process data and align it with the UMH data model:

What are the limitations?

  • Most packages have no enterprise support. If you encounter any errors, you need to ask the community. However, we have found that these packages are often more stable than the commercial ones out there, as they have been battle-tested by far more users than commercial software.
  • Having many flows without following a strict structure generally leads to confusion.
  • One additional limitation is the speed of development of Node-RED: after a big Node-RED or JavaScript update, dependencies are likely to break, and the individual community-maintained nodes need to be updated.

Where to get more information?

2.1.2 - Benthos UMH

Configure OPC-UA data sources to stream data to Kafka directly in the Management Console.

Benthos is a stream processing tool that is designed to make common data engineering tasks such as transformations, integrations, and multiplexing easy to perform and manage. It uses declarative, unit-testable configuration, allowing users to easily adapt their data pipelines as requirements change. Benthos is able to connect to a wide range of sources and sinks, and can use different languages for processing and mapping data.

Benthos UMH is a custom extension of Benthos that is designed to connect to OPC-UA servers and stream data into the Unified Namespace.

When should I use it?

OPC UA is a communication protocol coming from the OT industry, so integration with IT tools is necessary to stream data from an OPC UA server. With Benthos UMH, you can easily connect to an OPC UA server, define the nodes you want to stream, and send the data to the Unified Namespace.

Furthermore, in our tests, Benthos has proven more reliable than tools like Node-RED, when it comes to handling large amounts of data.

What can I do with it?

Benthos UMH offers some benefits, including:

  • Management Console integration: Configure and deploy any number of Benthos UMH instances directly from the Management Console.
  • OPC-UA support: Connect to any OPC-UA server and stream data into the Unified Namespace.
  • Report by exception: By configuring the OPC-UA nodes in subscribe mode, you can stream data only when the value of the node changes.
  • Per-node configuration: Define the nodes you want to stream and configure them individually.
  • Broad customization: Use Benthos’ extensive configuration options to customize your data pipeline.
  • Easy deployment: Deploy Benthos UMH as a standalone Docker container or directly from the Management Console.
  • Fully open source: Benthos UMH is fully open source and available on Github.

How can I use it?

With the Management Console

The easiest way to use Benthos UMH is to deploy it directly from the Management Console

Currently, only OPC-UA data sources can be configured from the Management Console. To use other data sources, you must deploy Benthos UMH in standalone mode or use Node-RED.

You first have to add a new connection to your OPC-UA server from the Connection Management tab.

Add Connection

Afterwards, you can initialize the connection by pressing the Play button next to the connection. Select the correct authentication method and enter the OPC-UA nodes you want to stream.

Currently, the only method supported for configuring OPC-UA nodes is by specifying a YAML file. You can find an example file below:

nodes:
    - opcuaID: ns=2;s=Pressure
      enterprise: pharma-genix
      site: aachen
      area: packaging
      line: packaging_1
      workcell: blister
      originID: PLC13
      tagName: machineState
      schema: _historian

Mandatory fields are opcuaID, enterprise, tagName and schema. opcuaID is the NodeID in OPC-UA and can also be a folder (see the README for more information). The remaining fields are the components of the resulting topic / ISA-95 structure (see also our data model). By default, the schema will always be _historian, and tagName is the key name.

Standalone

You can manually deploy Benthos UMH as part of the UMH stack by using the provided Docker image and following the instructions in the README.

This way, you have full control over the configuration of Benthos UMH and can use any data source or sink supported by Benthos, along with the full range of processors and other configuration options.
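As a rough sketch of such a standalone deployment, where the image name is a placeholder and should be replaced by the one referenced in the Benthos UMH README:

# Run Benthos UMH as a standalone container with your own configuration file
docker run --rm -v "$(pwd)/benthos.yaml:/benthos.yaml" <benthos-umh-image> -c /benthos.yaml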

Read the official Benthos documentation for more information on how to use different components.

What are the limitations?

While Benthos is great at handling large amounts of data, it does not allow for the same level of flow customization as Node-RED. If you need to perform complex data transformations or integrate with other systems, you should consider using Node-RED instead.

Additionally, the Management Console currently only supports deploying Benthos UMH with OPC-UA data sources. If you want to use other data sources, you must deploy Benthos UMH in standalone mode or use Node-RED.

Where to get more information?

2.1.3 - Other Tools

2.1.3.1 - Retrofitting with ifm IO-link master and sensorconnect

Upgrade older machines with ifm IO-Link master and Sensorconnect for seamless data collection and integration. Retrofit your shop floor with plug-and-play sensors for valuable insights and improved efficiency.

Retrofitting older machines with sensors is sometimes the only way to capture process-relevant information. In this article, we will focus on retrofitting with the ifm IO-Link master and Sensorconnect, a microservice of the United Manufacturing Hub that finds and reads out ifm IO-Link masters in the network and pushes sensor data to MQTT/Kafka for further processing.

When should I use it?

Retrofitting with an ifm IO-Link master such as the AL1350 and using Sensorconnect is ideal when dealing with older machines that are not equipped with any connectable hardware to read relevant information out of the machine itself. By placing sensors on the machine and connecting them to an IO-Link master, the required information can be gathered for valuable insights. Sensorconnect helps to easily connect to all sensors correctly and properly capture the large amount of sensor data provided.

What can I do with it?

With ifm IO-Link master and Sensorconnect, you can collect data from sensors and make it accessible for further use. Sensorconnect offers:

  • Automatic detection of ifm IO-Link masters in the network.
  • Identification of IO-Link and alternative digital or analog sensors connected to the master using converters such as the DP2200. Digital sensors employ a voltage range from 10 to 30 V DC, producing binary outputs of true or false. In contrast, analog sensors operate at 24 V DC, with a current range spanning from 4 to 20 mA. Using the appropriate converter, analog outputs can be effectively transformed into digital signals.
  • Constant polling of data from the detected sensors.
  • Interpreting the received data based on a sensor database containing thousands of entries.
  • Sending data in JSON format to MQTT and Kafka for further data processing.

How can I use it?

To use ifm IO-Link gateways and Sensorconnect, please follow these instructions:

  1. Ensure all IO-Link gateways are in the same network or accessible from your instance of the United Manufacturing Hub.
  2. Retrofit the machines by connecting the desired sensors and establish a connection with ifm IO-Link gateways.
  3. Deploy the sensorconnect feature and configure the Sensorconnect IP-range to either match the IP address using subnet notation /32, or, in cases involving multiple masters, configure it to scan an entire range, for example /24. To deploy the feature and change the value, execute the following command with your IP range:
    sudo $(which helm) upgrade --kubeconfig /etc/rancher/k3s/k3s.yaml  -n united-manufacturing-hub united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub --set _000_commonConfig.datasources.sensorconnect.enabled=true,_000_commonConfig.datasources.sensorconnect.iprange=<ip-range> --reuse-values --version $(sudo $(which helm) ls --kubeconfig /etc/rancher/k3s/k3s.yaml  -n united-manufacturing-hub -o json | jq -r '.[0].app_version')
    
  4. Once completed, the data should be available in your Unified Namespace.
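To quickly check that sensor values arrive, you can subscribe to the raw data over MQTT. This is a minimal sketch that assumes Sensorconnect publishes under the ia/raw/ prefix and that the Mosquitto clients are installed on your workstation:

mosquitto_sub -h <instance-ip-address> -p 1883 -t 'ia/raw/#' -v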

What are the limitations?

  • The current ifm firmware has a software bug that will cause the IO-Link master to crash if it receives too many requests. To resolve this issue, you can either request an experimental firmware, which is available exclusively from ifm, or re-connect the power to the IO-Link gateway.

Where to get more information?

2.1.3.2 - Retrofitting with USB barcodereader

Integrate USB barcode scanners with United Manufacturing Hub’s barcodereader microservice for seamless data publishing to Unified Namespace. Ideal for inventory, order processing, and quality testing stations.

The barcodereader microservice enables the processing of barcodes from USB-linked scanner devices, subsequently publishing the acquired data to the Unified Namespace.

When should I use it?

When you need to connect a barcode reader or any other USB device acting as a keyboard (HID). Typical use cases are scanning an order at the production machine from the accompanying order sheet, or scanning material for inventory and track & trace.

What can I do with it?

You can connect USB devices acting as a keyboard to the Unified Namespace. It will record all inputs and send them out once a return / enter character has been detected. A lot of barcode scanners work that way. Additionally, you can also connect something like a quality testing station (we once connected a Mitutoyo quality testing station).

How can I use it?

To use the barcodereader microservice, you will need to configure the Helm chart and enable it.

  1. Enable the barcodereader feature by executing the following command:
    sudo $(which helm) upgrade --kubeconfig /etc/rancher/k3s/k3s.yaml  -n united-manufacturing-hub united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub --set _000_commonConfig.datasources.barcodereader.enabled=true --reuse-values --version $(sudo $(which helm) ls --kubeconfig /etc/rancher/k3s/k3s.yaml  -n united-manufacturing-hub -o json | jq -r '.[0].app_version')
    
  2. During startup, it will show all connected USB devices. Remember yours and then change the INPUT_DEVICE_NAME and INPUT_DEVICE_PATH. Also set ASSET_ID, CUSTOMER_ID, etc., as the data will then be sent to the topic ia/ASSET_ID/.../barcode. You can change these values of the Helm chart using helm upgrade. You can find the list of parameters here. For example, execute the following command:
    sudo $(which helm) upgrade --kubeconfig /etc/rancher/k3s/k3s.yaml  -n united-manufacturing-hub united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub --set _000_commonConfig.datasources.barcodereader.USBDeviceName=<input-device-name>,_000_commonConfig.datasources.barcodereader.USBDevicePath=<input-device-path>,_000_commonConfig.datasources.barcodereader.machineID=<asset-id>,_000_commonConfig.datasources.barcodereader.customerID=<customer-id> --reuse-values --version $(sudo $(which helm) ls --kubeconfig /etc/rancher/k3s/k3s.yaml  -n united-manufacturing-hub -o json | jq -r '.[0].app_version')
    
  3. Scan a barcode, and it will be written to the topic ia/ASSET_ID/.../barcode.

Once installed, you can configure the microservice by setting the needed environment variables. The program will continuously scan for barcodes using the device and publish the data to the Kafka topic.
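
If you would rather look the device up directly on the host than read it from the startup log, the kernel’s input-device list shows the name and event handler of every connected HID device. This is a generic Linux check and not part of the barcodereader microservice:

# List all input devices with their names and event handlers (event0, event1, ...)
grep -E 'Name|Handlers' /proc/bus/input/devices

# Stable by-id symlinks are often the safest choice for the device path
ls -l /dev/input/by-id/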

What are the limitations?

  • Sometimes special characters are not parsed correctly. They need to be adjusted afterward in the Unified Namespace.

Where to get more information?

2.2 - Data Infrastructure

This page describes the data infrastructure of the United Manufacturing Hub.

2.2.1 - Unified Namespace

Seamlessly connect and communicate across shopfloor equipment, IT/OT systems, and microservices.

The Unified Namespace is a centralized, standardized, event-driven data architecture that enables seamless integration and communication across various devices and systems in an industrial environment. It operates on the principle that all data, regardless of whether there is an immediate consumer, should be published and made available for consumption. This means that any node in the network can act as either a producer or a consumer, depending on the needs of the system at any given time.

This architecture is the foundation of the United Manufacturing Hub, and you can read more about it in the Learning Hub article.

When should I use it?

In our opinion, the Unified Namespace provides the best tradeoff for connecting systems in manufacturing / shopfloor scenarios. It effectively eliminates the complexity of spaghetti diagrams and enables real-time data processing.

While data can be shared through databases, REST APIs, or message brokers, we believe that a message broker approach is most suitable for most manufacturing applications. Consequently, every piece of information within the United Manufacturing Hub is transmitted via a message broker.

Both MQTT and Kafka are used in the United Manufacturing Hub. MQTT is designed for safe message delivery between devices and simplifies gathering data on the shopfloor. However, it is not designed for reliable stream processing. Kafka, on the other hand, does not provide a simple way to collect data, but is well suited for contextualizing and processing it. Therefore, we combine the strengths of both MQTT and Kafka. You can get more information from this article.

What can I do with it?

The Unified Namespace in the United Manufacturing Hub provides you the following functionalities and applications:

  • Seamless Integration with MQTT: Facilitates straightforward connection with modern industrial equipment using the MQTT protocol.
  • Legacy Equipment Compatibility: Provides easy integration with older systems using tools like Node-RED or Benthos UMH, supporting various protocols like Siemens S7, OPC-UA, and Modbus.
  • Real-time Notifications: Enables instant alerting and data transmission through MQTT, crucial for time-sensitive operations.
  • Historical Data Access: Offers the ability to view and analyze past messages stored in Kafka logs, which is essential for troubleshooting and understanding historical trends.
  • Scalable Message Processing: Designed to handle large amounts of data from many devices efficiently, ensuring reliable message delivery even over unstable network connections. By using IT-standard tools, we can theoretically process data on the order of GB/second instead of messages/second.
  • Data Transformation and Transfer: Utilizes the Data Bridge to adapt and transmit data between different formats and systems, maintaining data consistency and reliability.

Each feature opens up possibilities for enhanced data management, real-time monitoring, and system optimization in industrial settings.

You can view the Unified Namespace by using the Management Console like in the picture below. The picture shows data under the topic umh/v1/demo-pharma-enterprise/Cologne/_historian/rainfall/isRaining, where

  • umh/v1 is a versioning prefix.
  • demo-pharma-enterprise is a sample enterprise tag.
  • Cologne is a sample site tag.
  • _historian is a schema tag. Data with this tag will be stored in the UMH’s database.
  • rainfall/isRaining is a sample schema dependent context, where rainfall is a tag group and isRaining is a tag belonging to it.

The full tag name uniquely identifies a single tag; it can be found in the Publisher & Subscriber Info table.
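
As a minimal, illustrative sketch, such a message could be published from the command line as shown below. This assumes the Mosquitto command-line clients are installed and that the instance’s MQTT broker is reachable on the default port 1883; here the tag group sits in the topic and the tag plus its timestamp form the JSON payload (see the Data Model (v1) section for the authoritative payload format):

mosquitto_pub -h <instance-ip> -p 1883 \
  -t 'umh/v1/demo-pharma-enterprise/Cologne/_historian/rainfall' \
  -m '{"timestamp_ms": 1712345678000, "isRaining": 1}'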

Tag Browser

The above image showcases the Tag Browser, our main tool for navigating the Unified Namespace. It includes the following features:

  • Data Aggregation: Automatically consolidates data from all connected instances / brokers.
  • Topic Structure: Displays the hierarchical structure of topics and which data belongs to which namespace.
  • Tag Folder Structure: Facilitates browsing through tag folders or groups within a single asset.
  • Schema validation: Introduces validation for known schemas such as _historian. In case of validation failure, the corresponding errors are displayed.
  • Publisher & Subscriber Info: Provides various details, such as the origins and destinations of the data, the instance it was published from, the messages per minute to get an overview on how much data is flowing, and the full tag name to uniquely identify the selected tag.
  • Payload Visualization: Displays payloads under validated schemas in a formatted/structured manner, enhancing readability. For unknown schemas without strict validation, the raw payload is displayed instead.
  • Tag Value History: Shows the last 100 received values for the selected tag, allowing you to track the changes in the data over time. Keep in mind that this feature is only available for tags that are part of the _historian schema.
  • Example SQL Query: Generates example SQL queries based on the selected tag, which can be used to query the data in the UMH’s database or in Grafana for visualization purposes.
  • Kafka Origin: Provides information about the Kafka key, topic and the actual payload that was sent via Kafka.

It’s important to note that data displayed in the Tag Browser represent snapshots; hence, data sent at intervals shorter than 10 seconds may not be accurately reflected.

You can find more detailed information about the topic structure here.

You can also use tools like MQTT Explorer (not included in the UMH) or the Redpanda Console (enabled by default, accessible via port 8090) to view the data of a single instance only.

How can I use it?

To effectively use the Unified Namespace in the United Manufacturing Hub, start by configuring your IoT devices to communicate with the UMH’s MQTT broker, considering the necessary security protocols. While MQTT is recommended for gathering data on the shopfloor, you can send messages to Kafka as well.

Once the devices are set up, handle the incoming data messages using tools like Node-RED or Benthos UMH. This step involves adjusting payloads and topics as needed. It’s also important to understand and follow the ISA95 standard model for data organization, using JSON as the primary format.

Additionally, the Data Bridge microservice plays a crucial role in transferring and transforming data between MQTT and Kafka, ensuring that it adheres to the UMH data model. You can configure a merge point to consolidate messages from multiple MQTT topics into a single Kafka topic. For instance, if you set a merge point of 3, the Data Bridge will consolidate messages from more detailed topics like umh/v1/plant1/machineA/temperature into a broader topic like umh/v1/plant1. This process helps in organizing and managing data efficiently, ensuring that messages are grouped logically while retaining key information for each topic in the Kafka message key.
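
To make this concrete, a message with a merge point of 3 would roughly be split as follows; the exact delimiter used inside the Kafka key is an implementation detail and is shown here purely for illustration:

MQTT topic  : umh/v1/plant1/machineA/temperature
Merge point : 3 (the first three topic levels become the Kafka topic)
Kafka topic : umh.v1.plant1
Kafka key   : machineA.temperature   (the remainder of the topic, so no information is lost)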

Recommendation: Send messages from IoT devices via MQTT and then work in Kafka only.

What are the limitations?

While JSON is the only supported payload format due to its accessibility, it’s important to note that it can be more resource-intensive compared to formats like Protobuf or Avro.

Where to get more information?

2.2.2 - Historian / Data Storage

Learn how the United Manufacturing Hub’s Historian feature provides reliable data storage and analysis for your manufacturing data.

The Historian / Data Storage feature in the United Manufacturing Hub provides reliable data storage and analysis for your manufacturing data. Essentially, a Historian is just another term for a data storage system, designed specifically for time-series data in manufacturing.

When should I use it?

If you want to reliably store data from your shop floor that is not designed to fulfill any legal purposes, such as GxP, we recommend you to use the United Manufacturing Hub’s Historian feature. In our opinion, open-source databases such as TimescaleDB are superior to traditional historians in terms of reliability, scalability and maintainability, but can be challenging to use for the OT engineer. The United Manufacturing Hub fills this usability gap, allowing OT engineers to easily ingest, process, and store data permanently in an open-source database.

What can I do with it?

The Historian / Data Storage feature of the United Manufacturing Hub allows you to:

Store and analyze data

  • Store data in TimescaleDB by using either the _historian or _analytics _schemas in the topics within the Unified Namespace.
  • Data can be sent to the Unified Namespace from various sources, allowing you to store tags from your PLC and production lines reliably. Optionally, you can use tag groups to manage a large number of tags and reduce the system load. Our Data Model page assists you in learning data modeling in the Unified Namespace.
  • Conduct basic data analysis, including automatic downsampling, gap filling, and statistical functions such as Min, Max, and Avg.

Query and visualize data

  • Query data in an ISA95 compliant model, from enterprise to site, area, production line, and work cell.
  • Visualize your data in Grafana to easily monitor and troubleshoot your production processes.

More information about the exact analytics functionalities can be found in the umh-datasource-v2 documentation.

Efficiently manage data

  • Compress and retain data to reduce database size using various techniques.

How can I use it?

To store your data in TimescaleDB, simply use the _historian or _analytics _schemas in your Data Model v1 compliant topic. This can be directly done in the OPC UA data source when the data is first inserted into the stack. Alternatively, it can be handled in Node-RED, which is useful if you’re still utilizing the old data model, or if you’re gathering data from non-OPC UA sources via Node-RED or sensorconnect.

Data sent with a different _schema will not be stored in TimescaleDB.

Data stored in TimescaleDB can be viewed in Grafana. An example can be found in the Get Started guide.

In Grafana you can select tags by using SQL queries. Here, you see an example:

SELECT name, value, timestamp
FROM tag
WHERE asset_id = get_asset_id(
  'pharma-genix',
  'aachen',
  'packaging',
  'packaging_1',
  'blister'
);

get_asset_id is a custom plpgsql function that we provide to simplify the process of querying tag data from a specific asset. To learn more about our database, visit this page.
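
Building on the query above, here is a minimal sketch of a downsampling query using TimescaleDB’s time_bucket function; the tag name temperature and the one-hour bucket are illustrative:

SELECT time_bucket('1 hour', "timestamp") AS bucket,
       avg(value) AS avg_value
FROM tag
WHERE name = 'temperature'
  AND asset_id = get_asset_id(
    'pharma-genix',
    'aachen',
    'packaging',
    'packaging_1',
    'blister'
  )
GROUP BY bucket
ORDER BY bucket;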

Also, you have the option to query data in your custom code by utilizing the API in factoryinsight or processing the data in the Unified Namespace.

For more information about what exactly is behind the Historian feature, check out our architecture page.

What are the limitations?

Apart from these limitations, the United Manufacturing Hub’s Historian feature is highly performant compared to legacy Historians.

Where to get more information?

2.2.3 - Shopfloor KPIs / Analytics (v1)

The Shopfloor KPI/Analytics feature of the United Manufacturing Hub provides equipment-based KPIs, configurable dashboards, and detailed analytics for production transparency. Configure OEE calculation and track root causes of low OEE using drill-downs. Easily ingest, process, and analyze data in Grafana.

The Shopfloor KPI / Analytics feature of the United Manufacturing Hub provides a configurable and plug-and-play approach to create “Shopfloor Dashboards” for production transparency consisting of various KPIs and drill-downs.

Click on the images to enlarge them. More examples can be found in this YouTube video and in our community-repo on GitHub.

When should I use it?

If you want to create production dashboards that are highly configurable and can drill down into specific KPIs, the Shopfloor KPI / Analytics feature of the United Manufacturing Hub is an ideal choice. This feature is designed to help you quickly and easily create dashboards that provide a clear view of your shop floor performance.

What can I do with it?

The Shopfloor KPI / Analytics feature of the United Manufacturing Hub allows you to:

Query and visualize

In Grafana, you can:

  • Calculate the OEE (Overall Equipment Effectiveness) and view trends over time
    • Availability is calculated using the formula (plannedTime - stopTime) / plannedTime, where plannedTime is the duration of all machine states that do not belong in the Availability or Performance category, and stopTime is the duration of all machine states configured to be an availability stop.
    • Performance is calculated using the formula runningTime / (runningTime + stopTime), where runningTime is the duration of all machine states that consider the machine to be running, and stopTime is the duration of all machine states that are considered a performance loss. Note that this formula does not take into account losses caused by letting the machine run at a lower speed than possible. To approximate this, you can use the LowSpeedThresholdInPcsPerHour configuration option (see further below).
    • Quality is calculated using the formula good pieces / total pieces (a worked example combining the three factors follows this list)
  • Drill down into stop reasons (including histograms) to identify the root-causes for a potentially low OEE.
  • List all produced and planned orders including target vs actual produced pieces, total production time, stop reasons per order, and more using job and product tables.
  • See machine states, shifts, and orders on timelines to get a clear view of what happened during a specific time range.
  • View production speed and produced pieces over time.
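
As a worked example with invented numbers, combining the three factors in the usual way (OEE = Availability × Performance × Quality):

plannedTime = 480 min, availability stopTime = 60 min   →  Availability = (480 - 60) / 480 ≈ 0.875
runningTime = 400 min, performance stopTime = 20 min    →  Performance  = 400 / (400 + 20) ≈ 0.952
good pieces = 950, total pieces = 1000                  →  Quality      = 950 / 1000 = 0.95
OEE ≈ 0.875 × 0.952 × 0.95 ≈ 0.79 (79 %)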

Configure

In the database, you can configure:

  • Stop Reasons Configuration: Configure which stop reasons belong into which category for the OEE calculation and whether they should be included in the OEE calculation at all. For instance, some companies define changeovers as availability losses, some as performance losses. You can easily move them into the correct category.
  • Automatic Detection and Classification: Configure whether to automatically detect/classify certain types of machine states and stops:
    • AutomaticallyIdentifyChangeovers: If the machine state was an unspecified machine stop (UnknownStop), but an order was recently started, the time from the start of the order until the machine state turns to running will be considered a Changeover Preparation State (10010). If this happens at the end of the order, it will be a Changeover Post-processing State (10020).
    • MicrostopDurationInSeconds: If an unspecified stop (UnknownStop) has a duration smaller than a configurable threshold (e.g., 120 seconds), it will be considered a Microstop State (50000) instead. Some companies put small unknown stops into a different category (performance) than larger unknown stops, which usually end up in the availability loss bucket.
    • IgnoreMicrostopUnderThisDurationInSeconds: In some cases, the machine can actually stop for a couple of seconds in routine intervals, which might be unwanted as it makes analysis difficult. One can set a threshold to ignore microstops that are smaller than a configurable threshold (usually like 1-2 seconds).
    • MinimumRunningTimeInSeconds: Same logic if the machine is running for a couple of seconds only. With this configurable threshold, small run-times can be ignored. These can happen, for example, during the changeover phase.
    • ThresholdForNoShiftsConsideredBreakInSeconds: If no shift was planned, an UnknownStop will always be classified as a NoShift state. Some companies move smaller NoShift periods into their own category called “Break” and move them either into Availability or Performance.
    • LowSpeedThresholdInPcsPerHour: For a simplified performance calculation, a threshold can be set, and if the machine has a lower speed than this, it could be considered a LowSpeedState and could be categorized into the performance loss bucket.
  • Language Configuration: The language of the machine states can be configured using the languageCode configuration option (or overwritten in Grafana).

You can find the configuration options in the configurationTable.

How can I use it?

Using it is very easy:

  1. Send messages according to the UMH datamodel to the Unified Namespace (similar to the Historian feature)
  2. Configure your OEE calculation by adjusting the configuration table
  3. Open Grafana and check out our tutorial on how to select the data.

For more information about what exactly is behind the Analytics feature, check out our architecture page and our datamodel.

Where to get more information?

2.2.3.1 - Shopfloor KPIs / Analytics (v0)

The Shopfloor KPI/Analytics feature of the United Manufacturing Hub provides equipment-based KPIs, configurable dashboards, and detailed analytics for production transparency. Configure OEE calculation and track root causes of low OEE using drill-downs. Easily ingest, process, and analyze data in Grafana.

The Shopfloor KPI / Analytics feature of the United Manufacturing Hub provides a configurable and plug-and-play approach to create “Shopfloor Dashboards” for production transparency consisting of various KPIs and drill-downs.

When should I use it?

If you want to create production dashboards that are highly configurable and can drill down into specific KPIs, the Shopfloor KPI / Analytics feature of the United Manufacturing Hub is an ideal choice. This feature is designed to help you quickly and easily create dashboards that provide a clear view of your shop floor performance.

What can I do with it?

The Shopfloor KPI / Analytics feature of the United Manufacturing Hub allows you to:

Query and visualize

In Grafana, you can:

  • Calculate the OEE (Overall Equipment Effectiveness) and view trends over time
    • Availability is calculated using the formula (plannedTime - stopTime) / plannedTime, where plannedTime is the duration of all machine states that do not belong in the Availability or Performance category, and stopTime is the duration of all machine states configured to be an availability stop.
    • Performance is calculated using the formula runningTime / (runningTime + stopTime), where runningTime is the duration of all machine states that consider the machine to be running, and stopTime is the duration of all machine states that are considered a performance loss. Note that this formula does not take into account losses caused by letting the machine run at a lower speed than possible. To approximate this, you can use the LowSpeedThresholdInPcsPerHour configuration option (see further below).
    • Quality is calculated using the formula good pieces / total pieces
  • Drill down into stop reasons (including histograms) to identify the root-causes for a potentially low OEE.
  • List all produced and planned orders including target vs actual produced pieces, total production time, stop reasons per order, and more using job and product tables.
  • See machine states, shifts, and orders on timelines to get a clear view of what happened during a specific time range.
  • View production speed and produced pieces over time.

Configure

In the database, you can configure:

  • Stop Reasons Configuration: Configure which stop reasons belong into which category for the OEE calculation and whether they should be included in the OEE calculation at all. For instance, some companies define changeovers as availability losses, some as performance losses. You can easily move them into the correct category.
  • Automatic Detection and Classification: Configure whether to automatically detect/classify certain types of machine states and stops:
    • AutomaticallyIdentifyChangeovers: If the machine state was an unspecified machine stop (UnknownStop), but an order was recently started, the time from the start of the order until the machine state turns to running will be considered a Changeover Preparation State (10010). If this happens at the end of the order, it will be a Changeover Post-processing State (10020).
    • MicrostopDurationInSeconds: If an unspecified stop (UnknownStop) has a duration smaller than a configurable threshold (e.g., 120 seconds), it will be considered a Microstop State (50000) instead. Some companies put small unknown stops into a different category (performance) than larger unknown stops, which usually end up in the availability loss bucket.
    • IgnoreMicrostopUnderThisDurationInSeconds: In some cases, the machine can actually stop for a couple of seconds in routine intervals, which might be unwanted as it makes analysis difficult. One can set a threshold to ignore microstops that are smaller than a configurable threshold (usually like 1-2 seconds).
    • MinimumRunningTimeInSeconds: Same logic if the machine is running for a couple of seconds only. With this configurable threshold, small run-times can be ignored. These can happen, for example, during the changeover phase.
    • ThresholdForNoShiftsConsideredBreakInSeconds: If no shift was planned, an UnknownStop will always be classified as a NoShift state. Some companies move smaller NoShift periods into their own category called “Break” and move them either into Availability or Performance.
    • LowSpeedThresholdInPcsPerHour: For a simplified performance calculation, a threshold can be set, and if the machine has a lower speed than this, it could be considered a LowSpeedState and could be categorized into the performance loss bucket.
  • Language Configuration: The language of the machine states can be configured using the languageCode configuration option (or overwritten in Grafana).

You can find the configuration options in the configurationTable.

How can I use it?

Using it is very easy:

  1. Send messages according to the UMH datamodel to the Unified Namespace (similar to the Historian feature)
  2. Configure your OEE calculation by adjusting the configuration table
  3. Open Grafana, select your equipment, and choose the analysis you want. More information can be found in the umh-datasource-v2.

For more information about what exactly is behind the Analytics feature, check out our architecture page and our datamodel.

What are the limitations?

At the moment, the limitations are:

  • Speed losses in Performance are not calculated and can only be approximated using the LowSpeedThresholdInPcsPerHour configuration option
  • There is no way of tracking losses through reworked products. Either a product is scrapped or not.

Where to get more information?

2.2.4 - Alerting

Monitor and maintain your manufacturing processes with real-time Grafana alerts from the United Manufacturing Hub. Get notified of potential issues and reduce downtime by proactively addressing problems.

The United Manufacturing Hub utilizes a TimescaleDB database, which is based on PostgreSQL. Therefore, you can use the PostgreSQL plugin in Grafana to implement and configure alerts and notifications.

Why should I use it?

Alerts based on real-time data enable proactive problem detection. For example, you will receive a notification if the temperature of machine oil or an electrical component of a production line exceeds a defined limit. By utilizing such alerts, you can schedule maintenance, enhance efficiency, and reduce downtime in your factories.

What can I do with it?

Grafana alerts help you keep an eye on your production and manufacturing processes. By setting up alerts, you can quickly identify problems, ensuring smooth operations and high-quality products. An example of using alerts is the tracking of the temperature of an industrial oven. If the temperature goes too high or too low, you will get an alert, and the responsible team can take action before any damage occurs. Alerts can be configured in many different ways, for example, to set off an alarm if a maximum is reached once or if it exceeds a limit when averaged over a time period. It is also possible to include several values to create an alert, for example if a temperature surpasses a limit and/or the concentration of a component is too low. Notifications can be sent simultaneously across many services like Discord, Mail, Slack, Webhook, Telegram, or Microsoft Teams. It is also possible to forward the alert with SMS over a personal Webhook. A complete list can be found on the Grafana page about alerting.

How can I use it?

Follow this tutorial to set up an alert.

Alert Rule

When creating an alert, you first have to set the alert rule in Grafana. Here you set a name, specify which values are used for the rule, and when the rule is fired. Additionally, you can add labels for your rules, to link them to the correct contact points. You have to use SQL to select the desired values.

  1. To add a new rule, hover over the bell symbol on the left and click on Alert rules. Then click on the blue Create alert rule button.


  2. Choose a name for your rule.

  3. In the next step, you need to select and manipulate the value that triggers your alert and declare the function for the alert.

  • Subsection A is, by default, the selection of your values: You can use the Grafana builder for this, but it is not useful, as it cannot select a time interval even though there is a selector for it. If you choose, for example, the last 20 seconds, your query will select values from hours ago. Therefore, it is necessary to use SQL directly. To add the command manually, switch to Code in the right corner of the section.

    • First, you must select the value you want to create an alert for. In the United Manufacturing Hub’s data structure, a process value is stored in the table tag. Unfortunately Grafana cannot differentiate between different values of the same sensor; if you select the ConcentrationNH3 value from the example and more than one of the selected values violates your rule in the selected time interval, it will trigger multiple alerts. Because Grafana is not able to tell the alerts apart, this results in errors. To solve this, you need to add the value "timestamp" to the Select part. So the first part of the SQL command is: SELECT value, "timestamp".
    • The source is tag, so add FROM tag at the end.
    • The different values are distinguished by the variable name in the tag, so add WHERE name = '<key-name>' to select only the value you need. If you followed the Get Started guide, you can use temperature as the name.
    • Since the selection of the time interval in Grafana is not working, you must add this manually as an addition to the WHERE command: AND "timestamp" > (NOW() - INTERVAL 'X seconds'). X is the number of past seconds you want to query. It’s not useful to set X to less than 10 seconds, as this is the fastest interval Grafana can check your rule, and you might miss values.

    The complete command is:

    SELECT value, "timestamp" FROM tag WHERE name = 'temperature' AND "timestamp" > (NOW() - INTERVAL '10 seconds')
    
  • In subsection B, you need to reduce the values to numbers Grafana can work with. By default, Reduce will already be selected. However, you can change it to a different option by clicking the pencil icon next to the letter B. For this example, we will create an upper limit, so selecting Max as the Function is the best choice. Set Input as A (the output of the first section) and choose Strict for the Mode. Subsection B will then output the maximum value selected by the query in A as a single number.

  • In subsection C, you can establish the rule. If you select Math, you can utilize expressions like $B > 120 to trigger an alert when a value from section B ($B means the output from section B) exceeds 120. In this case, only the largest value selected in A is passed through the reduce function from B to C. A simpler way to set such a limit is by choosing Threshold instead of Math.


    To add more queries or expressions, find the buttons at the end of section two and click on the desired option. You can also preview the results of your queries and functions by clicking on Preview and check if they function correctly and fire an alert.

  4. Define the rule location, the time interval for rule checking, and the duration for which the rule has to be broken before an alert is triggered.

    • Select a name for your rule’s folder or add it to an existing one by clicking the arrow. Find all your rules grouped in these folders on the Alert rules page under Alerting.

    • An Evaluation group is a grouping of rules, which are checked after the same time interval. Creating a new group requires setting a time interval for rule checking. The minimum interval from Grafana is ten seconds.

    • Specify the duration the rule must be violated before triggering the alert. For example, with a ten-second check interval and a 20-second duration, the rule must be broken twice in a row before an alert is fired.


  5. Add details and descriptions for your rule.


  6. In the next step, you will be required to assign labels to your alert, ensuring it is directed to the appropriate contacts. For example, you may designate a label team with alertrule1: team = operator and alertrule2: team = management. It can be helpful to use labels more than once, like alertrule3: team = operator, to link multiple alerts to a contact point at once.


Your rule is now complete; click on Save and Exit in the upper right corner, next to section one.

Contact Point

In a contact point, you create a collection of addresses and services that should be notified in case of an alert, for example a Discord channel or Slack. When a linked alert is triggered, everyone within the contact point receives a message. The messages can be preconfigured and are specific to every service or contact. Follow these steps to create a contact point.

  1. Navigate to Contact points, located at the top of the Grafana alerting page.

  2. Click on the blue + Add contact point button.

  3. Now, you should see the settings page. Choose a name for your contact point.


  4. Pick the receiving service; in this example, Discord.

  5. Generate a new Webhook in your Discord server (Server Settings ⇒ Integrations ⇒ View Webhooks ⇒ New Webhook or create Webhook). Assign a name to the Webhook and designate the messaging channel. Copy the Webhook URL from Discord and insert it into the corresponding field in Grafana. Customize the message to Discord under Optional Discord settings if desired.

  6. If needed, add more services to the contact point by clicking + Add contact point integration.

  7. Save the contact point; you can see it in the Contact points list, below the grafana-default-email contact point.

Notification Policies

In a notification policy, you establish the connection of a contact point with the desired alerts. To add the notification policy, you need to do the following steps.

  1. Go to the Notification policies section in the Grafana alerting page, next to the Contact points.

  2. Select + New specific policy to create a new policy, followed by + Add matcher to choose the label and value from the alert (for example team = operator). In this example, both alertrule1 and alertrule3 will be forwarded to the associated contact point. You can include multiple labels in a single notification policy.

  3. Choose the contact point designated to receive the alert notifications. Now, the inputs should be like in the picture.


  4. Press Save policy to finalize your settings. Your new policy will now be displayed in the list.

Mute Timing

In case you do not want to receive messages during a recurring time period, you can add a mute timing to Grafana. You can set up a mute timing in the Notification policies section.

  1. Select + Add mute timing below the notification policies.

  2. Choose a name for the mute timing.

  3. Specify the time during which notifications should not be forwarded.

    • Time has to be given in UTC time and formatted as HH:MM. Use 06:00 instead of 6:00 to avoid an error in Grafana.
  4. You can combine several time intervals into one mute timing by clicking on the + Add another time interval button at the end of the page.

  5. Click Submit to save your settings.

  6. To apply the mute timing to a notification policy, click Edit on the right side of the notification policy, and then select the desired mute timing from the drop-down menu at the bottom of the policy. Click on Save Policy to apply the change.


Silence

You can also add silences for a specific time frame and labels, in case you only want to mute alerts once. To add a silence, switch to the Silences section, next to Notification policies.

  1. Click on + Add Silence.

  2. Specify the beginning for the silence and its duration.

  3. Select the labels and their values you want silenced.

  4. If you need, you can add a comment to the silence.

  5. Click the Submit button at the bottom of the page.


What are the limitations?

It can be complicated to select and manipulate the desired values to create the correct function for your application. Grafana cannot differentiate between data points of the same source. For example, you want to make a temperature threshold based on a single sensor. If your query selects the last three values and two of them are above the threshold, Grafana will fire two alerts which it cannot tell apart. This results in errors. You have to configure the rule to reduce the selected values to only one per source to avoid this. It can be complicated to create such a specific rule with this limitation, and it requires some testing.

Another thing to keep in mind is that the alerts can only work with data from the database. It also does not work with the machine status; these values only exist in a raw, unprocessed form in TimescaleDB and are not processed through an API like process values.

Where to get more information?

2.3 - Device & Container Infrastructure

This page describes the device and container infrastructure features of the United Manufacturing Hub.

2.3.1 - Provisioning

Discover how to provision both the Data and the Device & Container Infrastructures.

The Management Console simplifies the deployment of the Data Infrastructure on any existing system. You can also provision the entire Device & Container Infrastructure, with a little manual interaction.

When should I use it?

Whether you have a bare-metal server, an edge device, or a virtual machine, you can easily provision the whole United Manufacturing Hub. Choose to deploy only the Data Infrastructure on an existing OS, or provision the entire Device & Container Infrastructure, OS included.

What can I do with it?

You can leverage our custom iPXE bootstrapping process to install the Flatcar operating system, along with the Device & Container Infrastructure and the Data Infrastructure.

If you already have an operating system installed, you can use the Management Console to provision the Data Infrastructure on top of it. You can also choose to use an existing UMH installation and only connect it to the Management Console.

Provisioning from the Management Console

How can I use it?

If you need to install the operating system from scratch, you can follow the Flatcar Installation guide, which will help you to deploy the default version of the United Manufacturing Hub.

Contact our Sales Team to get help on customizing the installation process in order to fit your enterprise needs.

If you already have an operating system installed, you can follow the Getting Started guide to provision the Data Infrastructure and setup the Management Companion agent on your system.

What are the limitations?

  • Provisioning the Device & Container Infrastructure requires manual interaction and is not yet available from the Management Console.
  • ARM systems are not supported.

Where to get more information?

2.3.2 - Monitoring & Management

Monitor and manage both the Data and the Device & Container Infrastructures using the Management Console.

The Management Console helps you monitor and manage the Data Infrastructure and the Device & Container Infrastructure.

When should I use it?

Once the initial deployment of the United Manufacturing Hub is complete, you can monitor and manage it using the Management Console. If you have not deployed yet, navigate to the Get Started! guide.

What can I do with it?

You can monitor the statuses of the following items using the Management Console:

  • Modules: A Module refers to a grouped set of related Kubernetes components like pods, statefulsets, and services. It provides a way to monitor and manage these components as a single unit.
  • System:
    • Resource Utilization: CPU, RAM, and disk usage.
    • OS information: the used operating system, kernel version, and instruction set architecture.
  • Datastream: the rate of Kafka/TimescaleDB messages per second, the health of both connections and data sources.
  • Kubernetes: the number of error events and the deployed management companion’s and UMH’s versions.

In addition, you can check the topic structure used by data sources and the corresponding payloads.

Moreover, you can create a new connection and initialize it to deploy a data source.

How can I use it?

After logging in, the Instance Dashboard page shows the Overview tab. You can click and open each status on this tab.

Instance Overview

The Connection Management tab shows the status of all the instance’s connections and their associated data sources. Moreover, you can create a new connection, as well as initialize them. Read more about the Connection Management in the Connectivity section.

Connection Management

The Tag Browser provides a comprehensive view of the tag structure, allowing automation engineers to manage and navigate through all their tags without concerning themselves with underlying technical complexities, such as topics, keys or payload structures.

Tags typically represent variables associated with devices in an ISA-95 model. For instance, a tag could represent a temperature reading from a specific sensor or a status indication from a machine component. These tags are transported through various technical methods across the Unified Namespace (UNS) into the database. This includes organizing them within a folder structure or embedding them as JSON objects within the message payload. Tags can be sent to the same topic or split across various sub-topics. Due to the nature of MQTT and Kafka, the topics may differ, but the following formula applies:

MQTT Topic = Kafka topic + Kafka Key

The Kafka topic and key depend on the configured merge point; read more about it here.

Read more about the Tag Browser in the Unified Namespace section.

Tag Browser

What are the limitations?

Presently, removing a UMH instance from the Management Console is not supported. After overwriting an instance, the old one will display an offline status.

Where to get more information?

2.3.3 - Layered Scaling

Efficiently scale your United Manufacturing Hub deployment across edge devices and servers using Layered Scaling.

Layered Scaling is an architectural approach in the United Manufacturing Hub that enables efficient scaling of your deployment across edge devices and servers. It is part of the Plant centric infrastructure: by dividing the processing workload across multiple layers or tiers, each with a specific set of responsibilities, Layered Scaling allows for better management of resources, improved performance, and easier deployment of software components. Layered Scaling follows the standard IoT infrastructure, additionally connecting many IoT devices, typically via MQTT.

When should I use it?

Layered Scaling is ideal when:

  • You need to process and exchange data close to the source for latency reasons and independence from internet and network outages. For example, if you are taking pictures locally, analyzing them using machine learning, and then scrapping the product if the quality is poor. In this case, you don’t want the machine to be idle if something happens in the network. Also, it would not be acceptable for a message to arrive a few hundred milliseconds later, as the process is quicker than that.
  • High-frequency data that you do not want to send to and store on the “higher” instance, as it can put unnecessary stress on those instances; an edge device takes care of it instead. For example, you are taking and processing images (e.g., for quality reasons) or using an accelerometer and microphone for predictive maintenance on the machine and do not want to send data streams at 20 kHz (20,000 times per second) to the next instance.
  • Organizational reasons. For the OT person, it might be better to configure the data contextualization using Node-RED directly at the production machine. They could experiment with it, configure it without endangering other machines, and see immediate results (e.g., when they move the position of a sensor). If the instance is “somewhere in IT,” they may feel they do not have control over it anymore and that it is not their system.

What can I do with it?

With Layered Scaling in the United Manufacturing Hub, you can:

  • Deploy minimized versions of the Helm Chart on edge devices, focusing on specific features required for that environment (e.g., without the Historian and Analytics features enabled, but with the IFM retrofitting feature using sensorconnect, with the barcodereader retrofit feature using barcodereader, or with the data connectivity via Node-RED feature enabled).
  • Seamlessly communicate between edge devices, on-premise servers, and cloud instances using the kafka-bridge microservice, allowing data to be buffered in between in case the internet or network connection drops.
  • Allow plant-overarching analysis / benchmarking, multi-plant KPIs, connections to enterprise IT, etc. We typically recommend sending only data processed by our API factoryinsight.

How can I use it?

To implement Layered Scaling in the United Manufacturing Hub:

  1. Deploy a minimized version of the Helm Chart on edge devices, tailored to the specific features required for that environment. You can either install the whole version using Flatcar and then disable the functionalities you do not need, or use the Management Console. If the feature is not available in the Management Console, you can ask nicely in the Discord and we can provide you with a token to enter during the Flatcar installation, so that your edge devices are pre-configured depending on your needs (incl. demilitarized zones, multiple networks, etc.).
  2. Deploy the full Helm Chart with all features enabled on a central instance, such as a server.
  3. Configure the Kafka Bridge microservice to transmit data from the edge devices to the central instance for further processing and analysis.

For MQTT connections, you can simply connect external devices via MQTT, and the data will end up in Kafka directly. To connect on-premise servers with the cloud (plant-overarching architecture), you can use kafka-bridge or write a service in Benthos or Node-RED that regularly fetches data from factoryinsight and pushes it into your cloud instance.

What are the limitations?

  • Be aware that each device increases the complexity of the entire system. We recommend using the Management Console to manage them centrally.

Because Kafka is used to reliably transmit messages from the edge devices to the server, and it struggles with devices repeatedly going offline and online again, Ethernet connections should be used. Also, the total number of edge devices should not “escalate”. If you have a lot of edge devices (e.g., you want to connect each PLC), we recommend connecting them via MQTT to an instance of the UMH instead.

Where to get more information?

2.3.4 - Upgrading

Discover how to keep your UMH instances up-to-date.

Upgrading is a vital aspect of maintaining your United Manufacturing Hub (UMH) instance. This feature ensures that your UMH environment stays current, secure, and optimized with the latest enhancements. Explore the details below to make the most of the upgrading capabilities.

When Should I Use It?

Upgrade your UMH instance whenever a new version is released to access the latest features, improvements, and security enhancements. Regular upgrades are recommended for a seamless experience.

What Can I Do With It?

Enhance your UMH instance in the following ways:

  • Keep it up-to-date with the latest features and improvements.
  • Enhance security and performance.
  • Take advantage of new functionalities and optimizations introduced in each release.

How Can I Use It?

To upgrade your UMH instance, follow the detailed instructions provided in the Upgrading Guide.

What Are The Limitations?

  • As of now, the upgrade process for the UMH stack is not integrated into the Management Console and must be performed manually.
  • Ensure compatibility with the recommended prerequisites before initiating an upgrade.

3 - Concepts

The Concepts section helps you learn about the parts of the United Manufacturing Hub system, and helps you obtain a deeper understanding of how it works.

3.1 - Security

3.1.1 - Management Console

Concepts related to the security of the Management Console.

The web-based nature of the Management Console means that it is exposed to the same security risks as any other web application. This section describes the measures that we adopt to mitigate these risks.

Encrypted Communication

The Management Console is served over HTTPS, which means that all communication between the browser and the server is encrypted. This prevents attackers from eavesdropping on the communication and stealing sensitive information such as passwords and session cookies.

Cyphered Messages

This feature is currently in development and is subject to change.

In addition to the standard TLS encryption provided by HTTPS, we also provide an additional layer of encryption for the messages exchanged between the Management Console and your UMH instance. Every action that you perform on the Management Console, such as creating a new data source, and every piece of information that you retrieve, such as the messages in the Unified Namespace, is encrypted using a secret key that is only known to you and your UMH instance. This ensures that no one, not even us, can see, read or reverse engineer the content of these messages.

The process we use (which is now patent pending) is simple yet effective:

  1. When you create a new user on the Management Console, we generate a new private key and encrypt it using your password. This means that only you can decrypt it (a generic sketch of this pattern follows the list).
  2. The encrypted private key and your hashed password are stored in our database.
  3. When you log in to the Management Console, the encrypted private key associated with your user is downloaded to your browser and decrypted using your password. This ensures that your password is never sent to our server, and that the private key is only available to you.
  4. When you add a new UMH instance to the Management Console, it generates a token that the Management Companion (aka your instance) will use to authenticate itself. This token works the same way as your user password: it is used to encrypt a private key that only the UMH instance can decrypt.
  5. The instance encrypted private key and the hashed token are stored in our database. A relationship is also created between the user and the instance.
  6. All the messages exchanged between the Management Console and the UMH instance are encrypted using the private keys, and then encrypted again using the TLS encryption provided by HTTPS.
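
Purely as a generic illustration of the password-encrypted-key pattern from step 1, and explicitly not the Management Console’s actual implementation, the same idea can be sketched with standard OpenSSL tooling (the RSA algorithm choice and file names are assumptions):

# Generate a fresh private key (illustrative)
openssl genpkey -algorithm RSA -out private.pem

# Encrypt it with a key derived from the user's password (the command prompts for the password)
openssl enc -aes-256-cbc -pbkdf2 -salt -in private.pem -out private.pem.enc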

The only drawback to this approach is that, if you forget your password, we won’t be able to recover your private key. This means that you will have to create a new user and reconfigure all your UMH instances. But your data will still be safe and secure.

However, even though we are unable to read any private key, there is some information that we can inevitably see:

  • IP addresses of the devices using the Management Console and of the UMH instances that they are connected to
  • The time at which the devices connect to the Management Console
  • Amount of data exchanged between the devices and the Management Console (but not its content)

4 - Data Model (v1)

This page describes the data model of the UMH stack - from the message payloads up to database tables.

flowchart LR
    AP[Automation Pyramid] --> C_d
    C_d --> UN_d
    UN_d --> H_d
    H_d --> DL[Data Lake]
    subgraph UNS[ ]
        subgraph C_d[ ]
            C_d_infoX["Connectivity\n(e.g., OPC UA)"]
            C_d_info["Time-Series data\nUnstructured / semi-structured data\nRelational data (master, operational, batch)"]
            C_d_infoX --- C_d_info
        end
        subgraph UN_d[ ]
            UN_d_infoX["Unified Namespace\n(e.g., MQTT, Kafka)"]
            UN_d_info["umh/v1/enterprise/site/area/productionLine/workCell/originID/_schema/schema_specific"]
            UN_d_infoX --- UN_d_info
        end
        subgraph H_d[ ]
            H_d_infoX["Historian\n(e.g., TimescaleDB)"]
            H_d_info[Table: asset\nTable: tag\nTable: tag_string]
            H_d_infoX --- H_d_info
        end
    end
    click C_d_infoX href "../features/connectivity"
    click UN_d_infoX href "./messages"
    click H_d_infoX href "./database"
    click C_d_info href "../features/connectivity"
    click UN_d_info href "./messages"
    click H_d_info href "./database"
The Data Infrastructure of the UMH consists of three components: Connectivity, Unified Namespace, and Historian (see also Architecture). Each of the components has its own standards and best practices, so a consistent data model across multiple building blocks needs to combine all of them.

If you would like to learn more about our data model & ADRs, check out our learn article.

Connectivity

Incoming data is often unstructured; therefore, our standard allows either conformant data in our _historian schema, or any kind of data in any other schema.

Our key considerations were:

  1. Event-driven architecture: We only look at changes, reducing network and system load
  2. Ease of use: We allow any data in, allowing OT & IT to process it as they wish

Unified Namespace

The UNS employs MQTT and Kafka in a hybrid approach, utilizing MQTT for efficient data collection and Kafka for robust data processing. The UNS is designed to be reliable, scalable, and maintainable, facilitating real-time data processing and seamless integration or removal of system components.

These elements are the foundation for our data model in UNS:

  1. Incoming data based on OT standards: Data needs to be contextualized here not by IT people, but by OT people. They want to model their data (topic hierarchy and payloads) according to ISA-95, Weihenstephaner Standard, Omron PackML, Euromap 84, or similar standards, and need, e.g., JSON as payload to better understand it.

  2. Hybrid Architecture: Combining MQTT’s user-friendliness and widespread adoption in Operational Technology (OT) with Kafka’s advanced processing capabilities. Topics and payloads cannot be fully interchanged between them due to limitations in MQTT and Kafka, so some trade-offs need to be made.

  3. Processed data based on IT standards: Data is sent to IT systems after processing and needs to adhere to standards: the data inside the UNS needs to be easily processable, either for contextualization or for storing it in a Historian or Data Lake.

Historian

We choose TimescaleDB as our primary database.

Key elements we considered:

  1. IT best practice: we use SQL and Postgres for easy compatibility, and therefore TimescaleDB
  2. Straightforward queries: we aim for easy SQL queries, so that everyone can build dashboards
  3. Performance: because of the time-series nature and typical workload, the database layout might not be fully optimized for usability, but we made some trade-offs that allow it to store millions of data points per second

4.1 - Unified Namespace

Describes all available _schema and their structure

Topic structure

flowchart LR
    umh --> v1
    v1 --> enterprise
    enterprise -->|Optional| site
    site -->|Optional| area
    area -->|Optional| productionLine
    productionLine -->|Optional| workCell
    workCell -->|Optional| originID
    originID -->|Optional| _schema["_schema (Ex: _historian, _analytics, _local)"]
    _schema --> _opt["Schema dependent context"]
    classDef mqtt fill:#00dd00,stroke:#333,stroke-width:4px;
    class umh,v1,enterprise,_schema mqtt;
    classDef optional fill:#77aa77,stroke:#333,stroke-width:4px;
    class site,area,productionLine,workCell,originID optional;
    enterprise -.-> _schema
    site -.-> _schema
    area -.-> _schema
    productionLine -.-> _schema
    workCell -.-> _schema
    click _schema href "#_schema"
    click umh href "#versioning-prefix"
    click v1 href "#versioning-prefix"

Versioning Prefix

The umh/v1 at the beginning is obligatory. It ensures that the structure can evolve over time without causing confusion or compatibility issues.

Topic Names & Rules

All parts of this structure, except for enterprise and _schema, are optional. They can consist of any letters (a-z, A-Z), numbers (0-9) and the symbols - and _. Be careful to avoid ., +, # or /, as these are special symbols in Kafka or MQTT. Ensure that your topic always begins with umh/v1, otherwise our system will ignore your messages.

Be aware that our topics are case-sensitive; therefore, umh.v1.ACMEIncorperated is not the same as umh.v1.acmeincorperated.

Throughout this documentation we will use the MQTT syntax for topics (umh/v1); the corresponding Kafka topic names are the same, but with / replaced by .
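
As a small illustration (the enterprise and site names below are made-up examples), the Kafka topic name can be derived from an MQTT topic with a simple string replacement:

# Sketch: derive the Kafka topic name from an MQTT topic.
# The topic below is an example value, not a required name.
mqtt_topic = "umh/v1/acme/cologne/cnc-cutter/_historian"
kafka_topic = mqtt_topic.replace("/", ".")
print(kafka_topic)  # umh.v1.acme.cologne.cnc-cutter._historian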

Topic validator
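
As a rough, illustrative sketch (not the official validator), a topic can be checked against the rules above with a regular expression. The segment pattern and the limit of five optional levels between enterprise and _schema are assumptions derived from the rules in this section:

import re

# Illustrative sketch: umh/v1 prefix, one required enterprise level,
# up to five optional levels, then a _schema segment (starting with an
# underscore), followed by optional schema-dependent context.
SEGMENT = r"[A-Za-z0-9_\-]+"
TOPIC_RE = re.compile(rf"^umh/v1/{SEGMENT}(/{SEGMENT}){{0,5}}/_{SEGMENT}(/.*)?$")

def is_valid_topic(topic: str) -> bool:
    return TOPIC_RE.match(topic) is not None

print(is_valid_topic("umh/v1/acme/cologne/cnc-cutter/_historian/head/pos"))  # True
print(is_valid_topic("umh/v2/acme/_historian"))                              # False (wrong prefix)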



OriginID

This part identifies where the data is coming from. Good options include the sender's MAC address, hostname, or container ID. Examples for originID: 00-80-41-ae-fd-7e, E588974, e5f484a1791d

_schema

_historian

Messages tagged with _historian will be stored in our database and are available via Grafana.

_analytics

Messages tagged with _analytics will be processed by our analytics pipeline. They are used for the automatic calculation of KPIs and other statistics.

_local

This schema may contain any data that you do not want to bridge to other nodes (it will, however, still be bridged between MQTT and Kafka on its own node).

For example, this could be data you want to pre-process on your local node and then put into another _schema. This data does not necessarily have to be JSON.

Other

Any other schema that starts with an underscore (for example: _images) will be forwarded by both the MQTT-Kafka and Kafka-Kafka bridges, but never processed or stored.

This data does not necessarily have to be JSON.

Converting other data models

Most data models already follow a location-based naming structure.

KKS Identification System for Power Stations

KKS (Kraftwerk-Kennzeichensystem) is a standardized system for identifying and classifying equipment and systems in power plants, particularly in German-speaking countries.

In a flow diagram, the designation is: 1 2LAC03 CT002 QT12

  • Level 0 Classification:

    Block 1 of a power plant site is designated as 1 in this level.

  • Level 1 Classification:

    The designation for the 3rd feedwater pump in the 2nd steam-water circuit is 2LAC03. This means:

    • Main group 2L: 2nd steam, water, gas circuit
    • Subgroup (2L)A: Feedwater system
    • Subgroup (2LA)C: Feedwater pump system
    • Counter (2LAC)03: third feedwater pump system

  • Level 2 Classification:

    For the 2nd temperature measurement, the designation CT002 is used. This means:

    • Main group C: Direct measurement
    • Subgroup (C)T: Temperature measurement
    • Counter (CT)002: second temperature measurement

  • Level 3 Classification:

    For the 12th immersion sleeve as a sensor protection, the designation QT12 is used. This means:

    • Main group Q: Control technology equipment
    • Subgroup (Q)T: Protective tubes and immersion sleeves as sensor protection
    • Counter (QT)12: twelfth protective tube or immersion sleeve

The above example refers to the 12th immersion sleeve at the 2nd temperature measurement of the 3rd feedwater pump in block 1 of a power plant site. Translating this into our data model could result in: umh/v1/nuclearCo/1/2LAC03/CT002/QT12/_schema

Where:

  • nuclearCo: Represents the enterprise or the name of the nuclear company.
  • 1: Maps to the site, corresponding to Block 1 of the power plant as per the KKS number.
  • 2LAC03: Fits into the area, representing the 3rd feedwater pump in the 2nd steam-water circuit.
  • CT002: Aligns with productionLine, indicating the 2nd temperature measurement in this context.
  • QT12: Serves as the workCell or originID, denoting the 12th immersion sleeve.
  • _schema: Placeholder for the specific data schema being applied.

4.1.1 - _analytics

Messages for our analytics feature

Topic structure

flowchart LR
    topicStart["umh.v1..."] --> _analytics
    _analytics --> wo[work-order]
    wo --> wo-create[create]
    wo --> wo-start[start]
    wo --> wo-stop[stop]
    _analytics --> pt[product-type]
    pt --> pt-create[create]
    _analytics --> p[product]
    p --> p-add[add]
    p --> p-set-bad-quantity[setBadQuantity]
    _analytics --> s[shift]
    s --> s-add[add]
    s --> s-delete[delete]
    _analytics --> st[state]
    st --> st-add[add]
    st --> st-overwrite[overwrite]
    classDef mqtt fill:#00dd00,stroke:#333,stroke-width:4px;
    class umh,v1,enterprise,_analytics mqtt;
    classDef type fill:#00ffbb,stroke:#333,stroke-width:4px;
    class wo,pt,p,s,st type;
    classDef func fill:#8899dd,stroke:#333,stroke-width:4px;
    class wo-create,wo-start,wo-stop,pt-create,p-add,p-set-bad-quantity,s-add,s-delete,st-add,st-overwrite func;
    click topicStart href "../"
    click wo href "#work-order"
    click pt href "#product-type"
    click p href "#product"
    click s href "#shift"
    click wo-create href "#create"
    click wo-start href "#start"
    click wo-stop href "#stop"
    click pt-create href "#create-1"
    click p-add href "#add"
    click p-set-bad-quantity href "#set-bad-quantity"
    click s-add href "#add-1"
    click s-delete href "#delete"
    click st-add href "#add-2"
    click st-overwrite href "#overwrite"

Work Order

Create

Use this topic to create a new work order.

This replaces the addOrder message from our v0 data model.

Fields

  • external_work_order_id (string): The work order ID from your MES or ERP system.
  • product (object): The product being produced.
    • external_product_id (string): The product ID from your MES or ERP system.
    • cycle_time_ms (number) (optional): The cycle time for the product in milliseconds. Only include this if the product has not been previously created.
  • quantity (number): The quantity of the product to be produced.
  • status (number) (optional): The status of the work order. Defaults to 0 (planned).
    • 0 - Planned
    • 1 - In progress
    • 2 - Completed
  • start_time (string) (optional): The start time of the work order. Will be set by the corresponding start message if not provided.
  • end_time (string) (optional): The end time of the work order. Will be set by the corresponding stop message if not provided.

Example

{
  "external_work_order_id": "1234",
  "product": {
    "external_product_id": "5678"
  },
  "quantity": 100,
  "status": 0
}

Start

Use this topic to start a previously created work order.

Each work order can only be started once. Only work orders with status 0 (planned) and no start time can be started.

Fields

  • external_work_order_id (string): The work order ID from your MES or ERP system.
  • start_time (string): The start time of the work order.

Example

{
  "external_work_order_id": "1234",
  "start_time": "2021-01-01T12:00:00Z"
}

Stop

Use this topic to stop a previously started work order.

Stopping an already stopped work order will have no effect. Only work orders with status 1 (in progress) and no end time can be stopped.

Fields

  • external_work_order_id (string): The work order ID from your MES or ERP system.
  • end_time (string): The end time of the work order.

Example

{
  "external_work_order_id": "1234",
  "end_time": "2021-01-01T12:00:00Z"
}
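
Putting the three messages together, a complete work order lifecycle could be published as follows. This is a minimal sketch only: it assumes the paho-mqtt client (1.x API), a broker on localhost, and made-up enterprise, site, area and order values.

import json
import paho.mqtt.client as mqtt

# Example topic root; "acme", "cologne" and "cnc-cutter" are made-up names.
BASE = "umh/v1/acme/cologne/cnc-cutter/_analytics/work-order"

client = mqtt.Client()
client.connect("localhost", 1883)

# 1. Create the work order (status 0 = planned)
client.publish(BASE + "/create", json.dumps({
    "external_work_order_id": "1234",
    "product": {"external_product_id": "5678"},
    "quantity": 100,
    "status": 0
}))

# 2. Start it
client.publish(BASE + "/start", json.dumps({
    "external_work_order_id": "1234",
    "start_time": "2021-01-01T12:00:00Z"
}))

# 3. Stop it once production has finished
client.publish(BASE + "/stop", json.dumps({
    "external_work_order_id": "1234",
    "end_time": "2021-01-01T18:00:00Z"
}))

client.disconnect()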

Product Type

Create

Announce a new product type.

We recommend using the work-order/create message to create products on the fly.

Fields

  • external_product_type_id (string): The product type ID from your MES or ERP system.
  • cycle_time_ms (number) (optional): The cycle time for the product in milliseconds.

Example

{
  "external_product_type_id": "5678",
  "cycle_time_ms": 60
}

Product

Add

Communicates the completion of part of a work order.

Fields

  • external_product_type_id (string): The product type ID from your MES or ERP system.
  • product_batch_id (string) (optional): Unique identifier for the product. This could for example be a barcode or serial number.
  • start_time (string): The start time of the product.
  • end_time (string): The end time of the product.
  • quantity (number): The quantity of the product produced.
  • bad_quantity (number) (optional): The quantity of bad products produced.

Example

{
  "external_product_type_id": "5678",
  "product_batch_id": "1234",
  "start_time": "2021-01-01T12:00:00Z",
  "end_time": "2021-01-01T12:01:00Z",
  "quantity": 100,
  "bad_quantity": 5
}

Set Bad Quantity

Modify the quantity of bad products produced.

Fields

  • external_product_type_id (string): The product type ID from your MES or ERP system.
  • end_time (string): The end time of the product, used to identify an existing product.
  • bad_quantity (number): The new quantity of bad products produced.

Example

{
  "external_product_type_id": "5678",
  "end_time": "2021-01-01T12:01:00Z",
  "bad_quantity": 10
}

Shift

Add

Announce a new shift.

Fields

  • start_time (string): The start time of the shift.
  • end_time (string): The end time of the shift.

Example

{
  "start_time": "2021-01-01T06:00:00Z",
  "end_time": "2021-01-01T14:00:00Z"
}

Delete

Delete a previously created shift.

Fields

  • start_time (string): The start time of the shift.

Example

{
  "start_time": "2021-01-01T06:00:00Z"
}

State

Add

Announce a state change.

Check out the state documentation for a list of available states.

Fields

  • state (number): The state of the machine.
  • start_time (string): The start time of the state.

Example

{
  "state": 10000,
  "start_time": "2021-01-01T12:00:00Z"
}

Overwrite

Overwrite one or more states between two times.

Fields

  • state (number): The state of the machine.
  • start_time (string): The start time of the state.
  • end_time (string): The end time of the state.
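
Example

An illustrative example (the values are made up; the format mirrors the other state messages):

{
  "state": 30000,
  "start_time": "2021-01-01T12:00:00Z",
  "end_time": "2021-01-01T13:00:00Z"
}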

4.1.2 - _historian

Messages for our historian feature

Topic structure

flowchart LR
    topicStart["umh.v1..."] --> _historian
    _historian --> |Optional| tagName
    _historian --> |Optional| tagGroup
    tagGroup --> tagName
    classDef mqtt fill:#00dd00,stroke:#333,stroke-width:4px;
    class umh,v1,enterprise,_historian mqtt;
    classDef optional fill:#77aa77,stroke:#333,stroke-width:4px;
    class site,area,productionLine,workCell,originID,tagGroup,tagName optional;
    tagGroup -.-> |1-N| tagGroup
    click topicStart href "../"

Message structure

Our _historian messages are JSON containing a unix timestamp as milliseconds (timestamp_ms) and one or more key value pairs. Each key value pair will be inserted at the given timestamp into the database.

Show JSON Schema

Examples:

{
    "timestamp_ms": 1702286893,
    "temperature_c": 154.1
}
{
    "timestamp_ms": 1702286893,
    "temperature_c": 154.1,
    "pressure_bar": 5,
    "notes": "sensor 1 is faulty"
}

If you use a boolean value, it will be interpreted as a number.
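
A minimal publishing sketch for such a message, assuming the paho-mqtt client (1.x API), a broker on localhost, and made-up asset names:

import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)

# "acme", "cologne" and "cnc-cutter" are example names for the asset levels.
client.publish(
    "umh/v1/acme/cologne/cnc-cutter/_historian",
    json.dumps({
        "timestamp_ms": int(time.time() * 1000),  # unix timestamp in milliseconds
        "temperature_c": 154.1,
        "pressure_bar": 5
    })
)
client.disconnect()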

Tag grouping

Sometimes it makes sense to further group data together. In the following example we have a CNC cutter emitting data about its head position. If we want to group this for easier access in Grafana, we could use two types of grouping.

  1. Using Tags / Tag Groups in the Topic: This will result in 3 new database entries, grouped by head & pos.

    Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head/pos

    {
      "timestamp_ms": 1670001234567,
      "x": 12.5,
      "y": 7.3,
      "z": 3.2
    }

    This method allows very easy monitoring of the data in tools like our Management Console or MQTT Explorer, as each new topic level (/) will be displayed as a tree.
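
  2. Grouping in the JSON payload: Tags can also be grouped inside the message itself by nesting key-value pairs; the _historian database section below shows how such a payload is flattened into head_pos_x, head_pos_y, … when stored.

    Topic: umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head

    {
      "timestamp_ms": 1670001234567,
      "pos": {
        "x": 12.5,
        "y": 7.3,
        "z": 3.2
      }
    }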

What’s next?

Find out how the data is stored and can be retrieved from our database.

4.2 - Historian

Describes databases of all available _schema

4.2.1 - Analytics

How _analytics data is stored and can be queried
erDiagram asset { int id PK "SERIAL PRIMARY KEY" text enterprise "NOT NULL" text site "DEFAULT '' NOT NULL" text area "DEFAULT '' NOT NULL" text line "DEFAULT '' NOT NULL" text workcell "DEFAULT '' NOT NULL" text origin_id "DEFAULT '' NOT NULL" } product_type { product_type_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY external_product_type_id TEXT NOT NULL cycle_time_ms INTEGER NOT NULL asset_id INTEGER REFERENCES asset(id) _ CONSTRAINT "external_product_asset_uniq UNIQUE (external_product_type_id, asset_id)" _ CHECK "(cycle_time_ms > 0)" } work_order { work_order_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY external_work_order_id TEXT NOT NULL asset_id INTEGER NOT NULL REFERENCES asset(id) product_type_id INTEGER NOT NULL REFERENCES product_type(product_type_id) quantity INTEGER NOT NULL status INTEGER "NOT NULL DEFAULT 0, -- 0: planned, 1: in progress, 2: completed" start_time TIMESTAMPTZ end_time TIMESTAMPTZ _ CONSTRAINT "asset_workorder_uniq UNIQUE (asset_id, external_work_order_id)" _ CHECK "(quantity > 0)" _ CHECK "(status BETWEEN 0 AND 2)" _ UNIQUE "(asset_id, start_time)" _ EXCLUDE "USING gist (asset_id WITH =, tstzrange(start_time, end_time) WITH &&) WHERE (start_time IS NOT NULL AND end_time IS NOT NULL)" } product { product_type_id INTEGER REFERENCES product_type(product_type_id) product_batch_id TEXT asset_id INTEGER REFERENCES asset(id) start_time TIMESTAMPTZ end_time TIMESTAMPTZ NOT NULL quantity INTEGER NOT NULL bad_quantity INTEGER "DEFAULT 0" _ CHECK "(quantity > 0)" _ CHECK "(bad_quantity >= 0)" _ CHECK "(bad_quantity <= quantity)" _ CHECK "(start_time <= end_time)" _ UNIQUE "(asset_id, end_time, product_batch_id)" _ HYPERTABLE "create_hypertable('product', 'end_time', if_not_exists => TRUE)" _ INDEX "INDEX idx_products_asset_end_time ON product(asset_id, end_time DESC)" } shift { shift_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY asset_id INTEGER REFERENCES asset(id) start_time TIMESTAMPTZ NOT NULL end_time TIMESTAMPTZ NOT NULL _ CONSTRAINT "shift_start_asset_uniq UNIQUE (start_time, asset_id)" _ CHECK "(start_time < end_time)" _ EXCLUDE "USING gist (asset_id WITH =, tstzrange(start_time, end_time) WITH &&)" } state { asset_id INTEGER REFERENCES asset(id) start_time TIMESTAMPTZ NOT NULL state INT NOT NULL _ CHECK "(state >= 0)" _ UNIQUE "(start_time, asset_id)" _ HYPERTABLE "create_hypertable('states', 'start_time', if_not_exists => TRUE)" _ INDEX "INDEX idx_states_asset_start_time ON states(asset_id, start_time DESC)" } asset ||--o{ work_order : "id" asset ||--o{ product_type : "id" asset ||--o{ product : "id" asset ||--o{ shift : "id" asset ||--o{ state : "id" work_order ||--o{ product_type : "product_type_id" product ||--o{ product_type : "product_type_id"

asset

This table holds all assets. An asset for us is the unique combination of enterprise, site, area, line, workcell & origin_id.

All keys except for id and enterprise are optional. In our example we have just started our CNC cutter, so its unique asset will be inserted into the database. The table already contains some data we inserted before, so the new asset will be inserted with id: 8

id | enterprise | site | area | line | workcell | origin_id
1 | acme-corporation | | | | |
2 | acme-corporation | new-york | | | |
3 | acme-corporation | london | north | assembly | |
4 | stark-industries | berlin | south | fabrication | cell-a1 | 3002
5 | stark-industries | tokyo | east | testing | cell-b3 | 3005
6 | stark-industries | paris | west | packaging | cell-c2 | 3009
7 | umh | cologne | office | dev | server1 | sensor0
8 | cuttingincoperated | cologne | cnc-cutter | | |

work_order

This table holds all work orders. A work order is a unique combination of external_work_order_id and asset_id.

work_order_id | external_work_order_id | asset_id | product_type_id | quantity | status | start_time | end_time
1 | #2475 | 8 | 1 | 100 | 0 | 2022-01-01T08:00:00Z | 2022-01-01T18:00:00Z

product_type

This table holds all product types. A product type is a unique combination of external_product_type_id and asset_id.

product_type_id | external_product_type_id | cycleTime | asset_id
1 | desk-leg-0112 | 10.0 | 8

product

This table holds all products.

product_type_id | product_batch_id | asset_id | start_time | end_time | quantity | bad_quantity
1 | batch-n113 | 8 | 2022-01-01T08:00:00Z | 2022-01-01T08:10:00Z | 100 | 7

shift

This table holds all shifts. A shift is a unique combination of asset_id and start_time.

shiftId | asset_id | start_time | end_time
1 | 8 | 2022-01-01T08:00:00Z | 2022-01-01T19:00:00Z

state

This table holds all states. A state is a unique combination of asset_id and start_time.

stateId | asset_id | start_time | state
1 | 8 | 2022-01-01T08:00:00Z | 20000
2 | 8 | 2022-01-01T08:10:00Z | 10000

4.2.2 - Historian

How _historian data is stored and can be queried

Our database for the umh.v1 _historian data model currently consists of three tables, which are used for the _historian schema. We chose this layout to enable easy lookups based on the asset features, while maintaining separation between data and names. The split into tag & tag_string prevents accidental lookups of the wrong datatype, which might break queries such as aggregations, averages, …

erDiagram asset { int id PK "SERIAL PRIMARY KEY" text enterprise "NOT NULL" text site "DEFAULT '' NOT NULL" text area "DEFAULT '' NOT NULL" text line "DEFAULT '' NOT NULL" text workcell "DEFAULT '' NOT NULL" text origin_id "DEFAULT '' NOT NULL" } tag { timestamptz timestamp "NOT NULL" text name "NOT NULL" text origin "NOT NULL" int asset_id FK "REFERENCES asset(id) NOT NULL" real value } tag_string { timestamptz timestamp "NOT NULL" text name "NOT NULL" text origin "NOT NULL" int asset_id FK "REFERENCES asset(id) NOT NULL" text value } asset ||--o{ tag : "id" asset ||--o{ tag_string : "id"

asset

This table holds all assets. An asset for us is the unique combination of enterprise, site, area, line, workcell & origin_id.

All keys except for id and enterprise are optional. In our example we have just started our CNC cutter, so its unique asset will be inserted into the database. The table already contains some data we inserted before, so the new asset will be inserted with id: 8

id | enterprise | site | area | line | workcell | origin_id
1 | acme-corporation | | | | |
2 | acme-corporation | new-york | | | |
3 | acme-corporation | london | north | assembly | |
4 | stark-industries | berlin | south | fabrication | cell-a1 | 3002
5 | stark-industries | tokyo | east | testing | cell-b3 | 3005
6 | stark-industries | paris | west | packaging | cell-c2 | 3009
7 | umh | cologne | office | dev | server1 | sensor0
8 | cuttingincoperated | cologne | cnc-cutter | | |

tag

This table is a TimescaleDB hypertable. Hypertables are optimized to hold large amounts of data that are roughly sorted by time.

In our example we send data to umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian/head using the following JSON:

{
  "timestamp_ms": 1670001234567,
  "pos": {
    "x": 12.5,
    "y": 7.3,
    "z": 3.2
  },
  "temperature": 50.0,
  "collision": false
}

This will result in the following table entries:

timestamp | name | origin | asset_id | value
1670001234567 | head_pos_x | unknown | 8 | 12.5
1670001234567 | head_pos_y | unknown | 8 | 7.3
1670001234567 | head_pos_z | unknown | 8 | 3.2
1670001234567 | head_temperature | unknown | 8 | 50.0
1670001234567 | head_collision | unknown | 8 | 0

The origin is a placeholder for a later feature, and currently defaults to unknown.

tag_string

This table is the same as tag, but for string data. Our CNC cutter also emits the G-code it is currently processing, sent to umh/v1/cuttingincorperated/cologne/cnc-cutter/_historian:

{
 "timestamp_ms": 1670001247568,
  "g-code": "G01 X10 Y10 Z0"
}

Resulting in this entry:

timestamp | name | origin | asset_id | value
1670001247568 | g-code | unknown | 8 | G01 X10 Y10 Z0

Data retrieval

SQL

  1. SSH into your instance.
  2. Open a PSQL session
  3. Select the umh_v2 database using \c umh_v2
  4. Execute any query against our tables.

Example Queries

  • Get the number of rows in your tag table:
    SELECT COUNT(1) FROM tag;
    
  • Get the newest tag row for “umh/v1/umh/cologne”:
    SELECT * FROM tag WHERE asset_id=get_asset_id('umh', 'cologne') ORDER BY timestamp DESC LIMIT 1;
    
    The equivalent query, without using our helper, is:
    SELECT t.* FROM tag t, asset a WHERE t.asset_id=a.id AND a.enterprise='umh' AND a.site='cologne' ORDER BY t.timestamp DESC LIMIT 1;
    

get_asset_id(<enterprise>, <site>, <area>, <line>, <workcell>, <origin_id>) is a helper function to ease retrieval of the asset id.

All fields except <enterprise> are optional, and it will always return the first asset id matching the search.

Grafana

Follow our Data Visualization tutorial to get started.

External access (Datagrip, PGAdmin, …)

  1. SSH into your instance.

  2. Get the password of the kafkatopostgresqlv2 user

    sudo kubectl get secret timescale-post-init-pw --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -o json | jq -r '.data["1_set_passwords.sh"]' | base64 -d | grep "kafkatopostgresqlv2 WITH PASSWORD" | awk -F "'" '{print $2}'
    
  3. Use your preferred tool to connect to our umh_v2 database using the kafkatopostgresqlv2 user and the password from above command.

    (Screenshot: connecting with DataGrip)
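
Alternatively, you can connect programmatically. The following is a minimal sketch only: it assumes the psycopg2 driver, that the database port (5432, the PostgreSQL default) is reachable from your machine, and placeholder values for the host and password:

import psycopg2  # assumption: psycopg2 is installed

conn = psycopg2.connect(
    host="<instance-ip>",   # placeholder: your instance's address
    port=5432,              # assumption: default PostgreSQL port
    dbname="umh_v2",
    user="kafkatopostgresqlv2",
    password="<password>",  # output of the kubectl command above
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(1) FROM tag;")
    print(cur.fetchone()[0])
conn.close()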

4.3 - States

States are the core of the database model. They represent the state of the machine at a given point in time.

States Documentation Index

Introduction

This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.

State Categories

Glossary

  • OEE: Overall Equipment Effectiveness
  • KPI: Key Performance Indicator

Conclusion

This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.

4.3.1 - Active (10000-29999)

These states represent that the asset is actively producing

10000: ProducingAtFullSpeedState

This asset is running at full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

Asset is producing, but not at full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco: Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

Currently not included:

  • WS_Prog_Step: all

4.3.2 - Unknown (30000-59999)

These states represent that the asset is in an unspecified state

30000: UnknownState

Data for that particular asset is not available (e.g. connection to the PLC is disrupted)

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but the reason is unknown at the time.

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.

4.3.3 - Material (60000-99999)

These states represent that the asset has issues regarding materials.

60000: InletJamState

This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack in the inlet condition refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an external fault, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good-flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets, and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.

Examples for the CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information of which material is lacking)
  • PackML/Tobacco: Suspended

4.3.4 - Process(100000-139999)

These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently emptied, e.g. to prevent mold for food products over the long breaks, e.g. the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

This machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

4.3.5 - Operator (140000-159999)

These states represent that the asset is stopped because of operator related issues.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is taking a break.

This is different from a planned shift as it could contribute to performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

4.3.6 - Planning (160000-179999)

These states represent that the asset is stopped because it is planned to be stopped (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

4.3.7 - Technical (180000-229999)

These states represent that the asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defective, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE

5 - Data Model (v0)

This page describes the data model of the UMH stack - from the message payloads up to database tables.

Raw Data

If you have events that you just want to send to the message broker / Unified Namespace without the need for it to be stored, simply send it to the raw topic. This data will not be processed by the UMH stack, but you can use it to build your own data processing pipeline.

ProcessValue Data

If you have data that does not fit in the other topics (such as your PLC tags or sensor data), you can use the processValue topic. It will be saved in the processValue or processValueString table in the database and can be queried using factoryinsight or the umh-datasource Grafana plugin.

Production Data

In a production environment, you should first declare products using addProduct. This allows you to create an order using addOrder. Once you have created an order, send a state message to tell the database that the machine is working (or not working) on the order.

When the machine is ordered to produce a product, send a startOrder message. When the machine has finished producing the product, send an endOrder message.

Send count messages if the machine has produced a product, but it does not make sense to give each product its own ID. This is especially useful for bottling or any other use case with a large number of products, where not every product is traced.

You can also add shifts using addShift.

All messages end up in different tables in the database and will be accessible from factoryinsight or the umh-datasource Grafana plugin.

Recommendation: Start with addShift and state and continue from there, as sketched below.
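
Below is a minimal sketch of this recommended message sequence for a single order. It assumes the paho-mqtt client (1.x API), a broker on localhost, and made-up customer, location, asset and order values:

import json
import paho.mqtt.client as mqtt

# "factoryinsight", "aachen" and "warping" are made-up example IDs.
BASE = "ia/factoryinsight/aachen/warping"

messages = [
    ("addShift",   {"timestamp_ms": 1589788800000, "timestamp_ms_end": 1589817600000}),
    ("addProduct", {"product_id": "test", "time_per_unit_in_seconds": 0.2}),
    ("addOrder",   {"product_id": "test", "order_id": "test_order", "target_units": 100}),
    ("startOrder", {"order_id": "test_order", "timestamp_ms": 1589788888888}),
    ("state",      {"timestamp_ms": 1589788888888, "state": 10000}),
    ("count",      {"timestamp_ms": 1589788899999, "count": 1}),
    ("endOrder",   {"order_id": "test_order", "timestamp_ms": 1589792488888}),
]

client = mqtt.Client()
client.connect("localhost", 1883)
for suffix, payload in messages:
    client.publish(f"{BASE}/{suffix}", json.dumps(payload))
client.disconnect()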

Modifying Data

If you have accidentally sent the wrong state or if you want to modify a value, you can use the modifyState message.

Unique Product Tracking

You can use uniqueProduct to tell the database that a new instance of a product has been created. If the produced product is scrapped, you can use scrapUniqueProduct to change its state to scrapped.

5.1 - Messages

For each message topic you will find a short description of what the message is used for and which structure it has, as well as what structure the payload is expected to have.

Introduction

The United Manufacturing Hub provides a specific structure for messages/topics, each with its own unique purpose. By adhering to this structure, the UMH will automatically calculate KPIs for you, while also making it easier to maintain consistency in your topic structure.

5.1.1 - activity

activity messages are sent each time the machine runs or stops.

This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it first needs to be converted into a state. In the future, there will be a microservice which converts these automatically.

Topic


ia/<customerID>/<location>/<AssetID>/activity


ia.<customerID>.<location>.<AssetID>.activity

Usage

A message is sent here each time the machine runs or stops.

Content

  • timestamp_ms (int): unix timestamp of message creation
  • activity (bool): true if the asset is currently active, false if it is currently inactive

JSON

Examples

The asset was active during the timestamp of the message:

{
  "timestamp_ms":1588879689394,
  "activity": true,
}

Schema

Producers

  • Typically Node-RED

Consumers

  • Typically Node-RED

5.1.2 - addOrder

AddOrder messages are sent when a new order is added.

Topic


ia/<customerID>/<location>/<AssetID>/addOrder


ia.<customerID>.<location>.<AssetID>.addOrder

Usage

A message is sent here each time a new order is added.

Content

  • product_id (string): current product name
  • order_id (string): current order name
  • target_units (int64): amount of units to be produced

  1. The product needs to be added before adding the order. Otherwise, this message will be discarded.
  2. One order is always specific to that asset and can, by definition, not be used across machines. In this case one would need to create one order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100 m batch gets split up into multiple pieces).

JSON

Examples

One order was started for 100 units of product “test”:

{
  "product_id":"test",
  "order_id":"test_order",
  "target_units":100
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/addOrder.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "product_id",
        "order_id",
        "target_units"
    ],
    "properties": {
        "product_id": {
            "type": "string",
            "default": "",
            "title": "The product id to be produced",
            "examples": [
                "test",
                "Beierlinger 30x15"
            ]
        },
        "order_id": {
            "type": "string",
            "default": "",
            "title": "The order id of the order",
            "examples": [
                "test_order",
                "HA16/4889"
            ]
        },
        "target_units": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The amount of units to be produced",
            "examples": [
                1,
                100
            ]
        }
    },
    "examples": [{
      "product_id": "Beierlinger 30x15",
      "order_id": "HA16/4889",
      "target_units": 1
    },{
      "product_id":"test",
      "order_id":"test_order",
      "target_units":100
    }]
}

Producers

  • Typically Node-RED

Consumers

5.1.3 - addParentToChild

AddParentToChild messages are sent when child products are added to a parent product.

Topic


ia/<customerID>/<location>/<AssetID>/addParentToChild


ia.<customerID>.<location>.<AssetID>.addParentToChild

Usage

This message can be emitted to add a child product to a parent product. It can be sent multiple times, if a parent product is split up into multiple children or multiple parents are combined into one child. One example of this is when multiple parts are assembled into a single product.

Content

  • timestamp_ms (int64): unix timestamp you want to go back from
  • childAID (string): the AID of the child product
  • parentAID (string): the AID of the parent product

JSON

Examples

A parent is added to a child:

{
  "timestamp_ms":1589788888888,
  "childAID":"23948723489",
  "parentAID":"4329875"
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "childAID",
        "parentAID"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
              1589788888888
            ]
        },
        "childAID": {
            "type": "string",
            "default": "",
            "title": "The AID of the child product",
            "examples": [
              "23948723489"
            ]
        },
        "parentAID": {
            "type": "string",
            "default": "",
            "title": "The AID of the parent product",
            "examples": [
              "4329875"
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms":1589788888888,
            "childAID":"23948723489",
            "parentAID":"4329875"
        },
        {
            "timestamp_ms":1589788888888,
            "childAID":"TestChild",
            "parentAID":"TestParent"
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

5.1.4 - addProduct

AddProduct messages are sent when a new product is added.

Topic


ia/<customerID>/<location>/<AssetID>/addProduct


ia.<customerID>.<location>.<AssetID>.addProduct

Usage

A message is sent each time a new product is added.

Content

  • product_id (string): current product name
  • time_per_unit_in_seconds (float64): the time it takes to produce one unit of the product

See also notes regarding adding products and orders in /addOrder

JSON

Examples

A new product “Beilinger 30x15” with a cycle time of 200ms is added to the asset.

{
  "product_id": "Beilinger 30x15",
  "time_per_unit_in_seconds": "0.2"
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "product_id",
        "time_per_unit_in_seconds"
    ],
    "properties": {
        "product_id": {
          "type": "string",
          "default": "",
          "title": "The product id to be produced"
        },
        "time_per_unit_in_seconds": {
          "type": "number",
          "default": 0.0,
          "minimum": 0,
          "title": "The time it takes to produce one unit of the product"
        }
    },
    "examples": [
        {
            "product_id": "Beierlinger 30x15",
            "time_per_unit_in_seconds": "0.2"
        },
        {
            "product_id": "Test product",
            "time_per_unit_in_seconds": "10"
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

5.1.5 - addShift

AddShift messages are sent to add a shift with start and end timestamp.

Topic


ia/<customerID>/<location>/<AssetID>/addShift


ia.<customerID>.<location>.<AssetID>.addShift

Usage

This message is sent to indicate the start and end of a shift.

Content

  • timestamp_ms (int64): unix timestamp of the shift start
  • timestamp_ms_end (int64): optional unix timestamp of the shift end

JSON

Examples

A shift with start and end:

{
  "timestamp_ms":1589788888888,
  "timestamp_ms_end":1589788888888
}

And a shift without an end:

{
  "timestamp_ms":1589788888888
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "description": "The unix timestamp, of shift start"
        },
        "timestamp_ms_end": {
            "type": "integer",
            "description": "The *optional* unix timestamp, of shift end"
        }
    },
    "examples": [
        {
            "timestamp_ms":1589788888888,
            "timestamp_ms_end":1589788888888
        },
        {
            "timestamp_ms":1589788888888
        }
    ]
}

Producers

Consumers

5.1.6 - count

Count messages are sent every time an asset has counted a new item.

Topic


ia/<customerID>/<location>/<AssetID>/count


ia.<customerID>.<location>.<AssetID>.count

Usage

A count message is sent every time an asset has counted a new item.

Content

  • timestamp_ms (int64): unix timestamp of message creation
  • count (int64): amount of items counted
  • scrap (int64): optional amount of defective items. If unset, 0 is assumed

JSON

Examples

One item was counted and there was no scrap:

{
  "timestamp_ms":1589788888888,
  "count":1,
  "scrap":0
}

Ten items were counted and five of them were scrap:

{
  "timestamp_ms":1589788888888,
  "count":10,
  "scrap":5
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/count.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "count"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp of message creation",
            "examples": [
                1589788888888
            ]
        },
        "count": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The amount of items counted",
            "examples": [
                1
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The optional amount of defective items",
            "examples": [
                0
            ]
        }
    },
    "examples": [{
      "timestamp_ms": 1589788888888,
      "count": 1,
      "scrap": 0
    },{
      "timestamp_ms": 1589788888888,
      "count": 1
    }]
}

Producers

  • Typically Node-RED

Consumers

5.1.7 - deleteShift

DeleteShift messages are sent to delete a shift that starts at the designated timestamp.

Topic


ia/<customerID>/<location>/<AssetID>/deleteShift


ia.<customerID>.<location>.<AssetID>.deleteShift

Usage

deleteShift is generated to delete a shift that started at the designated timestamp.

Content

  • timestamp_ms (int32): unix timestamp of the shift start

JSON

Example

The shift that started at the designated timestamp is deleted from the database.

{
    "begin_time_stamp": 1588879689394
}

Producers

  • Typically Node-RED

Consumers

5.1.8 - detectedAnomaly

detectedAnomaly messages are sent when an asset has stopped and the reason is identified.

This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it first needs to be converted into a state. In the future, there will be a microservice which converts these automatically.

Topic


ia/<customerID>/<location>/<AssetID>/detectedAnomaly


ia.<customerID>.<location>.<AssetID>.detectedAnomaly

Usage

A message is sent here each time a stop reason has been identified automatically or by input from the machine operator.

Content

  • timestamp_ms (int): unix timestamp of message creation
  • detectedAnomaly (string): reason for the production stop of the asset

JSON

Examples

The anomaly of the asset has been identified as maintenance:

{
  "timestamp_ms":1588879689394,
  "detectedAnomaly":"maintenance"
}
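
Since this data is not stored automatically, the activity and detectedAnomaly information is typically combined into a state message (usually in Node-RED). The following mapping is purely illustrative and an assumption for this sketch; the state codes are taken from the states documentation:

from typing import Optional

def to_state(activity: bool, detected_anomaly: Optional[str]) -> int:
    # Purely illustrative mapping; adapt it to your own stop reasons.
    if activity:
        return 10000   # ProducingAtFullSpeedState
    if detected_anomaly == "maintenance":
        return 210000  # PreventiveMaintenanceStop
    return 40000       # UnspecifiedStopState

print(to_state(False, "maintenance"))  # 210000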

Producers

  • Typically Node-RED

Consumers

  • Typically Node-RED

5.1.9 - endOrder

EndOrder messages are sent whenever an order is finished.

Topic


ia/<customerID>/<location>/<AssetID>/endOrder


ia.<customerID>.<location>.<AssetID>.endOrder

Usage

A message is sent each time an order is finished.

Content

  • timestamp_ms (int64): unix timestamp of message creation
  • order_id (string): current order name

See also notes regarding adding products and orders in /addOrder

JSON

Examples

The order “test_order” was finished at the shown timestamp.

{
  "order_id":"test_order",
  "timestamp_ms":1589788888888
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/endOrder.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "order_id",
        "timestamp_ms"
    ],
    "properties": {
        "timestamp_ms": {
          "type": "integer",
          "description": "The unix timestamp, of shift start"
        },
        "order_id": {
            "type": "string",
            "default": "",
            "title": "The order id of the order",
            "examples": [
                "test_order",
                "HA16/4889"
            ]
        }
    },
    "examples": [{
      "order_id": "HA16/4889",
      "timestamp_ms":1589788888888
    },{
      "product_id":"test",
      "timestamp_ms":1589788888888
    }]
}

Producers

  • Typically Node-RED

Consumers

5.1.10 - modifyProducedPieces

ModifyProducedPieces messages are sent whenever the count of produced and scrapped items needs to be modified.

Topic


ia/<customerID>/<location>/<AssetID>/modifyProducedPieces


ia.<customerID>.<location>.<AssetID>.modifyProducedPieces

Usage

modifyProducedPieces is generated to change the count of produced items and scrapped items at the named timestamp.

Content

  • timestamp_ms (int64): unix timestamp of the time point whose count is to be modified
  • count (int32): number of produced items
  • scrap (int32): number of scrapped items

JSON

Example

The count and scrap are both overwritten to be 10 at the given timestamp.

{
    "timestamp_ms": 1588879689394,
    "count": 10,
    "scrap": 10
}

Producers

  • Typically Node-RED

Consumers

5.1.11 - modifyState

ModifyState messages are generated when a state of an asset during a certain timeframe needs to be modified.

Topic


ia/<customerID>/<location>/<AssetID>/modifyState


ia.<customerID>.<location>.<AssetID>.modifyState

Usage

modifyState is generated to modify the state from the starting timestamp to the end timestamp. You can find a list of all supported states here.

Content

  • timestamp_ms (int32): unix timestamp of the starting point of the timeframe to be modified
  • timestamp_ms_end (int32): unix timestamp of the end point of the timeframe to be modified
  • new_state (int32): new state code

JSON

Example

The state of the timeframe between the timestamp is modified to be 150000: OperatorBreakState

{
    "timestamp_ms": 1588879689394,
    "timestamp_ms_end": 1588891381023,
    "new_state": 150000
}

Producers

  • Typically Node-RED

Consumers

5.1.12 - processValue

ProcessValue messages are sent whenever a custom process value with a unique name has been prepared. The value is numerical.

Topic


ia/<customerID>/<location>/<AssetID>/processValue 
or: ia/<customerID>/<location>/<AssetID>/processValue/<tagName>


ia.<customerID>.<location>.<AssetID>.processValue
or: ia.<customerID>.<location>.<AssetID>.processValue.<tagName>

If you have a lot of processValues, we recommend not using /processValue as the topic, but appending the tag name as well, e.g., /processValue/energyConsumption. This structures the data better for use in MQTT Explorer or for processing only certain processValues.

For automatic data storage in kafka-to-postgresql both will work fine as long as the payload is correct.

Please be aware that the values may only be int or float; other characters are not valid, so make sure no quotation marks or anything else sneak in. Also be cautious when using the JavaScript toFixed() function, as it converts a float into a string.

Usage

A message is sent each time a process value has been prepared. The key has a unique name.

Content

  • timestamp_ms (int64): unix timestamp of message creation
  • <valuename> (int64 or float64): represents a process value, e.g. temperature

Pre 0.10.0: As <valuename> is either of type int64 or float64, you cannot use booleans. Convert them to integers as needed, e.g., true = 1, false = 0.

Post 0.10.0: <valuename> will be converted, even if it is a boolean value. Check integer literals and floating-point literals for other valid values.

JSON

Example

At the shown timestamp the custom process value “energyConsumption” had a readout of 123456.

{
    "timestamp_ms": 1588879689394, 
    "energyConsumption": 123456
}
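
A minimal publishing sketch, assuming the paho-mqtt client (1.x API), a local broker, and made-up customer, location and asset IDs; note that the value stays numeric:

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)
client.publish(
    "ia/factoryinsight/aachen/warping/processValue/energyConsumption",
    # The value stays a number; do not wrap it in quotation marks.
    json.dumps({"timestamp_ms": 1588879689394, "energyConsumption": 123456})
)
client.disconnect()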

Producers

  • Typically Node-RED

Consumers

5.1.13 - processValueString

ProcessValueString messages are sent whenever a custom process value is prepared. The value is a string.

This message type is not functional as of 0.9.5!

Topic


ia/<customerID>/<location>/<AssetID>/processValueString


ia.<customerID>.<location>.<AssetID>.processValueString

Usage

A message is sent each time a process value has been prepared. The key has a unique name. This message is used when the datatype of the process value is a string instead of a number.

Content

  • timestamp_ms (int64): unix timestamp of message creation
  • <valuename> (string): represents a process value, e.g. temperature

JSON

Example

At the shown timestamp the custom process value “customer” had a readout of “miller”.

{
    "timestamp_ms": 1588879689394, 
    "customer": "miller"
}

Producers

  • Typically Node-RED

Consumers

5.1.14 - productTag

ProductTag messages are sent to contextualize processValue messages.

Topic


ia/<customerID>/<location>/<AssetID>/productTag


ia.<customerID>.<location>.<AssetID>.productTag

Usage

productTag is usually generated by contextualizing a processValue.

Content

  • AID (string): AID of the product
  • name (string): key of the processValue
  • value (float64): value of the processValue
  • timestamp_ms (int64): unix timestamp of message creation

JSON

Example

At the shown timestamp the product with the shown AID had 5 blemishes recorded.

{
    "AID": "43298756", 
    "name": "blemishes",
    "value": 5, 
    "timestamp_ms": 1588879689394
}

Producers

  • Typically Node-RED

Consumers

5.1.15 - productTagString

ProductTagString messages are sent to contextualize processValueString messages.

Topic


ia/<customerID>/<location>/<AssetID>/productTagString


ia.<customerID>.<location>.<AssetID>.productTagString

Usage

ProductTagString is usually generated by contextualizing a processValueString.

Content

  • AID (string): AID of the product
  • name (string): key of the processValue
  • value (string): value of the processValue
  • timestamp_ms (int64): unix timestamp of message creation

JSON

Example

At the shown timestamp the product with the shown AID had the processValue “shirt_size” with the value “XL”.

{
    "AID": "43298756", 
    "name": "shirt_size",
    "value": "XL", 
    "timestamp_ms": 1588879689394
}

Producers

Consumers

5.1.16 - recommendation

Recommendation messages are sent whenever rapid actions would quickly improve efficiency on the shop floor.

Topic


ia/<customerID>/<location>/<AssetID>/recommendation


ia.<customerID>.<location>.<AssetID>.recommendation

Usage

Recommendations are action recommendations which require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.

Content

  • uid (string): UniqueID of the product
  • timestamp_ms (int64): unix timestamp of message creation
  • customer (string): the customer ID in the data structure
  • location (string): the location in the data structure
  • asset (string): the asset ID in the data structure
  • recommendationType (int32): type of the recommendation
  • enabled (bool): -
  • recommendationValues (map): map of values based on which this recommendation is created
  • diagnoseTextDE (string): diagnosis of the recommendation in German
  • diagnoseTextEN (string): diagnosis of the recommendation in English
  • recommendationTextDE (string): recommendation in German
  • recommendationTextEN (string): recommendation in English

JSON

Example

The demonstrator at the shown location has not been running for a while, so a recommendation is sent to either start the machine or specify a reason why it is not running.

{
    "UID": "43298756", 
    "timestamp_ms": 15888796894,
    "customer": "united-manufacturing-hub",
    "location": "dccaachen", 
    "asset": "DCCAachen-Demonstrator",
    "recommendationType": "1", 
    "enabled": true,
    "recommendationValues": { "Treshold": 30, "StoppedForTime": 612685 }, 
    "diagnoseTextDE": "Maschine DCCAachen-Demonstrator steht seit 612685 Sekunden still (Status: 8, Schwellwert: 30)" ,
    "diagnoseTextEN": "Machine DCCAachen-Demonstrator is not running since 612685 seconds (status: 8, threshold: 30)", 
    "recommendationTextDE":"Maschine DCCAachen-Demonstrator einschalten oder Stoppgrund auswählen.",
    "recommendationTextEN": "Start machine DCCAachen-Demonstrator or specify stop reason.", 
}

Producers

  • Typically Node-RED

Consumers

5.1.17 - scrapCount

ScrapCount messages are sent whenever a product is to be marked as scrap.

Topic


ia/<customerID>/<location>/<AssetID>/scrapCount


ia.<customerID>.<location>.<AssetID>.scrapCount

Usage

Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. Starting with the count entry directly before timestamp_ms, the existing counts are set to scrap, iterating step by step back in time, until a total of scrap products have been marked as scrap.

Content

  • timestamp_ms is the unix timestamp you want to go back from
  • scrap is the number of items to be considered as scrap.

  1. You can specify a maximum of 24h to be scrapped to avoid accidents.
  2. (NOT IMPLEMENTED YET) If a count does not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, it would ignore these 2. See also #125.
  3. (NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap.

JSON

Examples

Ten items were scrapped:

{
  "timestamp_ms":1589788888888,
  "scrap":10
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "scrap"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
              1589788888888
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "Number of items to be considered as scrap",
            "examples": [
                10
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms": 1589788888888,
            "scrap": 10
        },
        {
            "timestamp_ms": 1589788888888,
            "scrap": 5
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

5.1.18 - scrapUniqueProduct

ScrapUniqueProduct messages are sent whenever a unique product should be scrapped.

Topic


ia/<customerID>/<location>/<AssetID>/scrapUniqueProduct


ia.<customerID>.<location>.<AssetID>.scrapUniqueProduct

Usage

A message is sent here every time a unique product is scrapped.

Content

key | data type | description
--- | --- | ---
UID | string | unique ID of the current product

JSON

Example

The product with the unique ID 22 is scrapped.

{
    "UID": "22"
}

Producers

  • Typically Node-RED

Consumers

5.1.19 - startOrder

StartOrder messages are sent whenever a new order is started.

Topic


ia/<customerID>/<location>/<AssetID>/startOrder


ia.<customerID>.<location>.<AssetID>.startOrder

Usage

A message is sent here every time a new order is started.

Content

key | data type | description
--- | --- | ---
order_id | string | name of the order
timestamp_ms | int64 | unix timestamp of message creation

  1. See also notes regarding adding products and orders in /addOrder
  2. When startOrder is executed multiple times for an order, the last used timestamp is used.

JSON

Example

The order “test_order” is started at the shown timestamp.

{
  "order_id":"test_order",
  "timestamp_ms":1589788888888
}

Producers

  • Typically Node-RED

Consumers

5.1.20 - state

State messages are sent every time an asset changes status.

Topic


ia/<customerID>/<location>/<AssetID>/state


ia.<customerID>.<location>.<AssetID>.state

Usage

A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here.

Content

key | data type | description
--- | --- | ---
state | uint32 | value of the state according to the link above
timestamp_ms | uint64 | unix timestamp of message creation

JSON

Example

The asset has a state of 10000, which means it is actively producing.

{
  "timestamp_ms":1589788888888,
  "state":10000
}

Producers

  • Typically Node-RED

Consumers

5.1.21 - uniqueProduct

UniqueProduct messages are sent whenever a unique product was produced or modified.

Topic


ia/<customerID>/<location>/<AssetID>/uniqueProduct


ia.<customerID>.<location>.<AssetID>.uniqueProduct

Usage

A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.

There are two cases of when to send a message under the uniqueProduct topic:

  • The exact product doesn’t already have a UID (this is the case if it has not been produced at an asset incorporated in the digital shadow). Specify a placeholder asset = “storage” in the MQTT message for the uniqueProduct topic.
  • The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the “child” of the process. Products it was made out of are called the “parents”.

Content

key | data type | description
--- | --- | ---
begin_timestamp_ms | int64 | unix timestamp of start time
end_timestamp_ms | int64 | unix timestamp of completion time
product_id | string | product ID of the currently produced product
isScrap | bool | optional information whether the current product is of poor quality and will be sorted out. Is considered false if not specified.
uniqueProductAlternativeID | string | alternative ID of the product

JSON

Example

The processing of product “Beilinger 30x15” with the AID 216381 started and ended at the designated timestamps. It is of low quality and due to be scrapped.

{
  "begin_timestamp_ms":1589788888888,
  "end_timestamp_ms":1589788893729,
  "product_id":"Beilinger 30x15",
  "isScrap":true,
  "uniqueProductAlternativeID":"216381"
}

Producers

  • Typically Node-RED

Consumers

5.2 - Database

The database stores the messages in different tables.

Introduction

We are using the database TimescaleDB, which is based on PostgreSQL and supports both standard relational SQL workloads and time-series workloads. This allows the use of regular SQL queries while still being able to efficiently process and store time-series data. PostgreSQL has proven itself reliable over the last 25 years, so we are happy to use it.

If you want to learn more about database paradigms, please refer to the knowledge article about that topic. It also includes a concise video summarizing what you need to know about different paradigms.

Our database model is designed to represent a physical manufacturing process. It keeps track of the following data:

  • The state of the machine
  • The products that are produced
  • The orders for the products
  • The workers’ shifts
  • Arbitrary process values (sensor data)
  • The producible products
  • Recommendations for the production

Please note that our database does not use a retention policy. This means that your database can grow quite fast if you save a lot of process values. Take a look at our guide on enabling data compression and retention in TimescaleDB to customize the database to your needs.

A good way to check your database size is to run the following command inside a psql shell:

SELECT pg_size_pretty(pg_database_size('factoryinsight'));
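
To break the size down further, you can also check a single hypertable. The following is a hypothetical example (hypertable_size is available in recent TimescaleDB 2.x releases; the table name is only an illustration):

-- Hypothetical example: total size of one hypertable, including all of its chunks
SELECT pg_size_pretty(hypertable_size('processvaluetable'));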

5.2.1 - assetTable

assetTable contains all assets and their locations.

Usage

Primary table for our data structure, it contains all the assets and their location.

Structure

key | data type | description | example
--- | --- | --- | ---
id | int | Auto incrementing id of the asset | 0
assetID | text | Asset name | Printer-03
location | text | Physical location of the asset | DCCAachen
customer | text | Customer name, in most cases “factoryinsight” | factoryinsight

Relations

assetTable

DDL

 CREATE TABLE IF NOT EXISTS assetTable
 (
     id         SERIAL  PRIMARY KEY,
     assetID    TEXT    NOT NULL,
     location   TEXT    NOT NULL,
     customer   TEXT    NOT NULL,
     unique (assetID, location, customer)
 );
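
For illustration only, here is a minimal sketch of how an asset could be registered and its id looked up afterwards. The values are hypothetical; in practice these entries are typically created by the UMH microservices:

-- Hypothetical example: register an asset and look up its auto-generated id
INSERT INTO assetTable (assetID, location, customer)
VALUES ('Printer-03', 'DCCAachen', 'factoryinsight')
ON CONFLICT DO NOTHING;

SELECT id
FROM assetTable
WHERE assetID = 'Printer-03'
  AND location = 'DCCAachen'
  AND customer = 'factoryinsight';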

5.2.2 - configurationTable

configurationTable stores the configuration of the UMH system.

Usage

This table stores the configuration of the system

Structure

key | data type | description | example
--- | --- | --- | ---
customer | text | Customer name | factoryinsight
MicrostopDurationInSeconds | integer | A stop counts as a microstop if it is shorter than this value | 120
IgnoreMicrostopUnderThisDurationInSeconds | integer | Ignore stops under this value | -1
MinimumRunningTimeInSeconds | integer | Minimum runtime of the asset before tracking micro-stops | 0
ThresholdForNoShiftsConsideredBreakInSeconds | integer | A no-shift period shorter than this value is considered a break | 2100
LowSpeedThresholdInPcsPerHour | integer | Threshold below which the asset is considered to be in a low speed state | -1
AutomaticallyIdentifyChangeovers | boolean | Automatically identify changeovers in production | true
LanguageCode | integer | 0 is German, 1 is English | 1
AvailabilityLossStates | integer[] | States to count as availability loss | {40000, 180000, 190000, 200000, 210000, 220000}
PerformanceLossStates | integer[] | States to count as performance loss | {20000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000}

Relations

configurationTable

DDL

CREATE TABLE IF NOT EXISTS configurationTable
(
    customer TEXT PRIMARY KEY,
    MicrostopDurationInSeconds INTEGER DEFAULT 60*2,
    IgnoreMicrostopUnderThisDurationInSeconds INTEGER DEFAULT -1, --do not apply
    MinimumRunningTimeInSeconds INTEGER DEFAULT 0, --do not apply
    ThresholdForNoShiftsConsideredBreakInSeconds INTEGER DEFAULT 60*35,
    LowSpeedThresholdInPcsPerHour INTEGER DEFAULT -1, --do not apply
    AutomaticallyIdentifyChangeovers BOOLEAN DEFAULT true,
    LanguageCode INTEGER DEFAULT 1, -- english
    AvailabilityLossStates INTEGER[] DEFAULT '{40000, 180000, 190000, 200000, 210000, 220000}',
    PerformanceLossStates INTEGER[] DEFAULT '{20000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000}'
);
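
A hypothetical update, shown only to illustrate the schema (in a real deployment you would normally not edit this table by hand):

-- Hypothetical example: switch the language for the customer 'factoryinsight' to German (0)
UPDATE configurationTable
SET LanguageCode = 0
WHERE customer = 'factoryinsight';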

5.2.3 - countTable

countTable contains all reported counts of all assets.

Usage

This table contains all reported counts of the assets.

Structure

key | data type | description | example
--- | --- | --- | ---
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset id (see assetTable) | 1
count | integer | A count greater than 0 | 1

Relations

countTable

DDL

CREATE TABLE IF NOT EXISTS countTable
(
    timestamp                TIMESTAMPTZ                         NOT NULL,
    asset_id            SERIAL REFERENCES assetTable (id),
    count INTEGER CHECK (count > 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('countTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON countTable (asset_id, timestamp DESC);
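
To illustrate how this hypertable can be queried, here is a hypothetical aggregation using the TimescaleDB time_bucket function (the asset id and time range are examples only):

-- Hypothetical example: produced units per hour for asset 1 over the last day
SELECT time_bucket('1 hour', timestamp) AS hour,
       SUM(count) AS produced_units
FROM countTable
WHERE asset_id = 1
  AND timestamp > NOW() - INTERVAL '1 day'
GROUP BY hour
ORDER BY hour;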

5.2.4 - orderTable

orderTable contains orders for production.

Usage

This table stores orders for product production

Structure

key | data type | description | example
--- | --- | --- | ---
order_id | serial | Auto incrementing id | 0
order_name | text | Name of the order | Scarjit-500-DaVinci-1-24062022
product_id | serial | Product id to produce | 1
begin_timestamp | timestamptz | Begin timestamp of the order | 0
end_timestamp | timestamptz | End timestamp of the order | 10000
target_units | integer | How many products to produce | 500
asset_id | serial | Which asset to produce on (see assetTable) | 1

Relations

orderTable

DDL

CREATE TABLE IF NOT EXISTS orderTable
(
    order_id        SERIAL          PRIMARY KEY,
    order_name      TEXT            NOT NULL,
    product_id      SERIAL          REFERENCES productTable (product_id),
    begin_timestamp TIMESTAMPTZ,
    end_timestamp   TIMESTAMPTZ,
    target_units    INTEGER,
    asset_id        SERIAL          REFERENCES assetTable (id),
    unique (asset_id, order_name),
    CHECK (begin_timestamp < end_timestamp),
    CHECK (target_units > 0),
    EXCLUDE USING gist (asset_id WITH =, tstzrange(begin_timestamp, end_timestamp) WITH &&) WHERE (begin_timestamp IS NOT NULL AND end_timestamp IS NOT NULL)
);
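
A hypothetical insert, shown only to illustrate the schema and its constraints (the exclusion constraint rejects orders that overlap in time on the same asset):

-- Hypothetical example: plan an order of 500 units of product 1 on asset 1
INSERT INTO orderTable (order_name, product_id, begin_timestamp, end_timestamp, target_units, asset_id)
VALUES ('Scarjit-500-DaVinci-1-24062022', 1, '2022-06-24 06:00+00', '2022-06-24 14:00+00', 500, 1);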

5.2.5 - processValueStringTable

processValueStringTable contains process values.

Usage

This table stores process values, for example the toner level of a printer or the flow rate of a pump. It has a closely related table for storing number values, processValueTable.

Structure

key | data type | description | example
--- | --- | --- | ---
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset id (see assetTable) | 1
valueName | text | Name of the process value | toner-level
value | string | Value of the process value | 100

Relations

processValueTable

DDL

CREATE TABLE IF NOT EXISTS processValueStringTable
(
    timestamp               TIMESTAMPTZ                         NOT NULL,
    asset_id                SERIAL                              REFERENCES assetTable (id),
    valueName               TEXT                                NOT NULL,
    value                   TEXT                                NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueStringTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (asset_id, timestamp DESC);

-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (valuename);

5.2.6 - processValueTable

processValueTable contains process values.

Usage

This table stores process values, for example the toner level of a printer or the flow rate of a pump. It has a closely related table for storing string values, processValueStringTable.

Structure

key | data type | description | example
--- | --- | --- | ---
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset id (see assetTable) | 1
valueName | text | Name of the process value | toner-level
value | double | Value of the process value | 100

Relations

processValueTable

DDL

CREATE TABLE IF NOT EXISTS processValueTable
(
    timestamp               TIMESTAMPTZ                         NOT NULL,
    asset_id                SERIAL                              REFERENCES assetTable (id),
    valueName               TEXT                                NOT NULL,
    value                   DOUBLE PRECISION                    NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON processValueTable (asset_id, timestamp DESC);

-- creating an index to increase performance
CREATE INDEX ON processValueTable (valuename);
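
A hypothetical query to illustrate how the table is typically read (asset id and value name are examples only):

-- Hypothetical example: latest reported 'toner-level' for asset 1
SELECT timestamp, value
FROM processValueTable
WHERE asset_id = 1
  AND valueName = 'toner-level'
ORDER BY timestamp DESC
LIMIT 1;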

5.2.7 - productTable

productTable contains products in production.

Usage

This table stores the products to be produced at assets.

Structure

key | data type | description | example
--- | --- | --- | ---
product_id | serial | Auto incrementing id | 0
product_name | text | Name of the product | Painting-DaVinci-1
asset_id | serial | Asset producing this product (see assetTable) | 1
time_per_unit_in_seconds | real | Time in seconds to produce this product | 600

Relations

productTable

DDL

CREATE TABLE IF NOT EXISTS productTable
(
    product_id                  SERIAL PRIMARY KEY,
    product_name                TEXT NOT NULL,
    asset_id                    SERIAL REFERENCES assetTable (id),
    time_per_unit_in_seconds    REAL NOT NULL,
    UNIQUE(product_name, asset_id),
    CHECK (time_per_unit_in_seconds > 0)
);
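
A hypothetical insert, shown only to illustrate the schema (the values are examples):

-- Hypothetical example: register a product that takes 600 seconds per unit on asset 1
INSERT INTO productTable (product_name, asset_id, time_per_unit_in_seconds)
VALUES ('Painting-DaVinci-1', 1, 600);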

5.2.8 - recommendationTable

recommendationTable contains given recommendation for the shop floor assets.

Usage

This table stores recommendations

Structure

key | data type | description | example
--- | --- | --- | ---
uid | text | Id of the recommendation | refill_toner
timestamp | timestamptz | Timestamp of recommendation insertion | 1
recommendationType | integer | Used to subscribe people to specific types only | 3
enabled | bool | Recommendation can be outputted | true
recommendationValues | text | Values to change to resolve recommendation | { “toner-level”: 100 }
diagnoseTextDE | text | Diagnose text in German | “Der Toner ist leer”
diagnoseTextEN | text | Diagnose text in English | “The toner is empty”
recommendationTextDE | text | Recommendation text in German | “Bitte den Toner auffüllen”
recommendationTextEN | text | Recommendation text in English | “Please refill the toner”

Relations

recommendationTable

DDL

CREATE TABLE IF NOT EXISTS recommendationTable
(
    uid                     TEXT                                PRIMARY KEY,
    timestamp               TIMESTAMPTZ                         NOT NULL,
    recommendationType      INTEGER                             NOT NULL,
    enabled                 BOOLEAN                             NOT NULL,
    recommendationValues    TEXT,
    diagnoseTextDE          TEXT,
    diagnoseTextEN          TEXT,
    recommendationTextDE    TEXT,
    recommendationTextEN    TEXT
);

5.2.9 - shiftTable

shiftTable contains shifts with asset, start and finish timestamp

Usage

This table stores shifts

Structure

key | data type | description | example
--- | --- | --- | ---
id | serial | Auto incrementing id | 0
type | integer | Shift type (1 for shift, 0 for no shift) | 1
begin_timestamp | timestamptz | Begin of the shift | 3
end_timestamp | timestamptz | End of the shift | 10
asset_id | serial | Asset ID the shift is performed on (see assetTable) | 1

Relations

shiftTable

DDL

-- Using btree_gist to avoid overlapping shifts
-- Source: https://gist.github.com/fphilipe/0a2a3d50a9f3834683bf
CREATE EXTENSION btree_gist;
CREATE TABLE IF NOT EXISTS shiftTable
(
    id              SERIAL      PRIMARY KEY,
    type            INTEGER,
    begin_timestamp TIMESTAMPTZ NOT NULL,
    end_timestamp   TIMESTAMPTZ,
    asset_id        SERIAL      REFERENCES assetTable (id),
    unique (begin_timestamp, asset_id),
    CHECK (begin_timestamp < end_timestamp),
    EXCLUDE USING gist (asset_id WITH =, tstzrange(begin_timestamp, end_timestamp) WITH &&)
);
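
A hypothetical insert, shown only to illustrate the schema (the exclusion constraint rejects shifts that overlap in time on the same asset):

-- Hypothetical example: plan a shift from 06:00 to 14:00 on asset 1
INSERT INTO shiftTable (type, begin_timestamp, end_timestamp, asset_id)
VALUES (1, '2022-06-24 06:00+00', '2022-06-24 14:00+00', 1);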

5.2.10 - stateTable

stateTable contains the states of all assets.

Usage

This table contains all state changes of the assets.

Structure

key | data type | description | example
--- | --- | --- | ---
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset ID (see assetTable) | 1
state | integer | State ID (see states) | 40000

Relations

stateTable

DDL

CREATE TABLE IF NOT EXISTS stateTable
(
    timestamp   TIMESTAMPTZ NOT NULL,
    asset_id    SERIAL      REFERENCES assetTable (id),
    state       INTEGER     CHECK (state >= 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('stateTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON stateTable (asset_id, timestamp DESC);
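
A hypothetical query to illustrate how state durations can be derived from the recorded state changes. This is a simplified sketch, not the logic used by factoryinsight:

-- Hypothetical example: time spent in each state for asset 1 today
SELECT state,
       SUM(next_ts - timestamp) AS total_duration
FROM (
    SELECT state,
           timestamp,
           LEAD(timestamp, 1, NOW()) OVER (ORDER BY timestamp) AS next_ts
    FROM stateTable
    WHERE asset_id = 1
      AND timestamp >= date_trunc('day', NOW())
) AS changes
GROUP BY state
ORDER BY total_duration DESC;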

5.2.11 - uniqueProductTable

uniqueProductTable contains unique products and their IDs.

Usage

This table stores unique products.

Structure

key | data type | description | example
--- | --- | --- | ---
uid | text | ID of a unique product | 0
asset_id | serial | Asset id (see assetTable) | 1
begin_timestamp_ms | timestamptz | Time when product was inputted in asset | 0
end_timestamp_ms | timestamptz | Time when product was output of asset | 100
product_id | text | ID of the product (see productTable) | 1
is_scrap | boolean | True if product is scrap | true
quality_class | text | Quality class of the product | A
station_id | text | ID of the station where the product was processed | Soldering Iron-1

Relations

uniqueProductTable

DDL

CREATE TABLE IF NOT EXISTS uniqueProductTable
(
    uid                 TEXT        NOT NULL,
    asset_id            SERIAL      REFERENCES assetTable (id),
    begin_timestamp_ms  TIMESTAMPTZ NOT NULL,
    end_timestamp_ms    TIMESTAMPTZ NOT NULL,
    product_id          TEXT        NOT NULL,
    is_scrap            BOOLEAN     NOT NULL,
    quality_class       TEXT        NOT NULL,
    station_id          TEXT        NOT NULL,
    UNIQUE(uid, asset_id, station_id),
    CHECK (begin_timestamp_ms < end_timestamp_ms)
);

-- creating an index to increase performance
CREATE INDEX ON uniqueProductTable (asset_id, uid, station_id);
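
A hypothetical query to illustrate how this table can be analyzed (the asset id is an example):

-- Hypothetical example: scrap rate per product on asset 1
SELECT product_id,
       COUNT(*) FILTER (WHERE is_scrap) AS scrapped_units,
       COUNT(*) AS total_units
FROM uniqueProductTable
WHERE asset_id = 1
GROUP BY product_id;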

5.3 - States

States are the core of the database model. They represent the state of the machine at a given point in time.

States Documentation Index

Introduction

This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.

State Categories

The states are grouped into the following categories, each covering a range of state values:

  • Active (10000-29999)
  • Unknown (30000-59999)
  • Material (60000-99999)
  • Process (100000-139999)
  • Operator (140000-159999)
  • Planning (160000-179999)
  • Technical (180000-229999)

Glossary

  • OEE: Overall Equipment Effectiveness
  • KPI: Key Performance Indicator

Conclusion

This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.

5.3.1 - Active (10000-29999)

These states represent that the asset is actively producing

10000: ProducingAtFullSpeedState

This asset is running at full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

Asset is producing, but not at full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco : Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

Currently not included:

  • WS_Prog_Step: all

5.3.2 - Unknown (30000-59999)

These states represent that the asset is in an unspecified state

30000: UnknownState

Data for that particular asset is not available (e.g. connection to the PLC is disrupted)

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but the reason is unknown at the time.

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.

5.3.3 - Material (60000-99999)

These states represent that the asset has issues regarding materials.

60000: InletJamState

This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack in the inlet condition refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge condition refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or is fed away from the filling machine. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets and in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.

Examples for the CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information of which material is lacking)
  • PackML/Tobacco: Suspended

5.3.4 - Process (100000-139999)

These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently emptied, e.g. to prevent mold for food products over the long breaks, e.g. the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

This machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

5.3.5 - Operator (140000-159999)

These states represent that the asset is stopped because of operator related issues.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is taking a break.

This is different from a planned shift as it could contribute to performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

5.3.6 - Planning (160000-179999)

These states represent that the asset is stopped because it is planned to be stopped (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

5.3.7 - Technical (180000-229999)

These states represent that the asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defective, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE

6 - Architecture

A comprehensive overview of the United Manufacturing Hub architecture, detailing its deployment, management, and data processing capabilities.

The United Manufacturing Hub is a comprehensive Helm Chart for Kubernetes, integrating a variety of open source software, including notable third-party applications such as Node-RED and Grafana. Designed for versatility, UMH is deployable across a wide spectrum of environments, from edge devices to virtual machines, and even managed Kubernetes services, catering to diverse industrial needs.

The following diagram depicts the interaction dynamics between UMH’s components and user types, offering a visual guide to its architecture and operational mechanisms.

graph LR subgraph group1 [United Manufacturing Hub] style group1 fill:#ffffff,stroke:#47a0b5,color:#47a0b5,stroke-dasharray:5 16["`**Management Console** Configures, manages, and monitors Data and Device & Container Infrastructures in the UMH Integrated Platform`"] style 16 fill:#aaaaaa,stroke:#47a0b5,color:#000000 27["`**Device & Container Infrastructure** Oversees automated, streamlined installation of key software and operating systems`"] style 27 fill:#aaaaaa,stroke:#47a0b5,color:#000000 50["`**Data Infrastructure** Integrates every ISA-95 standard layer with the Unified Namespace, adding data sources beyond typical automation pyramid bounds`"] style 50 fill:#aaaaaa,stroke:#47a0b5,color:#000000 end 1["`fa:fa-user **IT/OT Professional** Manages and monitors the United Manufacturing Hub`"] style 1 fill:#dddddd,stroke:#9a9a9a,color:#000000 2["`fa:fa-user **OT Professional / Shopfloor** Monitors and manages the shopfloor, including safety, automation and maintenance`"] style 2 fill:#dddddd,stroke:#9a9a9a,color:#000000 3["`fa:fa-user **Business Analyst** Gathers and analyzes company data to identify needs and recommend solutions`"] style 3 fill:#dddddd,stroke:#9a9a9a,color:#000000 4["`**Data Warehouse/Data Lake** Stores data for analysis, on-premise or in the cloud`"] style 4 fill:#f4f4f4,stroke:#f4f4f4,color:#000000 5["`**Automation Pyramid** Represents the layered structure of systems in manufacturing operations based on the ISA-95 model`"] style 5 fill:#f4f4f4,stroke:#f4f4f4,color:#000000 1-. Interacts with the entire infrastructure .->16 16-. Manages & monitors .->27 16-. Manages & monitors .->50 2-. Access real-time dashboards from .->50 2-. Works with .->5 3-. Gets and analyzes data from .->4 50-. Is installed on .->27 50-. Provides data to .->4 50-. Provides to and extracts data from .->5

Management Console

The Management Console of the United Manufacturing Hub is a robust web application designed to configure, manage, and monitor the various aspects of Data and Device & Container Infrastructures within UMH. Acting as the central command center, it provides a comprehensive overview and control over the system’s functionalities, ensuring efficient operation and maintenance. The console simplifies complex processes, making it accessible for users to oversee the vast array of services and operations integral to UMH.

Device & Container Infrastructure

The Device & Container Infrastructure lays the foundation of the United Manufacturing Hub’s architecture, streamlining the deployment and setup of essential software and operating systems across devices. This infrastructure is pivotal in automating the installation process, ensuring that the essential software components and operating systems are efficiently and reliably established. It provides the groundwork upon which the Data Infrastructure is built, embodying a robust and scalable base for the entire architecture.

Data Infrastructure

The Data Infrastructure is the heart of the United Manufacturing Hub, orchestrating the interconnection of data sources, storage, monitoring, and analysis solutions. It comprises three key components:

  • Data Connectivity: Facilitates the integration of diverse data sources into UMH, enabling uninterrupted data exchange.
  • Unified Namespace (UNS): Centralizes and standardizes data within UMH into a cohesive model, by linking each layer of the ISA-95 automation pyramid to the UNS and assimilating non-traditional data sources.
  • Historian: Stores data in TimescaleDB, a PostgreSQL-based time-series database, allowing real-time and historical data analysis through Grafana or other tools.

The UMH Data Infrastructure leverages Industrial IoT to expand the ISA95 Automation Pyramid, enabling high-speed data processing using systems like Kafka. It enhances system availability through Kubernetes and simplifies maintenance with Docker and Prometheus. Additionally, it facilitates the use of AI, predictive maintenance, and digital twin technologies.

Expandability

The United Manufacturing Hub is architecturally designed for high expandability, enabling integration of custom microservices or Docker containers. This adaptability allows users to establish connections with third-party systems or to implement specialized data analysis tools. The platform also accommodates any third-party application available as a Helm Chart, Kubernetes resource, or Docker Compose, offering vast potential for customization to suit evolving industrial demands.

6.1 - Data Infrastructure

An overview of UMH’s Data Infrastructure, integrating and managing diverse data sources.

The United Manufacturing Hub’s Data Infrastructure is where all data converges. It extends the ISA95 Automation Pyramid, the usual model for data flow in factory settings. This infrastructure links each level of the traditional pyramid to the Unified Namespace (UNS), incorporating extra data sources that the typical automation pyramid doesn’t include. The data is then organized, stored, and analyzed to offer useful information for frontline workers. Afterwards, it can be sent to a data lake or analytics platform, where business analysts can access it for deeper insights.

It comprises three primary elements:

  • Data Connectivity: This component includes an array of tools and services designed to connect various systems and sensors on the shop floor, facilitating the flow of data into the Unified Namespace.
  • Unified Namespace: Acts as the central hub for all events and messages on the shop floor, ensuring data consistency and accessibility.
  • Historian: Responsible for storing events in a time-series database, it also provides tools for data visualization, enabling both real-time and historical analytics.

Together, these elements provide a comprehensive framework for collecting, storing, and analyzing data, enhancing the operational efficiency and decision-making processes on the shop floor.

graph LR 2["`fa:fa-user **OT Professional / Shopfloor** Monitors and manages the shopfloor, including safety, automation and maintenance`"] style 2 fill:#dddddd,stroke:#9a9a9a,color:#000000 4["`**Data Warehouse/Data Lake** Stores data for analysis, on-premise or in the cloud`"] style 4 fill:#f4f4f4,stroke:#f4f4f4,color:#000000 5["`**Automation Pyramid** Represents the layered structure of systems in manufacturing operations based on the ISA-95 model`"] style 5 fill:#f4f4f4,stroke:#f4f4f4,color:#000000 16["`**Management Console** Configures, manages, and monitors Data and Device & Container Infrastructures in the UMH Integrated Platform`"] style 16 fill:#aaaaaa,stroke:#47a0b5,color:#000000 subgraph 50 [Data Infrastructure] style 50 fill:#ffffff,stroke:#47a0b5,color:#47a0b5 51["`**Unified Namespace** The central source of truth for all events and messages on the shop floor.`"] style 51 fill:#aaaaaa,stroke:#47a0b5,color:#000000 64["`**Historian** Stores events in a time-series database and provides visualization tools.`"] style 64 fill:#aaaaaa,stroke:#47a0b5,color:#000000 85["`**Connectivity** Includes tools and services for connecting various shop floor systems and sensors.`"] style 85 fill:#aaaaaa,stroke:#47a0b5,color:#000000 end 16-. Manages & monitors .->85 85-. Provides contextualized data .->51 85-. Provides and extracts data .->5 51-. Provides data .->4 51-. Stores data in a predefined schema .->64 5<-. Works with .-2 2-. Visualize real-time dashboards .->64

6.1.1 - Data Connectivity

Learn about the tools and services in UMH’s Data Connectivity for integrating shop floor systems.

The Data Connectivity module in the United Manufacturing Hub is designed to enable seamless integration of various data sources from the manufacturing environment into the Unified Namespace. Key components include:

  • Node-RED: A versatile programming tool that links hardware devices, APIs, and online services.
  • barcodereader: Connects to USB barcode readers, pushing data to the message broker.
  • benthos-umh: A specialized version of benthos featuring an OPC UA plugin for efficient data extraction.
  • sensorconnect: Integrates with IO-Link Masters and their sensors, relaying data to the message broker.

These tools collectively facilitate the extraction and contextualization of data from diverse sources, adhering to the ISA-95 automation pyramid model, and enhancing the Management Console’s capability to monitor and manage data flow within the UMH ecosystem.

graph LR 5["`**Automation Pyramid** Represents the layered structure of systems in manufacturing operations based on the ISA-95 model`"] style 5 fill:#f4f4f4,stroke:#f4f4f4,color:#000000 16["`**Management Console** Configures, manages, and monitors Data and Device & Container Infrastructures in the UMH Integrated Platform`"] style 16 fill:#aaaaaa,stroke:#47a0b5,color:#000000 51["`**Unified Namespace** The central source of truth for all events and messages on the shop floor.`"] style 51 fill:#aaaaaa,stroke:#47a0b5,color:#000000 subgraph 85 [Connectivity] style 85 fill:#ffffff,stroke:#47a0b5,color:#47a0b5 86["`**Node-RED** A programming tool for wiring together hardware devices, APIs, and online services.`"] style 86 fill:#aaaaaa,stroke:#47a0b5,color:#000000 87["`**Barcode Reader** Connects to USB barcode reader devices and pushes data to the message broker.`"] style 87 fill:#aaaaaa,stroke:#47a0b5,color:#000000 88["`**Sensor Connect** Reads out IO-Link Master and their connected sensors, pushing data to the message broker.`"] style 88 fill:#aaaaaa,stroke:#47a0b5,color:#000000 89["`**benthos-umh** Customized version of benthos with an OPC UA plugin`"] style 89 fill:#aaaaaa,stroke:#47a0b5,color:#000000 end 16-. Manages & monitors .->89 89-. Provides contextualized data .->51 86-. Provides contextualized data .->51 87-. Provides contextualized data .->51 88-. Provides contextualized data .->51 89-. Extracts data via OPC UA .->5 86-. Extracts data via S7, and many more protocols .->5

6.1.1.1 - Barcodereader

This microservice is still in development and is not considered stable for production use.

Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.

How it works

Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.

What’s next

  • Read the Barcodereader reference documentation to learn more about the technical details of the Barcodereader microservice.

6.1.1.2 - Node Red

Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the Node-RED library.

How it works

Node-RED is a JavaScript-based tool that can be used to create flows that interact with the other microservices in the United Manufacturing Hub or external services.

See our guides for Node-RED to learn more about how to use it.

What’s next

  • Read the Node-RED reference documentation to learn more about the technical details of the Node-RED microservice.

6.1.1.3 - Sensorconnect

Sensorconnect automatically detects ifm gateways connected to the network and reads data from the connected IO-Link sensors.

How it works

Sensorconnect continuously scans the given IP range for gateways, making it effectively a plug-and-play solution. Once a gateway is found, it automatically downloads the IODD files for the connected sensors and starts reading the data at the configured interval. Then it processes the data and sends it to the MQTT or Kafka broker, to be consumed by other microservices.

If you want to learn more about how to use sensors in your assets, check out the retrofitting section of the UMH Learn website.

IODD files

The IODD files are used to describe the sensors connected to the gateway. They contain information about the data type, the unit of measurement, the minimum and maximum values, etc. The IODD files are downloaded automatically from IODDFinder once a sensor is found, and are stored in a Persistent Volume. If downloading from the internet is not possible, for example in a closed network, you can download the IODD files manually and store them in the folder specified by the IODD_FILE_PATH environment variable.

If no IODD file is found for a sensor, the data will not be processed, but sent to the broker as-is.

What’s next

  • Read the Sensorconnect reference documentation to learn more about the technical details of the Sensorconnect microservice.

6.1.2 - Unified Namespace

Discover the Unified Namespace’s role as a central hub for shop floor data in UMH.

The Unified Namespace (UNS) within the United Manufacturing Hub is a vital module facilitating the streamlined flow and management of data. It comprises various microservices:

  • data-bridge: Bridges data between MQTT and Kafka and between multiple Kafka instances, ensuring efficient data transmission.
  • HiveMQ: An MQTT broker crucial for receiving data from IoT devices on the shop floor.
  • Redpanda (Kafka): Manages large-scale data processing and orchestrates communication between microservices.
  • Redpanda Console: Offers a graphical interface for monitoring Kafka topics and messages.

The UNS serves as a pivotal point in the UMH architecture, ensuring data from shop floor systems and sensors (gathered via the Data Connectivity module) is effectively processed and relayed to the Historian and external Data Warehouses/Data Lakes for storage and analysis.

graph LR 4["`**Data Warehouse/Data Lake** Stores data for analysis, on-premise or in the cloud`"] style 4 fill:#f4f4f4,stroke:#f4f4f4,color:#000000 64["`**Historian** Stores events in a time-series database and provides visualization tools.`"] style 64 fill:#aaaaaa,stroke:#47a0b5,color:#000000 85["`**Connectivity** Includes tools and services for connecting various shop floor systems and sensors.`"] style 85 fill:#aaaaaa,stroke:#47a0b5,color:#000000 subgraph 51 [Unified Namespace] style 51 fill:#ffffff,stroke:#47a0b5,color:#47a0b5 52["`**Redpanda (Kafka)** Handles large-scale data processing and communication between microservices.`"] style 52 fill:#aaaaaa,stroke:#47a0b5,color:#000000 53["`**HiveMQ** MQTT broker used for receiving data from IoT devices on the shop floor.`"] style 53 fill:#aaaaaa,stroke:#47a0b5,color:#000000 54["`**Redpanda Console** Provides a graphical view of topics and messages in Kafka.`"] style 54 fill:#aaaaaa,stroke:#47a0b5,color:#000000 55["`**databridge** Bridges messages between MQTT and Kafka as well as between Kafka and other Kafka instances.`"] style 55 fill:#aaaaaa,stroke:#47a0b5,color:#000000 end 54-.->52 52<-.->55 55<-.->53 55-. Provides data .->4 52-. Stores data in a predefined schema .->64 85-. Provides contextualized data .->53 85-. Provides contextualized data .->52

6.1.2.1 - Data Bridge

Data-bridge is a microservice specifically tailored to adhere to the UNS data model. It consumes topics from a message broker, translates them to the proper format and publishes them to the other message broker.

How it works

Data-bridge connects to the source broker, that can be either Kafka or MQTT, and subscribes to the topics specified in the configuration. It then processes the messages, and publishes them to the destination broker, that can be either Kafka or MQTT.

In the case where the destination broker is Kafka, messages from multiple topics can be merged into a single topic, making use of the message key to identify the source topic. For example, subscribing to a topic using a wildcard, such as umh.v1.acme.anytown..*, and a merge point of 4, will result in messages from the topics umh.v1.acme.anytown.foo.bar, umh.v1.acme.anytown.foo.baz, umh.v1.acme.anytown and umh.v1.acme.anytown.frob being merged into a single topic, umh.v1.acme.anytown, with the message key being the missing part of the topic name, in this case foo.bar, foo.baz, etc.

Here is a diagram showing the flow of messages:

graph LR source((MQTT or Kafka broker)) subgraph Messages direction TB msg1(topic: umh/v1/acme/anytown/foo/bar
value: 1) msg2(topic: umh/v1/acme/anytown/foo/baz
value: 2) msg3(topic: umh/v1/acme/anytown
value: 3) msg4(topic: umh/v1/acme/anytown/frob
value: 4) end source --> msg1 source --> msg2 source --> msg3 source --> msg4 msg1 --> bridge msg2 --> bridge msg3 --> bridge msg4 --> bridge bridge{{data-bridge
subscribes to: umh/v1/acme/anytown/#
topic merge point: 4}} subgraph Grouped messages direction TB gmsg1(topic: umh.v1.acme.anytown
key: foo.bar
value: 1) gmsg2(topic: umh.v1.acme.anytown
key: foo.baz
value: 2) gmsg3(topic: umh.v1.acme.anytown
value: 3) gmsg4(topic: umh.v1.acme.anytown
key: frob
value: 4) end bridge --> gmsg1 bridge --> gmsg2 bridge --> gmsg3 bridge --> gmsg4 dest((Kafka broker)) gmsg1 --> dest gmsg2 --> dest gmsg3 --> dest gmsg4 --> dest

The value of the message is not changed, only the topic and key are modified.

Another important feature is that it is possible to configure multiple data bridges, each with its own source and destination brokers, and each with its own set of topics to subscribe to and merge point.

The brokers can be local or remote, and, in case of MQTT, they can be secured using TLS.

What’s next

  • Read the Data Bridge reference documentation to learn more about the technical details of the data-bridge microservice.

6.1.2.2 - Kafka Broker

The Kafka broker in the United Manufacturing Hub is RedPanda, a Kafka-compatible event streaming platform. It’s used to store and process messages, in order to stream real-time data between the microservices.

How it works

RedPanda is a distributed system that is made up of a cluster of brokers, designed for maximum performance and reliability. It does not depend on external systems like ZooKeeper, as it’s shipped as a single binary.

Read more about RedPanda in the official documentation.

What’s next

  • Read the Kafka Broker reference documentation to learn more about the technical details of the Kafka broker microservice

6.1.2.3 - Kafka Console

Kafka-console uses Redpanda Console to help you manage and debug your Kafka workloads effortlessly.

With it, you can explore your Kafka topics, view messages, list the active consumers, and more.

How it works

You can access the Kafka console via its Service.

It’s automatically connected to the Kafka broker, so you can start using it right away. You can view the Kafka broker configuration in the Broker tab, and explore the topics in the Topics tab.

What’s next

  • Read the Kafka Console reference documentation to learn more about the technical details of the Kafka Console microservice.

6.1.2.4 - MQTT Broker

The MQTT broker in the United Manufacturing Hub is HiveMQ and is customized to fit the needs of the stack. It’s a core component of the stack and is used to communicate between the different microservices.

How it works

The MQTT broker is responsible for receiving MQTT messages from the different microservices and forwarding them to the MQTT Kafka bridge.

What’s next

  • Read the MQTT Broker reference documentation to learn more about the technical details of the MQTT Broker microservice.

6.1.3 - Historian

Insight into the Historian’s role in storing and visualizing data within the UMH ecosystem.

The Historian in the United Manufacturing Hub serves as a comprehensive data management and visualization system. It includes:

  • kafka-to-postgresql-v2: Archives Kafka messages adhering to the Data Model V2 schema into the database.
  • TimescaleDB: An open-source SQL database specialized in time-series data storage.
  • Grafana: A software tool for data visualization and analytics.
  • factoryinsight: An analytics tool designed for data analysis, including calculating operational efficiency metrics like OEE.
  • grafana-datasource-v2: A Grafana plugin facilitating connection to factoryinsight.
  • Redis: Utilized as an in-memory data structure store for caching purposes.

This structure ensures that data from the Unified Namespace is systematically stored, processed, and made visually accessible, providing OT professionals with real-time insights and analytics on shop floor operations.

graph LR 2["`fa:fa-user **OT Professional / Shopfloor** Monitors and manages the shopfloor, including safety, automation and maintenance`"] style 2 fill:#dddddd,stroke:#9a9a9a,color:#000000 51["`**Unified Namespace** The central source of truth for all events and messages on the shop floor.`"] style 51 fill:#aaaaaa,stroke:#47a0b5,color:#000000 subgraph 64 [Historian] style 64 fill:#ffffff,stroke:#47a0b5,color:#47a0b5 65["`**kafka-to-postgresql-v2** Stores in the database the Kafka messages that follow the Data Model V2 schema`"] style 65 fill:#aaaaaa,stroke:#47a0b5,color:#000000 66["`**TimescaleDB** An open-source time-series SQL database`"] style 66 fill:#aaaaaa,stroke:#47a0b5,color:#000000 67["`**Grafana** Visualization and analytics software`"] style 67 fill:#aaaaaa,stroke:#47a0b5,color:#000000 68["`**factoryinsight** Analytics software that allows data analysis, like OEE`"] style 68 fill:#aaaaaa,stroke:#47a0b5,color:#000000 69["`**grafana-datasource-v2** Grafana plugin to easily connect to factoryinsight`"] style 69 fill:#aaaaaa,stroke:#47a0b5,color:#000000 70["`**Redis** In-memory data structure store used for caching`"] style 70 fill:#aaaaaa,stroke:#47a0b5,color:#000000 end 65-. Stores data .->66 51-. Stores data in a predefined schema via .->65 67-. Performs SQL queries .->66 67-. Includes .->69 69-. Extracts KPIs and other high-level metrics .->68 68-. Queries data .->66 68<-.->70 65<-.->70 2-. Visualize real-time dashboards .->67

6.1.3.1 - Cache

The cache in the United Manufacturing Hub is Redis, a key-value store that is used as a cache for the other microservices.

How it works

Recently used data is stored in the cache to reduce the load on the database. All the microservices that need to access the database will first check if the data is available in the cache. If it is, it will be used, otherwise the microservice will query the database and store the result in the cache.

By default, Redis is configured to run in standalone mode, which means that it will only have one master node.
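
For a quick look at what is currently cached, you can open a redis-cli shell inside the Redis pod. This is only a minimal sketch: the pod name below is an assumption based on the default Helm release, and Redis may require the password stored in its Secret (use AUTH inside redis-cli).

# Pod name is an assumption based on the default united-manufacturing-hub release.
sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml \
  exec -it united-manufacturing-hub-redis-master-0 -- redis-cli
# Inside redis-cli, INFO keyspace shows how many keys are currently cached,
# and TTL <key> shows how long a cached entry will live.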

What’s next

  • Read the Cache reference documentation to learn more about the technical details of the cache microservice.

6.1.3.2 - Database

The database microservice is the central component of the United Manufacturing Hub and is based on TimescaleDB, an open-source relational database built for handling time-series data. TimescaleDB is designed to provide scalable and efficient storage, processing, and analysis of time-series data.

You can find more information on the datamodel of the database in the Data Model section, and read about the choice to use TimescaleDB in the blog article.

How it works

When deployed, the database microservice will create two databases, with the related usernames and passwords:

  • grafana: This database is used by Grafana to store the dashboards and other data.
  • factoryinsight: This database is the main database of the United Manufacturing Hub. It contains all the data that is collected by the microservices.

Then, it creates the tables based on the database schema.
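
To verify this from the instance’s shell, you can list the databases directly inside the TimescaleDB pod. This is a minimal sketch, assuming the default pod and container names used elsewhere in this documentation:

# Lists all databases; you should see grafana and factoryinsight among them.
sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml \
  exec -it united-manufacturing-hub-timescaledb-0 -c timescaledb -- psql -U postgres -c "\l"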

If you want to learn more about how TimescaleDB works, you can read the TimescaleDB documentation.

What’s next

  • Read the Database reference documentation to learn more about the technical details of the database microservice.

6.1.3.3 - Factoryinsight

Factoryinsight is a microservice that provides a set of REST APIs to access the data from the database. It is particularly useful to calculate the Key Performance Indicators (KPIs) of the factories.

How it works

Factoryinsight exposes REST APIs to access the data from the database or calculate the KPIs. By default, it’s only accessible from the internal network of the cluster, but it can be configured to be accessible from the external network.

The APIs require authentication, which can be either Basic Auth or a Bearer token. Both of these can be found in the Secret factoryinsight-secret.
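
As an illustration, the snippet below reads the API key from the Secret and performs a request from within the cluster network. It is a sketch only: the endpoint placeholder must be replaced with an actual route from the reference documentation, and the service URL assumes the default Helm release name.

# Read the Basic Auth value from the factoryinsight-secret (it already has the form "Basic xxxxxxxx").
API_KEY=$(sudo $(which kubectl) get secret factoryinsight-secret -n united-manufacturing-hub \
  --kubeconfig /etc/rancher/k3s/k3s.yaml -o jsonpath='{.data.apiKey}' | base64 --decode)
# <endpoint> is a placeholder; see the Factoryinsight reference documentation for the available routes.
curl -H "Authorization: $API_KEY" "http://united-manufacturing-hub-factoryinsight-service/<endpoint>"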

What’s next

  • Read the Factoryinsight reference documentation to learn more about the technical details of the Factoryinsight microservice.

6.1.3.4 - Grafana

The grafana microservice is a web application that provides visualization and analytics capabilities. Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored.

It has a rich ecosystem of plugins that allow you to extend its functionality beyond the core features.

How it works

Grafana is a web application that can be accessed through a web browser. It lets you create dashboards that can be used to visualize data from the database.

Thanks to some custom datasource plugins, Grafana can use the various APIs of the United Manufacturing Hub to query the database and display useful information.

What’s next

  • Read the Grafana reference documentation to learn more about the technical details of the grafana microservice.

6.1.3.5 - Kafka to Postgresql V2

The Kafka to PostgreSQL v2 microservice plays a crucial role in consuming and translating Kafka messages for storage in a PostgreSQL database. It aligns with the specifications outlined in the Data Model v2.

How it works

Utilizing Data Model v2, Kafka to PostgreSQL v2 is specifically configured to process messages from topics beginning with umh.v1.. Each new topic undergoes validation against Data Model v2 before message consumption begins. This ensures adherence to the defined data structure and standards.

Message payloads are scrutinized for structural validity prior to database insertion. Messages with invalid payloads are systematically rejected to maintain data integrity.

The microservice then evaluates the payload to determine the appropriate table for insertion within the PostgreSQL database. The decision is based on the data type of the payload field, adhering to the following rules:

  • Numeric data types are directed to the tag table.
  • String data types are directed to the tag_string table.
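
As a hedged illustration of these rules, the message below would be picked up from a umh.v1. topic and, because the value is numeric, stored in the tag table; the same message carrying a string value would land in tag_string. The topic and payload are assumptions following the Data Model v2 historian convention (see the Data Model section for the exact schema), and the example assumes kcat is installed and the Kafka broker is reachable under the default service name.

# Produce a sample Data Model v2 message (assumes kcat and network access to the broker).
echo '{"timestamp_ms": 1700000000000, "temperature": 23.5}' | \
  kcat -b united-manufacturing-hub-kafka:9092 -t umh.v1.acme.plant1.line1._historian -P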

What’s next

  • Read the Kafka to Postgresql v2 reference documentation to learn more about the technical details of the Kafka to Postgresql v2 microservice.

6.1.3.6 - Umh Datasource V2

The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch resources from a database and build queries for your dashboard.

How it works

  1. When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.

    Select data source

  2. Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders for the future implementation of the new data model.

    Select workcell to query

  3. Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.

    Select value to query

  4. Next, you can specify how to transform the data, depending on the value you selected. All the custom tags have aggregation options available; for example, if you query a processValue:

    • Time bucket: lets you group data in a time bucket
    • Aggregates: common statistical aggregations (maximum, minimum, sum or count)
    • Handling missing values: lets you choose how missing data should be handled

    Transform data options

Configuration

  1. In Grafana, navigate to the Data sources configuration panel.

    Settings menu

  2. Select umh-v2-datasource to configure it.

    Configuration menu

  3. Configurations:

    • Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
    • Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
    • API Key: authenticates the API calls to factoryinsight. Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.

    Configure data source

6.2 - Device & Container Infrastructure

Understand the automated deployment and setup process in UMH’s Device & Container Infrastructure.

The Device & Container Infrastructure in the United Manufacturing Hub automates the deployment and setup of the data infrastructure in various environments. It is tailored for Edge deployments, particularly in Demilitarized Zones, to minimize latency on-premise, and also extends into the Cloud to harness its functionalities. It consists of several interconnected components:

  • Provisioning Server: Manages the initial bootstrapping of devices, including iPXE configuration and ignition file distribution.
  • Flatcar Image Server: A central repository hosting various versions of Flatcar Container Linux images, ensuring easy access and version control.
  • Customized iPXE: A specialized bootloader configured to streamline the initial boot process by fetching UMH-specific settings and configurations.
  • First and Second Stage Flatcar OS: A two-stage operating system setup where the first stage is a temporary OS used for installing the second stage, which is the final operating system equipped with specific configurations and tools.
  • Installation Script: An automated script hosted at management.umh.app, responsible for setting up and configuring the Kubernetes environment.
  • Kubernetes (k3s): A lightweight Kubernetes setup that forms the backbone of the container orchestration system.

This infrastructure ensures a streamlined, automated installation process, laying a robust foundation for the United Manufacturing Hub’s operation.

(Diagram: Device & Container Infrastructure boot flow. The customized iPXE requests its configuration from the Provisioning Server, downloads the specified Flatcar image from the Flatcar Image Server, and boots the first-stage Flatcar OS. The first stage fetches its ignition config from the Provisioning Server and installs the second-stage Flatcar OS, which acquires a token-specific ignition config, executes the Installation Script, and regularly checks the Flatcar Image Server for OS updates. The Installation Script installs Kubernetes (k3s), deploys the Data Infrastructure, and deploys the Management Companion for the Management Console.)

6.3 - Management Console

Delve into the functionalities and components of the UMH’s Management Console, ensuring efficient system management.

The Management Console is pivotal in configuring, managing, and monitoring the United Manufacturing Hub. It comprises a web application, a backend API and the management companion agent, all designed to ensure secure and efficient operation.

Management Console Architecture

Web Application

The client-side Web Application, available at management.umh.app, enables users to register, add, and manage instances, and monitor the infrastructure within the United Manufacturing Hub. All communications between the Web Application and the user’s devices are end-to-end encrypted, ensuring complete confidentiality from the backend.

Management Companion

Deployed on each UMH instance, the Management Companion acts as an agent responsible for decrypting messages coming from the user via the Backend and executing requested actions. Responses are end-to-end encrypted as well, maintaining a secure and opaque channel to the Backend.

Management Updater

The Updater is a custom Job run by the Management Companion, responsible for updating the Management Companion itself. Its purpose is to automate the process of upgrading the Management Companion to the latest version, reducing the administrative overhead of managing UMH instances.

Backend

The Backend is the public API for the Management Console. It functions as a bridge between the Web Application and the Management Companion. Its primary role is to verify user permissions for accessing UMH instances. Importantly, the backend does not have access to the contents of the messages exchanged between the Web Application and the Management Companion, ensuring that communication remains opaque and secure.

6.4 - Legacy

This section gives an overview of the legacy microservices that can be found in older versions of the United Manufacturing Hub.

This section provides a comprehensive overview of the legacy microservices within the United Manufacturing Hub. These microservices are currently in a transitional phase, being maintained and deployed alongside newer versions of UMH as we gradually shift from Data Model v1 to v2. While these legacy components are set to be deprecated in the future, they continue to play a crucial role in ensuring smooth operations and compatibility during this transition period.

6.4.1 - Factoryinput

This microservice is still in development and is not considered stable for production use

Factoryinput provides REST endpoints for MQTT messages via HTTP requests.

This microservice is typically accessed via grafana-proxy

How it works

The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.

The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, accessed with a POST request. The customer, location, asset and value path parameters are all strings and are used to build the MQTT topic. The body of the HTTP request is used as the MQTT payload.
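
A minimal sketch of such a request is shown below. The service URL, credentials, and payload are assumptions for illustration only; in practice the microservice is usually reached through grafana-proxy, and the required credentials are described in the reference documentation.

# Placeholders: replace <user> and <password> with the credentials from your installation.
curl -X POST -u '<user>:<password>' \
  -d '{"timestamp_ms": 1700000000000, "state": 10000}' \
  "http://united-manufacturing-hub-factoryinput-service/api/v1/factoryinsight/plant1/line1/state"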

What’s next

  • Read the Factoryinput reference documentation to learn more about the technical details of the Factoryinput microservice.

6.4.2 - Grafana Proxy

This microservice is still in development and is not considered stable for production use

How it works

The grafana-proxy microservice serves an HTTP REST endpoint located at /api/v1/{service}/{data}. The service parameter specifies the backend service to which the request should be proxied, like factoryinput or factoryinsight. The data parameter specifies the API endpoint to forward to the backend service. The body of the HTTP request is used as the payload for the proxied request.
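
For illustration, the URL pattern looks as follows; the service name and path segments are placeholders, so consult the reference documentation for the concrete routes of the backend services.

# Forwards the request body to the chosen backend service (here: factoryinput); <data> is a placeholder for its endpoint.
curl -X POST -d '<payload>' \
  "http://united-manufacturing-hub-grafanaproxy-service/api/v1/factoryinput/<data>"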

What’s next

  • Read the Grafana Proxy reference documentation to learn more about the technical details of the Grafana Proxy microservice.

6.4.3 - Kafka Bridge

Kafka-bridge is a microservice that connects two Kafka brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.

How it works

This microservice has two ways of operation:

  • High Integrity: This mode is used for topics that are critical for the user. It is guaranteed that no messages are lost. This is achieved by committing the message only after it has been successfully inserted into the database. Usually all the topics are forwarded in this mode, except for processValue, processValueString and raw messages.
  • High Throughput: This mode is used for topics that are not critical for the user. They are forwarded as fast as possible, but it is possible that messages are lost, for example if the database struggles to keep up. Usually only the processValue, processValueString and raw messages are forwarded in this mode.

What’s next

  • Read the Kafka Bridge reference documentation to learn more about the technical details of the Kafka Bridge microservice.

6.4.4 - Kafka State Detector

This microservice is still in development and is not considered stable for production use

How it works

What’s next

6.4.5 - Kafka to Postgresql

Kafka-to-postgresql is a microservice responsible for consuming kafka messages and inserting the payload into a Postgresql database. Take a look at the Datamodel to see how the data is structured.

This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.

How it works

By default, kafka-to-postgresql sets up two Kafka consumers, one for the High Integrity topics and one for the High Throughput topics.

The graphic below shows the program flow of the microservice.

Kafka-to-postgres-flow

High integrity

The High integrity topics are forwarded to the database in a synchronous way. This means that the microservice will wait for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is guaranteed to be inserted into the database, even though it might take a while.

Most of the topics are forwarded in this mode.

The picture below shows the program flow of the high integrity mode.

high-integrity-data-flow

High throughput

The High throughput topics are forwarded to the database in an asynchronous way. This means that the microservice will not wait for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is not guaranteed to be inserted into the database, but the microservice will try to insert it as soon as possible. This mode is used for the topics that are expected to have a high throughput.

The topics that are forwarded in this mode are processValue, processValueString and all the raw topics.

What’s next

  • Read the Kafka to Postgresql reference documentation to learn more about the technical details of the Kafka to Postgresql microservice.

6.4.6 - MQTT Bridge

MQTT-bridge is a microservice that connects two MQTT brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.

How it works

This microservice subscribes to topics on the local broker and publishes the messages to the remote broker, while also subscribing to topics on the remote broker and publishing the messages to the local broker.

What’s next

  • Read the MQTT Bridge reference documentation to learn more about the technical details of the MQTT Bridge microservice.

6.4.7 - MQTT Kafka Bridge

Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers and Kafka brokers, transferring messages from one to the other and vice versa.

This microservice requires that the Kafka Topic umh.v1.kafka.newTopic exists. This will happen automatically from version 0.9.12.

Since version 0.9.10, it allows all raw messages, even if their content is not in a valid JSON format.

How it works

Mqtt-kafka-bridge consumes topics from a message broker, translates them to the proper format and publishes them to the other message broker.

What’s next

  • Read the MQTT Kafka Bridge reference documentation to learn more about the technical details of the MQTT Kafka Bridge microservice.

6.4.8 - MQTT Simulator

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.

The IoTSensors MQTT Simulator is a microservice that simulates sensors sending data to the MQTT broker. You can read the full documentation on the GitHub repository.

How it works

The microservice publishes messages on the topic ia/raw/development/ioTSensors/, creating a subtopic for each simulation. The subtopics are the names of the simulations, which are Temperature, Humidity, and Pressure. The values are calculated using a normal distribution with a mean and standard deviation that can be configured.
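
To watch the simulated values, you can subscribe to the wildcard topic from any machine that can reach the broker. A minimal sketch, assuming the Mosquitto clients are installed and the broker is reachable under the default service name and port:

mosquitto_sub -h united-manufacturing-hub-mqtt -p 1883 -t 'ia/raw/development/ioTSensors/#' -v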

What’s next

  • Read the IoTSensors MQTT Simulator reference documentation to learn more about the technical details of the IoTSensors MQTT Simulator microservice.

6.4.9 - MQTT to Postgresql

If you landed here from Google, you probably want to check out either the architecture of the United Manufacturing Hub or our knowledge website for more information on the general topics of IT, OT and IIoT.

This microservice is deprecated and should not be used anymore in production. Please use kafka-to-postgresql instead.

How it works

The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves the values of the messages on the topic ia/# in the database.

What’s next

  • Read the MQTT to Postgresql reference documentation to learn more about the technical details of the MQTT to Postgresql microservice.

6.4.10 - OPCUA Simulator

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.

How it works

The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read the full documentation on the GitHub repository.

You can then connect to the simulated OPCUA server via Node-RED and read the values of the simulated devices. Learn more about how to connect to the OPCUA simulator to Node-RED in our guide.

What’s next

  • Read the OPCUA Simulator reference documentation to learn more about the technical details of the OPCUA Simulator microservice.

6.4.11 - PackML Simulator

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.

PackML MQTT Simulator is a virtual line that interfaces using PackML implemented over MQTT. It implements the following PackML State model and communicates over MQTT topics as defined by environmental variables. The simulator can run with either a basic MQTT topic structure or SparkPlugB.

PackML StateModel

How it works

You can read the full documentation on the GitHub repository.

What’s next

  • Read the PackML Simulator reference documentation to learn more about the technical details of the PackML Simulator microservice.

6.4.12 - Tulip Connector

This microservice is still in development and is not considered stable for production use.

The tulip-connector microservice enables communication with the United Manufacturing Hub by exposing internal APIs, like factoryinsight, to the internet. With this REST endpoint, users can access data stored in the UMH and seamlessly integrate Tulip with a Unified Namespace and on-premise Historian. Furthermore, the tulip-connector can be customized to meet specific customer requirements, including integration with an on-premise MES system.

How it works

The tulip-connector acts as a proxy between the internet and the UMH. It exposes an endpoint to forward requests to the UMH and returns the response.

What’s next

  • Read the Tulip Connector reference documentation to learn more about the technical details of the Tulip Connector microservice.

6.4.13 - Grafana Plugins

This section contains the overview of the custom Grafana plugins that can be used to access the United Manufacturing Hub.

6.4.13.1 - Umh Datasource

This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.

We are no longer maintaining this microservice. Use our new plugin umh-datasource-v2 instead for data extraction from factoryinsight.

The umh datasource is a Grafana 8.x-compatible plugin that allows you to fetch resources from a database and build queries for your dashboard.

How it works

  1. When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.

    selectingDatasource

  2. Select your query parameters Location, Asset and Value to build your query.

    selectingDatasource

Configuration

  1. In Grafana, navigate to the Data sources configuration panel.

    selectingConfiguration

  2. Select umh-datasource to configure it.

    selectingConfiguration

  3. Configurations:

    • Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
    • Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
    • API Key: authenticates the API calls to factoryinsight. Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.

    selectingConfiguration

6.4.13.2 - Factoryinput Panel

This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.

This plugin is still in development and is not considered stable for production use

Requirements

  • A United Manufacturing Hub stack
  • External IP or URL to the grafana-proxy
    • In most cases it is the same IP address as your Grafana dashboard.

Getting started

For development, the steps to build the plugin from source are described here.

  1. Go to united-manufacturing-hub/grafana-plugins/umh-factoryinput-panel
  2. Install dependencies.
yarn install
  3. Build the plugin in development mode or run it in watch mode.
yarn dev
  4. Build the plugin in production mode (not recommended due to Issue 32336).
yarn build
  5. Move the resulting dist folder into your Grafana plugins directory.
  • Windows: C:\Program Files\GrafanaLabs\grafana\data\plugins
  • Linux: /var/lib/grafana/plugins
  6. Rename the folder to umh-factoryinput-panel.

  7. Enable development mode to load unsigned plugins.

  8. Restart your Grafana service.

Technical Information

Below you will find a schematic of this flow through our stack.

7 - Production Guide

This section contains information about how to use the stack in a production environment.

7.1 - Installation

This section contains guides on how to install the United Manufacturing Hub.

Learn how to install the United Manufacturing Hub using completely Free and Open Source Software.

7.1.1 - Flatcar Installation

This page describes how to deploy the United Manufacturing Hub on Flatcar Linux.

Here is a step-by-step guide on how to deploy the United Manufacturing Hub on Flatcar Linux, a Linux distribution designed for container workloads with high security and low maintenance. This will leverage the UMH Device and Container Infrastructure.

The system can be installed either bare metal or in a virtual machine.

Before you begin

Ensure your system meets these minimum requirements:

  • 4-core CPU
  • 8 GB system RAM
  • 32 GB available disk space
  • Internet access

You will also need the latest version of the iPXE boot image, suitable for your system:

For bare metal installations, flash the image to a USB stick with at least 4 GB of storage. Our guide on flashing an operating system to a USB stick can assist you.

For virtual machines, ensure UEFI boot is enabled when creating the VM.

Lastly, ensure you are on the same network as the device for SSH access post-installation.

System Preparation and Booting from iPXE

Identify the drive for Flatcar Linux installation. For virtual machines, this is typically sda. For bare metal, the drive depends on your physical storage. The troubleshooting section can help identify the correct drive.

Boot your device from the iPXE image. Consult your device or hypervisor documentation for booting instructions.

You can find a comprehensive guide on how to configure a virtual machine in Proxmox for installing Flatcar Linux on the Learning Hub.

Installation

At the first prompt, read and accept the license to proceed.

Read and Accept the License

Next, configure your network settings. Select DHCP if uncertain.

Network Settings

The connection will be tested next. If it fails, revisit the network settings.

Ensure your device has internet access and no firewalls are blocking the connection.

Then, select the drive for Flatcar Linux installation.

Select the Drive

A summary of the installation will appear. Check that everything is correct and confirm to start the process.

Summary

Shortly after, you’ll see a green command line core@flatcar-0-install. Remove the USB stick or the CD drive from the VM. The system will continue processing.

Flatcar Install Step 0

The installation will complete after a few minutes, and the system will reboot.

When you see the green core@flatcar-1-umh login prompt, the installation is complete, and the device’s IP address will be displayed.

Installation time varies based on network speed and system performance.

Connect to the Device

With the system installed, access it via SSH.

To do so, open your terminal of choice. For Windows 11 users, the default Windows Terminal is recommended; on other operating systems, try MobaXTerm.

Connect to the device using this command, substituting <ip-address> with your device’s IP address:

ssh core@<ip-address>

When prompted, enter the default password for the core user: umh.

Troubleshooting

The Installation Stops at the First Green Login Prompt

If the installation halts at the first green login prompt, check the installation status with:

systemctl status installer

A typical response for an ongoing installation will look like this:

● installer.service - Flatcar Linux Installer
     Loaded: loaded (/usr/lib/systemd/system/installer.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2021-05-12 14:00:00 UTC; 1min 30s ago

If the status differs, the installation may have failed. Review the logs to identify the issue.
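
To review those logs, a minimal sketch using the same installer unit shown above:

journalctl -u installer --no-pager | tail -n 50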

Unsure Which Drive to Select

To determine the correct drive, refer to your device’s manual:

  • SATA drives (HDD or SSD): Typically labeled as sda.
  • NVMe drives: Usually labeled as nvme0n1.

For further verification, boot any Linux distribution on your device and execute:

lsblk

The output, resembling the following, will help identify the drive:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0   512M  0 part /boot
└─sda2   8:2    0 223.1G  0 part /
sdb      8:16   0  31.8G  0 disk
└─sdb1   8:17   0  31.8G  0 part /mnt/usb

In most cases, the correct drive is the first listed or the one not matching the USB stick size.

No Resources in the Cluster

If you can access the cluster but see no resources, SSH into the edge device and check the cluster status:

systemctl status k3s

If the status is not active (running), the cluster isn’t operational. Restart it with:

sudo systemctl restart k3s

If the cluster is active or restarting doesn’t resolve the issue, inspect the installation logs:

systemctl status umh-install
systemctl status helm-install

Persistent errors may necessitate a system reinstallation.

I can’t SSH into the virtual machine

Ensure that your computer is on the same network as the virtual machine, with no firewalls or VPNs blocking the connection.

What’s next

  • You can follow the Getting Started guide to get familiar with the UMH stack.
  • If you already know your way around the United Manufacturing Hub, you can follow the Administration guides to configure the stack for production.

7.2 - Upgrading

This section contains all upgrade guides, from the Companion of the Management Console to the UMH stack.

7.2.1 - Upgrade to v0.13.6

This page describes how to upgrade the United Manufacturing Hub to version 0.13.6

This page describes how to upgrade the United Manufacturing Hub to version 0.13.6. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Upgrade Helm Chart

Upgrade the Helm chart to the 0.13.6 version:

bash <(curl -s https://management.umh.app/binaries/umh/migrations/0_13_6.sh)

Troubleshooting

If for some reason the upgrade fails, you can delete the deployments and statefulsets and try again. This will not delete your data.

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete deployment \
united-manufacturing-hub-factoryinsight-deployment \
united-manufacturing-hub-iotsensorsmqtt \
united-manufacturing-hub-opcuasimulator-deployment \
united-manufacturing-hub-packmlmqttsimulator \
united-manufacturing-hub-mqttkafkabridge \
united-manufacturing-hub-kafkatopostgresqlv2 \
united-manufacturing-hub-kafkatopostgresql \
united-manufacturing-hub-grafana \
united-manufacturing-hub-databridge-0 \
united-manufacturing-hub-console

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete statefulset \
united-manufacturing-hub-hivemqce \
united-manufacturing-hub-kafka \
united-manufacturing-hub-nodered \
united-manufacturing-hub-sensorconnect \
united-manufacturing-hub-mqttbridge \
united-manufacturing-hub-timescaledb \
united-manufacturing-hub-redis-master

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete jobs \
united-manufacturing-hub-kafka-configuration

7.2.2 - Upgrade to v0.10.6

This page describes how to upgrade the United Manufacturing Hub to version 0.10.6

This page describes how to upgrade the United Manufacturing Hub to version 0.10.6. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

All the following commands are to be run from the UMH instance’s shell.

Update Helm Repo

Fetch the latest Helm charts from the UMH repository:

sudo $(which helm) repo update --kubeconfig /etc/rancher/k3s/k3s.yaml

Upgrade Helm Chart

Upgrade the Helm chart to the 0.10.6 version:

sudo $(which helm) upgrade united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub --version 0.10.6 --reuse-values --kubeconfig /etc/rancher/k3s/k3s.yaml \
--set _000_commonConfig.infrastructure.mqtt.tls.factoryinput=null \
--set _000_commonConfig.datainput=null \
--set _000_commonConfig.mqttBridge=null \
--set mqttbridge=null \
--set factoryinput=null \
--set grafanaproxy=null \
--set kafkastatedetector.image.repository=management.umh.app/oci/united-manufacturing-hub/kafkastatedetector \
--set barcodereader.image.repository=management.umh.app/oci/united-manufacturing-hub/barcodereader \
--set sensorconnect.image=management.umh.app/oci/united-manufacturing-hub/sensorconnect \
--set iotsensorsmqtt.image=management.umh.app/oci/amineamaach/sensors-mqtt \
--set opcuasimulator.image=management.umh.app/oci/united-manufacturing-hub/opcuasimulator \
--set kafkabridge.image.repository=management.umh.app/oci/united-manufacturing-hub/kafka-bridge \
--set kafkabridge.initContainer.repository=management.umh.app/oci/united-manufacturing-hub/kafka-init \
--set factoryinsight.image.repository=management.umh.app/oci/united-manufacturing-hub/factoryinsight \
--set kafkatopostgresql.image.repository=management.umh.app/oci/united-manufacturing-hub/kafka-to-postgresql \
--set kafkatopostgresql.initContainer.repository=management.umh.app/oci/united-manufacturing-hub/kafka-init \
--set timescaledb-single.image.repository=management.umh.app/oci/timescale/timescaledb-ha \
--set timescaledb-single.prometheus.image.repository=management.umh.app/oci/prometheuscommunity/postgres-exporter \
--set grafana.image.repository=management.umh.app/oci/grafana/grafana \
--set grafana.downloadDashboardsImage.repository=management.umh.app/oci/curlimages/curl \
--set grafana.testFramework.image=management.umh.app/oci/bats/bats \
--set grafana.initChownData.image.repository=management.umh.app/oci/library/busybox \
--set grafana.sidecar.image.repository=management.umh.app/oci/kiwigrid/k8s-sidecar \
--set grafana.imageRenderer.image.repository=management.umh.app/oci/grafana/grafana-image-renderer \
--set packmlmqttsimulator.image.repository=management.umh.app/oci/spruiktec/packml-simulator \
--set tulipconnector.image.repository=management.umh.app/oci/united-manufacturing-hub/tulip-connector \
--set mqttkafkabridge.image.repository=management.umh.app/oci/united-manufacturing-hub/mqtt-kafka-bridge \
--set mqttkafkabridge.initContainer.repository=management.umh.app/oci/united-manufacturing-hub/kafka-init \
--set kafkatoblob.image.repository=management.umh.app/oci/united-manufacturing-hub/kafka-to-blob \
--set redpanda.image.repository=management.umh.app/oci/redpandadata/redpanda \
--set redpanda.statefulset.initContainerImage.repository=management.umh.app/oci/library/busybox \
--set redpanda.console.image.registry=management.umh.app/oci \
--set redis.image.registry=management.umh.app/oci \
--set redis.metrics.image.registry=management.umh.app/oci \
--set redis.sentinel.image.registry=management.umh.app/oci \
--set redis.volumePermissions.image.registry=management.umh.app/oci \
--set redis.sysctl.image.registry=management.umh.app/oci \
--set mqtt_broker.image.repository=management.umh.app/oci/hivemq/hivemq-ce \
--set mqtt_broker.initContainer.hivemqextensioninit.image.repository=management.umh.app/oci/united-manufacturing-hub/hivemq-init \
--set metrics.image.repository=management.umh.app/oci/united-manufacturing-hub/metrics \
--set databridge.image.repository=management.umh.app/oci/united-manufacturing-hub/databridge \
--set kafkatopostgresqlv2.image.repository=management.umh.app/oci/united-manufacturing-hub/kafka-to-postgresql-v2

Manual steps (optional)

Due to a limitation of Helm, we cannot automatically set grafana.env.GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=umh-datasource,umh-v2-datasource. You could either ignore this (if your network is not restricted to a single domain) or set it manually in the Grafana deployment.

We are also not able to automatically overwrite grafana.extraInitContainers[0].image=management.umh.app/oci/united-manufacturing-hub/grafana-umh. You could either ignore this (if your network is not restricted to a single domain) or set it manually in the Grafana deployment.
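
One hedged way to set the environment variable manually is shown below; the deployment name assumes the default Helm release, and the change may be reverted by a later Helm upgrade.

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml \
  set env deployment/united-manufacturing-hub-grafana \
  GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=umh-datasource,umh-v2-datasource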

Host system

Open the /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl using vi as root and add the following lines:

version = 2

[plugins."io.containerd.internal.v1.opt"]
path = "/var/lib/rancher/k3s/agent/containerd"
[plugins."io.containerd.grpc.v1.cri"]
stream_server_address = "127.0.0.1"
stream_server_port = "10010"
enable_selinux = false
enable_unprivileged_ports = true
enable_unprivileged_icmp = true
sandbox_image = "management.umh.app/v2/rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "overlayfs"
disable_snapshot_annotations = true


[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/var/lib/rancher/k3s/data/ab2055bc72380bad965b219e8688ac02b2e1b665cad6bdde1f8f087637aa81df/bin"
conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"


[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]

# Mirror configuration for Docker Hub with fallback
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://management.umh.app/oci", "https://registry-1.docker.io"]

# Mirror configuration for GitHub Container Registry with fallback
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
endpoint = ["https://management.umh.app/oci", "https://ghcr.io"]

# Mirror configuration for Quay with fallback
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
endpoint = ["https://management.umh.app/oci", "https://quay.io"]

# Catch-all configuration for any other registries
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."*"]
endpoint = ["https://management.umh.app/oci"]

Open /etc/flatcar/update.conf using vi as root and add the following lines:

GROUP=stable
SERVER=https://management.umh.app/nebraska/

Restart k3s or reboot the host system:

sudo systemctl restart k3s

Troubleshooting

If for some reason the upgrade fails, you can delete the deployments and statefulsets and try again. This will not delete your data.

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete deployment \
united-manufacturing-hub-factoryinsight-deployment \
united-manufacturing-hub-iotsensorsmqtt \
united-manufacturing-hub-opcuasimulator-deployment \
united-manufacturing-hub-packmlmqttsimulator \
united-manufacturing-hub-mqttkafkabridge \
united-manufacturing-hub-kafkatopostgresqlv2 \
united-manufacturing-hub-kafkatopostgresql \
united-manufacturing-hub-grafana \
united-manufacturing-hub-databridge-0 \
united-manufacturing-hub-console

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete statefulset \
united-manufacturing-hub-hivemqce \
united-manufacturing-hub-kafka \
united-manufacturing-hub-nodered \
united-manufacturing-hub-sensorconnect \
united-manufacturing-hub-mqttbridge \
united-manufacturing-hub-timescaledb \
united-manufacturing-hub-redis-master

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete jobs \
united-manufacturing-hub-kafka-configuration

7.2.3 - Management Console Upgrades

This page describes how to perform the upgrades that are available for the Management Console.

Easily upgrade your UMH instance with the Management Console. This page offers clear, step-by-step instructions for a smooth upgrade process.

Before you begin

Before proceeding with the upgrade of the Companion, ensure that you have the following:

  • A functioning UMH instance, verified as “online” and in good health.
  • A reliable internet connection.
  • Familiarity with the changelog of the new version you are upgrading to, especially to identify any breaking changes or required manual interventions.

Management Companion

Upgrade your UMH instance seamlessly using the Management Console. Follow these steps:

Identify Outdated Instance

From the Overview tab, check for an upgrade icon next to your instance’s name, signaling an outdated Companion version. Additionally, locate the Upgrade Companion button at the bottom of the tab.

Outdated Instance Overview

Start the Upgrade

When you’re prepared to upgrade your UMH instance, start by pressing the Upgrade Companion button. This will open a modal, initially displaying a changelog with a quick overview of the latest changes. You can expand the changelog for a detailed view from your current version up to the latest one. Additionally, it may highlight any warnings requiring manual intervention.

Navigate through the changelog, and when comfortable, proceed by clicking the Next button. This step grants you access to crucial information about recommended actions and precautions during the upgrade process.

With the necessary insights, take the next step by clicking the Upgrade button. The system will guide you through the upgrade process, displaying real-time progress updates, including a progress bar and logs.

Upon successful completion, a confirmation message will appear. Simply click the Let’s Go button to return to the dashboard, where you can seamlessly continue using your UMH instance with the latest enhancements.

Upgrade Success

United Manufacturing Hub

As of now, the upgrade of the UMH is not yet included in the Management Console, meaning that it has to be performed manually. However, it is planned to be included in the future. Until then, you can follow the instructions in the What’s New page.

Troubleshooting

I encountered an issue during the upgrade process. What should I do?

If you encounter issues during the upgrade process, consider the following steps:

  1. Retry the Process: Sometimes, a transient issue may cause a hiccup. Retry the upgrade process to ensure it’s not a temporary glitch.

  2. Check Logs: Review the logs displayed during the upgrade process for any error messages or indications of what might be causing the problem. This information can offer insights into potential issues.
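
A minimal sketch for pulling the Companion logs from the instance’s shell; the workload name is an assumption, so adjust it to whatever kubectl get pods -n mgmtcompanion shows on your system:

sudo $(which kubectl) logs statefulset/mgmtcompanion --kubeconfig /etc/rancher/k3s/k3s.yaml -n mgmtcompanion --tail 100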

If the problem persists after retrying and checking the logs, and you’ve confirmed that all prerequisites are met, please reach out to our support team for assistance.

I installed the Management Companion before the 0.1.0 release. How do I upgrade it?

If you installed the Management Companion before the 0.1.0 release, you will need to reinstall it. This is because we made some changes that are not compatible with the previous version.

Before reinstalling the Management Companion, you have to backup your configuration, so that you can restore your connections after the upgrade. To do so, follow these steps:

  1. Access your UMH instance via SSH.

  2. Run the following command to backup your configuration:

    sudo $(which kubectl) get configmap/mgmtcompanion-config --kubeconfig /etc/rancher/k3s/k3s.yaml -n mgmtcompanion -o=jsonpath='{.data}' | sed -e 's/^/{"data":/' | sed -e 's/$/}/'> mgmtcompanion-config.bak.json
    

    This will create a file called mgmtcompanion-config.bak.json in your current directory.

  3. For good measure, copy the file to your local machine:

    scp <user>@<ip>:/home/<user>/mgmtcompanion-config.bak.json .
    

    Replace <user> with your username, and <ip> with the IP address of your UMH instance. You will be prompted for your password.

  4. Now you can reinstall the Management Companion. Follow the instructions in the Installation guide. Your data will be preserved, and you will be able to restore your connections.

  5. After the installation is complete, you can restore your connections by running the following command:

    sudo $(which kubectl) patch configmap/mgmtcompanion-config --kubeconfig /etc/rancher/k3s/k3s.yaml -n mgmtcompanion --patch-file mgmtcompanion-config.bak.json
    

7.2.4 - Migrate to Data Model V1

This page describes how to migrate your existing instances from the old Data Model to the new Data Model V1.

In this guide, you will learn how to migrate your existing instances from the old Data Model to the new Data Model V1.

The old Data Model will continue to work, and all the data will be still available.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Upgrade Your Companion to the Latest Version

If you haven’t already, upgrade your Companion to the latest version. You can easily do this from the Management Console by selecting your Instance and clicking on the “Upgrade” button.

Upgrade the Helm Chart

The new Data Model was introduced in the 0.10 release of the Helm Chart. To upgrade to the latest 0.10 release, you first need to update the Helm Chart to the latest 0.9 release and then upgrade to the latest 0.10 release.

There is no automatic way (yet!) to upgrade the Helm Chart, so you need to follow the manual steps below.

First, after accessing your instance, find the Helm Chart version you are currently using by running the following command:

sudo $(which helm) get metadata united-manufacturing-hub -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml | grep -e ^VERSION

Then, head to the upgrading archive and follow the instructions to upgrade from your current version to the latest version, one version at a time.

7.2.5 - Archive

This section is meant to archive the upgrading guides for the different versions of the United Manufacturing Hub.

The United Manufacturing Hub is a continuously evolving product. This means that new features and bug fixes are added to the product on a regular basis. This section contains the upgrading guides for the different versions of the United Manufacturing Hub.

The upgrading process is done by upgrading the Helm chart.

7.2.5.1 - Upgrade to v0.9.34

This page describes how to upgrade the United Manufacturing Hub to version 0.9.34

This page describes how to upgrade the United Manufacturing Hub to version 0.9.34. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

All the following commands are to be run from the UMH instance’s shell.

Update Helm Repo

Fetch the latest Helm charts from the UMH repository:

sudo $(which helm) repo update --kubeconfig /etc/rancher/k3s/k3s.yaml

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime.

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete deployment united-manufacturing-hub-factoryinsight-deployment united-manufacturing-hub-iotsensorsmqtt united-manufacturing-hub-opcuasimulator-deployment
sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml delete statefulset united-manufacturing-hub-hivemqce united-manufacturing-hub-kafka united-manufacturing-hub-nodered united-manufacturing-hub-sensorconnect united-manufacturing-hub-mqttbridge

Upgrade Helm Chart

Upgrade the Helm chart to the 0.9.34 version:

sudo helm upgrade united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub --version 0.9.34 --reuse-values --kubeconfig /etc/rancher/k3s/k3s.yaml \
--set kafkatopostgresqlv2.enabled=false \
--set kafkatopostgresqlv2.image.repository=ghcr.io/united-manufacturing-hub/kafka-to-postgresql-v2 \
--set kafkatopostgresqlv2.image.pullPolicy=IfNotPresent \
--set kafkatopostgresqlv2.replicas=1 \
--set kafkatopostgresqlv2.resources.limits.cpu=1000m \
--set kafkatopostgresqlv2.resources.limits.memory=300Mi \
--set kafkatopostgresqlv2.resources.requests.cpu=100m \
--set kafkatopostgresqlv2.resources.requests.memory=150Mi \
--set kafkatopostgresqlv2.probes.startup.failureThreshold=30 \
--set kafkatopostgresqlv2.probes.startup.initialDelaySeconds=10 \
--set kafkatopostgresqlv2.probes.startup.periodSeconds=10 \
--set kafkatopostgresqlv2.probes.liveness.periodSeconds=5 \
--set kafkatopostgresqlv2.probes.readiness.periodSeconds=5 \
--set kafkatopostgresqlv2.logging.level=PRODUCTION \
--set kafkatopostgresqlv2.asset.cache.lru.size=1000 \
--set kafkatopostgresqlv2.workers.channel.size=10000 \
--set kafkatopostgresqlv2.workers.goroutines.multiplier=16 \
--set kafkatopostgresqlv2.database.user=kafkatopostgresqlv2 \
--set kafkatopostgresqlv2.database.password=changemetoo \
--set _000_commonConfig.datamodel_v2.enabled=true \
--set _000_commonConfig.datamodel_v2.bridges[0].mode=mqtt-kafka \
--set _000_commonConfig.datamodel_v2.bridges[0].brokerA=united-manufacturing-hub-mqtt:1883 \
--set _000_commonConfig.datamodel_v2.bridges[0].brokerB=united-manufacturing-hub-kafka:9092 \
--set _000_commonConfig.datamodel_v2.bridges[0].topic=umh.v1..* \
--set _000_commonConfig.datamodel_v2.bridges[0].topicMergePoint=5 \
--set _000_commonConfig.datamodel_v2.bridges[0].partitions=6 \
--set _000_commonConfig.datamodel_v2.bridges[0].replicationFactor=1 \
--set _000_commonConfig.datamodel_v2.database.name=umh_v2 \
--set _000_commonConfig.datamodel_v2.database.host=united-manufacturing-hub \
--set _000_commonConfig.datamodel_v2.grafana.dbreader=grafanareader \
--set _000_commonConfig.datamodel_v2.grafana.dbpassword=changeme

Update Database

There have been some changes to the database, which need to be applied. This process does not delete any data.

sudo $(which kubectl) -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml exec -it united-manufacturing-hub-timescaledb-0 -c timescaledb -- sh -c ". /etc/timescaledb/post_init.d/0_create_dbs.sh; . /etc/timescaledb/post_init.d/1_set_passwords.sh"

Restart kafka-to-postgresql-v2

sudo $(which kubectl) rollout restart deployment united-manufacturing-hub-kafkatopostgresqlv2  -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml

7.2.5.2 - Upgrade to v0.9.15

This page describes how to upgrade the United Manufacturing Hub to version 0.9.15

This page describes how to upgrade the United Manufacturing Hub to version 0.9.15. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-opcuasimulator-deployment
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-grafanaproxy
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-kafka
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect
    • united-manufacturing-hub-mqttbridge
  4. Open the Network tab.
  5. From the Services section, delete the following services:
    • united-manufacturing-hub-kafka

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed. If you want to activate the new databridge, you need to add and edit the following section:

    _000_commonConfig:
       ...
       datamodel_v2:
         enabled: true
         bridges:
           - mode: mqtt-kafka
             brokerA: united-manufacturing-hub-mqtt:1883 # The flow is always from A->B; for omni-directional flow, set up a second bridge with the brokers reversed
             brokerB: united-manufacturing-hub-kafka:9092
             topic: umh.v1..*              # accepts MQTT or Kafka topic format; after the topic separator, use # for the MQTT wildcard or .* for the Kafka wildcard
             topicMergePoint: 5            # splits topics into topic and key (only in Kafka), preventing an excessive number of topics
             partitions: 6                 # optional: number of partitions for the new Kafka topic. default: 6
             replicationFactor: 1          # optional: replication factor for the new Kafka topic. default: 1
       ...
    

    You can also enable the new container registry by changing the values in the image or image.repository fields from unitedmanufacturinghub/<image-name> to ghcr.io/united-manufacturing-hub/<image-name>.

  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.
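If you prefer working from the command line instead of UMHLens / OpenLens, the same upgrade can be driven with Helm directly. The following is only a sketch: it assumes the repository was added under the alias united-manufacturing-hub (the same chart reference used elsewhere in this documentation) and that you keep your value overrides, such as the datamodel_v2 section above, in a local file named custom-values.yaml (an assumed file name).

# Sketch only: apply the upgrade with your own overrides file (custom-values.yaml is an assumed name)
sudo $(which helm) upgrade united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub -f custom-values.yaml --reuse-values --version 0.9.15 --kubeconfig /etc/rancher/k3s/k3s.yaml

# Afterwards, the release status should report "deployed"
sudo $(which helm) status united-manufacturing-hub -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml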

7.2.5.3 - Upgrade to v0.9.14

This page describes how to upgrade the United Manufacturing Hub to version 0.9.14

This page describes how to upgrade the United Manufacturing Hub to version 0.9.14. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-opcuasimulator-deployment
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-grafanaproxy
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-kafka
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect
    • united-manufacturing-hub-mqttbridge
  4. Open the Network tab.
  5. From the Services section, delete the following services:
    • united-manufacturing-hub-kafka

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed. For example, if you want to apply the new tweaks to the resources in order to avoid the Out Of Memory crash of the MQTT Broker, you can change the following values:

    iotsensorsmqtt:
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
        limits:
          cpu: 30m
          memory: 50Mi
    grafanaproxy:
      resources:
        requests:
          cpu: 100m
        limits:
          cpu: 300m
    kafkatopostgresql:
      resources:
        requests:
          memory: 150Mi
        limits:
          memory: 300Mi
    opcuasimulator:
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
        limits:
          cpu: 30m
          memory: 50Mi
    packmlmqttsimulator:
      resources:
        requests:
          cpu: 10m
          memory: 20Mi
        limits:
          cpu: 30m
          memory: 50Mi
    tulipconnector:
      resources:
        limits:
          cpu: 30m
          memory: 50Mi
        requests:
          cpu: 10m
          memory: 20Mi
    redis:
      master:
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
    mqtt_broker:
      resources:
        limits:
          cpu: 700m
          memory: 1700Mi
        requests:
          cpu: 300m
          memory: 1000Mi
    

    You can also enable the new container registry by changing the values in the image or image.repository fields from unitedmanufacturinghub/<image-name> to ghcr.io/united-manufacturing-hub/<image-name>.

  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.4 - Upgrade to v0.9.13

This page describes how to upgrade the United Manufacturing Hub to version 0.9.13

This page describes how to upgrade the United Manufacturing Hub to version 0.9.13. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.5 - Upgrade to v0.9.12

This page describes how to upgrade the United Manufacturing Hub to version 0.9.12

This page describes how to upgrade the United Manufacturing Hub to version 0.9.12. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Backup RBAC configuration for MQTT Broker

This step is only needed if you enabled RBAC for the MQTT Broker and changed the default password. If you did not change the default password, you can skip this step.

  1. Navigate to Config > ConfigMaps.
  2. Select the united-manufacturing-hub-hivemqce-extension ConfigMap.
  3. Copy the content of credentials.xml and save it in a safe place.
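If you prefer the command line, the same backup can be taken with kubectl. This is a sketch only and assumes that the credentials file is stored under the credentials.xml key of that ConfigMap:

# Sketch: dump the credentials.xml key of the ConfigMap into a local backup file
sudo $(which kubectl) get configmap united-manufacturing-hub-hivemqce-extension -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -o jsonpath='{.data.credentials\.xml}' > credentials-backup.xml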

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Remove MQTT Broker extension PVC

In this version we reduced the size of the MQTT Broker extension PVC. To do so, we need to delete the old PVC and create a new one. This process will set the credentials of the MQTT Broker to the default ones. If you changed the default password, you can restore them after the upgrade.

  1. Navigate to Storage > Persistent Volume Claims.
  2. Select the united-manufacturing-hub-hivemqce-claim-extensions PVC and click Delete.
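Alternatively, the PVC can be removed from the command line; a minimal sketch, using the same kubeconfig path as the other commands in this guide:

sudo $(which kubectl) delete pvc united-manufacturing-hub-hivemqce-claim-extensions -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml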

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. There are some incompatible changes in this version. To avoid errors, you need to change the following values:

    • Remove property console.console.config.kafka.tls.passphrase:

      console:
        console:
          config:
            kafka:
              tls:
                passphrase: "" # <- remove this line
      
    • console.extraContainers: remove the property and its content.

      console:
        extraContainers: {} # <- remove this line
      
    • console.extraEnv: remove the property and its content.

      console:
        extraEnv: "" # <- remove this line
      
    • console.extraEnvFrom: remove the property and its content.

      console:
        extraEnvFrom: ""  # <- remove this line
      
    • console.extraVolumeMounts: remove the |- characters right after the property name. It should look like this:

      console:
        extraVolumeMounts: # <- remove the `|-` characters in this line
          - name: united-manufacturing-hub-kowl-certificates
            mountPath: /SSL_certs/kafka
            readOnly: true
      
    • console.extraVolumes: remove the |- characters right after the property name. It should look like this:

      console:
        extraVolumes: # <- remove the `|-` characters in this line
          - name: united-manufacturing-hub-kowl-certificates
            secret:
              secretName: united-manufacturing-hub-kowl-secrets
      
    • Change the console.service property to the following:

      console:
        service:
          type: LoadBalancer
          port: 8090
          targetPort: 8080
      
    • Change the Redis URI in factoryinsight.redis:

      factoryinsight:
        redis:
          URI: united-manufacturing-hub-redis-headless:6379
      
    • Set the following values in the kafka section to true, or add them if they are missing:

      kafka:
        externalAccess:
          autoDiscovery:
            enabled: true
          enabled: true
        rbac:
          create: true
      
    • Change redis.architecture to standalone:

      redis:
        architecture: standalone
      
    • redis.sentinel: remove the property and its content.

      redis:
        sentinel: {} # <- remove all the content of this section
      
    • Remove the property redis.master.command:

      redis:
        master:
          command: /run.sh # <- remove this line
      
    • timescaledb-single.fullWalPrevention: remove the property and its content.

      timescaledb-single:
        fullWalPrevention:              # <- remove this line
          checkFrequency: 30            # <- remove this line
          enabled: false                # <- remove this line
          thresholds:                   # <- remove this line
            readOnlyFreeMB: 64          # <- remove this line
            readOnlyFreePercent: 5      # <- remove this line
            readWriteFreeMB: 128        # <- remove this line
            readWriteFreePercent: 8     # <- remove this line
      
    • timescaledb-single.loadBalancer: remove the property and its content.

      timescaledb-single:
        loadBalancer:          # <- remove this line
          annotations:         # <- remove this line
            service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
          enabled: true        # <- remove this line
          port: 5432           # <- remove this line
      
    • timescaledb-single.replicaLoadBalancer: remove the property and its content.

      timescaledb-single:
        replicaLoadBalancer:   # <- remove this line
          annotations:         # <- remove this line
            service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000" # <- remove this line
          enabled: false       # <- remove this line
          port: 5432           # <- remove this line
      
    • timescaledb-single.secretNames: remove the property and its content.

      timescaledb-single:
        secretNames: {} # <- remove this line 
      
    • timescaledb-single.unsafe: remove the property and its content.

      timescaledb-single:
        unsafe: false # <- remove this line
      
    • Change the value of the timescaledb-single.service.primary.type property to LoadBalancer:

      timescaledb-single:
        service:
          primary:
            type: LoadBalancer
      
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.6 - Upgrade to v0.9.11

This page describes how to upgrade the United Manufacturing Hub to version 0.9.11

This page describes how to upgrade the United Manufacturing Hub to version 0.9.11. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.7 - Upgrade to v0.9.10

This page describes how to upgrade the United Manufacturing Hub to version 0.9.10

This page describes how to upgrade the United Manufacturing Hub to version 0.9.10. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Grafana plugins

In this release, the Grafana version has been updated from 8.5.9 to 9.3.1. Check the release notes for further information about the changes.

Additionally, the way default plugins are installed has changed. Unfortunately, it is necessary to manually reinstall all the plugins that were previously installed.

If you didn’t install any plugin other than the default ones, you can skip this section.

Follow these steps to see the list of plugins installed in your cluster:

  1. Open the browser and go to the Grafana dashboard.

  2. Navigate to the Configuration > Plugins tab.

  3. Select the Installed filter.

    Show installed grafana plugins

  4. Write down all the plugins that you manually installed. You can recognize them by not having the Core tag.

    Image of core and signed plugins

    The following ones are installed by default, therefore you can skip them:

    • ACE.SVG by Andrew Rodgers
    • Button Panel by UMH Systems Gmbh
    • Button Panel by CloudSpout LLC
    • Discrete by Natel Energy
    • Dynamic Text by Marcus Olsson
    • FlowCharting by agent
    • Pareto Chart by isaozler
    • Pie Chart (old) by Grafana Labs
    • Timepicker Buttons Panel by williamvenner
    • UMH Datasource by UMH Systems Gmbh
    • Untimely by factry
    • Worldmap Panel by Grafana Labs

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-grafana
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed.

    • In the grafana section, find the extraInitContainers field and change its value to the following:

          - image: unitedmanufacturinghub/grafana-umh:1.1.2
            name: init-plugins
            imagePullPolicy: IfNotPresent
            command: ['sh', '-c', 'cp -r /plugins /var/lib/grafana/']
            volumeMounts:
              - name: storage
                mountPath: /var/lib/grafana
      
    • Make these changes in the kafka section:

      • Set the value of the heapOpts field to -Xmx2048m -Xms2048m.

      • Replace the content of the resources section with the following:

            limits:
              cpu: 1000m
              memory: 4Gi
            requests:
              cpu: 100m
              memory: 2560Mi
        
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

Afterwards, you can reinstall the additional Grafana plugins.

Replace VerneMQ with HiveMQ

In this upgrade we switched from using VerneMQ to HiveMQ as our MQTT Broker (you can read the blog article about it).

While this process is fully backwards compatible, we suggest updating Node-RED flows and any other service that uses MQTT to use the new broker service called united-manufacturing-hub-mqtt. The old united-manufacturing-hub-vernemq service is still functional and, despite the name, also points to HiveMQ, but it will be removed in a future upgrade.

Additionally, for production environments, we recommend enabling RBAC for the MQTT Broker.

Please double-check that all of your services can connect to the new MQTT broker. They might need to be restarted so that they can resolve the DNS name and get the new IP. Also, with tools like ChirpStack, you may need to specify the client-id explicitly, because the automatically generated ID worked with VerneMQ but is now declined by HiveMQ.

Troubleshooting

Some microservices can’t connect to the new MQTT broker

If you are using the united-manufacturing-hub-mqtt service, but some microservice can’t connect to it, restarting the microservice might solve the issue. To do so, you can delete the Pod of the microservice and let Kubernetes recreate it.
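For example, a sketch of deleting a Pod from the command line (replace <pod-name> with the Pod of the affected microservice):

sudo $(which kubectl) delete pod <pod-name> -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml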

ChirpStack can’t connect to the new MQTT broker

ChirpStack uses a generated client-id to connect to the MQTT broker. This client-id is not accepted by HiveMQ. To solve this issue, you can set the client_id field in the integration.mqtt section of the chirpstack configuration file to a fixed value:

[integration]
...
  [integration.mqtt]
  client_id="chirpstack"

7.2.5.8 - Upgrade to v0.9.9

This page describes how to upgrade the United Manufacturing Hub to version 0.9.9

This page describes how to upgrade the United Manufacturing Hub to version 0.9.9. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed. In the grafana section, find the extraInitContainers field and change the value of the image field to unitedmanufacturinghub/grafana-plugin-extractor:0.1.4.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.9 - Upgrade to v0.9.8

This page describes how to upgrade the United Manufacturing Hub to version 0.9.8

This page describes how to upgrade the United Manufacturing Hub to version 0.9.8. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.10 - Upgrade to v0.9.7

This page describes how to upgrade the United Manufacturing Hub to version 0.9.7

This page describes how to upgrade the United Manufacturing Hub to version 0.9.7. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
    • Make these changes in the grafana section:

      • Replace the content of datasources with the following:

            datasources.yaml:
              apiVersion: 1
              datasources:
              - access: proxy
                editable: false
                isDefault: true
                jsonData:
                  apiKey: $FACTORYINSIGHT_PASSWORD
                  apiKeyConfigured: true
                  customerId: $FACTORYINSIGHT_CUSTOMERID
                  serverURL: http://united-manufacturing-hub-factoryinsight-service/
                name: umh-datasource
                orgId: 1
                type: umh-datasource
                url: http://united-manufacturing-hub-factoryinsight-service/
                version: 1
              - access: proxy
                editable: false
                isDefault: false
                jsonData:
                  apiKey: $FACTORYINSIGHT_PASSWORD
                  apiKeyConfigured: true
                  baseURL: http://united-manufacturing-hub-factoryinsight-service/
                  customerID: $FACTORYINSIGHT_CUSTOMERID
                name: umh-v2-datasource
                orgId: 1
                type: umh-v2-datasource
                url: http://united-manufacturing-hub-factoryinsight-service/
                version: 1
        
      • Replace the content of env with the following:

            GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS: umh-datasource,umh-factoryinput-panel,umh-v2-datasource
        
      • Replace the content of extraInitContainers with the following:

          - name: init-umh-datasource
            image: unitedmanufacturinghub/grafana-plugin-extractor:0.1.3
            volumeMounts:
            - name: storage
              mountPath: /var/lib/grafana
            imagePullPolicy: IfNotPresent
        
    • In the timescaledb-single section, make sure that the image.tag field is set to pg13.8-ts2.8.0-p1.

  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

Change Factoryinsight API version

The Factoryinsight API version has changed from v1 to v2. To make sure that you are using the new version, click on any Factoryinsight Pod and check that the VERSION environment variable is set to 2.

If it’s not, follow these steps:

  1. Navigate to the Workloads > Deployments tab.
  2. Select the united-manufacturing-hub-factoryinsight-deployment deployment.
  3. Click the Edit button to open the deployment’s configuration.

    Lens deployment Edit

  4. Find the spec.template.spec.containers[0].env field.
  5. Set the value field of the VERSION variable to 2.
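If you prefer the command line over editing the deployment in UMHLens / OpenLens, a sketch of the equivalent change with kubectl set env is:

sudo $(which kubectl) set env deployment/united-manufacturing-hub-factoryinsight-deployment VERSION=2 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml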

7.2.5.11 - Upgrade to v0.9.6

This page describes how to upgrade the United Manufacturing Hub to version 0.9.6

This page describes how to upgrade the United Manufacturing Hub to version 0.9.6. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Add new index to the database

In this version, a new index has been added to the processValueTable table, which speeds up queries.

Open a shell in the database

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Create the index

Execute the following query:

CREATE INDEX ON processvaluetable(valuename, asset_id) WITH (timescaledb.transaction_per_chunk);
REINDEX TABLE processvaluetable;

This command could take a while to complete, especially on larger tables.

Type exit to close the shell.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.
  2. Select the united-manufacturing-hub release and click Upgrade.
  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.
  4. You can also change the values of the Helm chart, if needed.
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

7.2.5.12 - Upgrade to v0.9.5

This page describes how to upgrade the United Manufacturing Hub to version 0.9.5

This page describes how to upgrade the United Manufacturing Hub to version 0.9.5. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Alter ordertable constraint

In this version, one of the constraints of the ordertable table has been modified.

Make sure to backup the database before executing the following steps.

Open a shell in the database

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Alter the table

  1. Check for possible conflicts in the ordertable table:

    SELECT order_name, asset_id, count(*) FROM ordertable GROUP BY order_name, asset_id HAVING count(*) > 1;
    

    If the result is empty, you can skip the next step.

  2. Delete the duplicates:

    DELETE FROM ordertable ox USING (
         SELECT MIN(CTID) as ctid, order_name, asset_id
         FROM ordertable
         GROUP BY order_name, asset_id HAVING count(*) > 1
         ) b
    WHERE ox.order_name = b.order_name AND ox.asset_id = b.asset_id
    AND ox.CTID <> b.ctid;
    

    If the data cannot be deleted, you have to manually update each duplicate order_name to a unique value.

  3. Get the name of the constraint:

    SELECT conname FROM pg_constraint WHERE conrelid = 'ordertable'::regclass AND contype = 'u';
    
  4. Drop the constraint:

    ALTER TABLE ordertable DROP CONSTRAINT ordertable_asset_id_order_id_key;
    
  5. Add the new constraint:

    ALTER TABLE ordertable ADD CONSTRAINT ordertable_asset_id_order_name_key UNIQUE (asset_id, order_name);
    

Now you can close the shell by typing exit and continue with the upgrade process.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed.

    • Enable the startup probe for the Kafka Broker by adding the following into the kafka section:

      startupProbe:
        enabled: true
        failureThreshold: 600
        periodSeconds: 10
        timeoutSeconds: 10
      
  5. Click Upgrade.

The upgrade process can take a few minutes. The upgrade is complete when the Status field of the release is Deployed.

Changes to the messages

Some messages have been modified in this version. You need to update some payloads in your Node-RED flows.

  • modifyState:
    • start_time_stamp has been renamed to timestamp_ms
    • end_time_stamp has been renamed to timestamp_ms_end
  • modifyProducedPieces:
    • start_time_stamp has been renamed to timestamp_ms
    • end_time_stamp has been renamed to timestamp_ms_end
  • deleteShiftByAssetIdAndBeginTimestamp and deleteShiftById have been removed. Use the deleteShift message instead.

7.2.5.13 - Upgrade to v0.9.4

This page describes how to upgrade the United Manufacturing Hub to version 0.9.4

This page describes how to upgrade the United Manufacturing Hub to version 0.9.4. Before upgrading, remember to backup the database, Node-RED flows, and your cluster configuration.

Add Helm repo in UMHLens / OpenLens

Check if the UMH Helm repository is added in UMHLens / OpenLens. To do so, from the top-left menu, select File > Preferences (or press CTRL + ,). Click on the Kubernetes tab and check if the Helm Chart section contains the https://repo.umh.app repository.

If it doesn’t, click the Add Custom Helm Repo button and fill in the following values:

Then click Add.

Clear Workloads

Some workloads need to be deleted before upgrading. This process does not delete any data, but it will cause downtime. If a workload is missing, it means that it was not enabled in your cluster, therefore you can skip it.

To delete a resource, you can select it using the box on the left of the resource name and click the - button on the bottom right corner.

  1. Open the Workloads tab.
  2. From the Deployment section, delete the following deployments:
    • united-manufacturing-hub-barcodereader
    • united-manufacturing-hub-factoryinsight-deployment
    • united-manufacturing-hub-kafkatopostgresql
    • united-manufacturing-hub-mqttkafkabridge
    • united-manufacturing-hub-iotsensorsmqtt
    • united-manufacturing-hub-opcuasimulator-deployment
  3. From the StatefulSet section, delete the following statefulsets:
    • united-manufacturing-hub-mqttbridge
    • united-manufacturing-hub-hivemqce
    • united-manufacturing-hub-nodered
    • united-manufacturing-hub-sensorconnect

Upgrade Helm Chart

Now everything is ready to upgrade the Helm chart.

  1. Navigate to the Helm > Releases tab.

  2. Select the united-manufacturing-hub release and click Upgrade.

  3. In the Helm Upgrade window, make sure that the Upgrade version field contains the version you want to upgrade to.

  4. You can also change the values of the Helm chart, if needed.

    • If you have enabled the Kafka Bridge, find the section _000_commonConfig.kafkaBridge.topicmap and set the value to the following:

      - bidirectional: false
        name: HighIntegrity
        send_direction: to_remote
        topic: ^ia\.(([^r.](\d|-|\w)*)|(r[b-z](\d|-|\w)*)|(ra[^w]))\.(\d|-|\w|_)+\.(\d|-|\w|_)+\.((addMaintenanceActivity)|(addOrder)|(addParentToChild)|(addProduct)|(addShift)|(count)|(deleteShiftByAssetIdAndBeginTimestamp)|(deleteShiftById)|(endOrder)|(modifyProducedPieces)|(modifyState)|(productTag)|(productTagString)|(recommendation)|(scrapCount)|(startOrder)|(state)|(uniqueProduct)|(scrapUniqueProduct))$
      - bidirectional: false
        name: HighThroughput
        send_direction: to_remote
        topic: ^ia\.(([^r.](\d|-|\w)*)|(r[b-z](\d|-|\w)*)|(ra[^w]))\.(\d|-|\w|_)+\.(\d|-|\w|_)+\.(process[V|v]alue).*$
      

      For more information, see the Kafka Bridge configuration

    • If you have enabled Barcodereader, find the barcodereader section and set the following values, adding the missing ones and updating the already existing ones:

      enabled: false
      image:
        pullPolicy: IfNotPresent
      resources:
        requests:
          cpu: "2m"
          memory: "30Mi"
        limits:
          cpu: "10m"
          memory: "60Mi"
      scanOnly: false # Debug mode, will not send data to kafka
      
  5. Click Upgrade.

The upgrade process can take a few minutes. The process is complete when the Status field of the release is Deployed.

7.3 - Administration

This section describes how to manage and configure the United Manufacturing Hub cluster.

In this section, you will find information about how to manage and configure the United Manufacturing Hub cluster, from customizing the cluster to accessing the different services.

7.3.1 - Access the Database

This page describes how to access the United Manufacturing Hub database to perform SQL operations using a database client or the CLI.

There are multiple ways to access the database. If you just want to visualize data, then using Grafana or a database client is the easiest way. If you also need to perform SQL commands, then using a database client or the CLI is the best option.

Generally, using a database client gives you the most flexibility, since you can both visualize the data and manipulate the database. However, it requires you to install a database client on your machine.

Using the CLI gives you more control over the database, but it requires you to have a good understanding of SQL.

Grafana comes with a pre-configured PostgreSQL datasource, so you can use it to visualize the data.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Get the database credentials

If you are not using the CLI, you need to know the database credentials. You can find them in the timescale-post-init-pw Secret. Run the following command to get the credentials:

sudo $(which kubectl) get secret timescale-post-init-pw -n united-manufacturing-hub -o go-template='{{range $k,$v := .data}}{{if eq $k "1_set_passwords.sh"}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}{{end}}'  --kubeconfig /etc/rancher/k3s/k3s.yaml

This command will print an SQL script that contains the username and password for the different databases.

Access the database using a database client

There are many database clients that you can use to access the database. Here’s a list of some of the most popular database clients:

Database clients

Name     | Free or Paid | Platforms
pgAdmin  | Free         | Windows, macOS, Linux
DataGrip | Paid         | Windows, macOS, Linux
DBeaver  | Both         | Windows, macOS, Linux

For the sake of this tutorial, pgAdmin will be used as an example, but other clients have similar functionality. Refer to the specific client documentation for more information.

Using pgAdmin

You can use pgAdmin to access the database. To do so, you need to install the pgAdmin client on your machine. For more information, see the pgAdmin documentation.

  1. Once you have installed the client, you can add a new server from the main window.

    pgAdmin main window

  2. In the General tab, give the server a meaningful name. In the Connection tab, enter the database credentials:

    • The Host name/address is the IP address of your instance.
    • The Port is 5432.
    • The Maintenance database is postgres.
    • The Username and Password are the ones you found in the Secret.
  3. Click Save to save the server.

    pgAdmin connection window

You can now connect to the database by double-clicking the server.

Use the side menu to navigate through the server. The tables are listed under the Schemas > public > Tables section of the factoryinsight database.

Refer to the pgAdmin documentation for more information on how to use the client to perform database operations.
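Any other PostgreSQL client can connect with the same parameters. As a sketch only, assuming the database is reachable on port 5432 of your instance (see Access Services Outside the Cluster) and that you use the factoryinsight user and the password found in the Secret:

psql -h <instance-ip-address> -p 5432 -U factoryinsight -d factoryinsight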

Access the database using the command line interface

You can access the database from the command line using the psql command directly from the united-manufacturing-hub-timescaledb-0 Pod.

You will not need credentials to access the database from the Pod’s CLI.

The following steps need to be performed from the machine where the cluster is running, either by logging into it or by using a remote shell.

Open a shell in the database Pod

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Perform SQL commands

Once you have a shell in the database, you can perform SQL commands.

  1. For example, to create an index on the processValueTable:

    CREATE INDEX ON processvaluetable (valuename);
    
  2. When you are done, exit the postgres shell:

     exit
    

What’s next

7.3.2 - Access Services From Within the Cluster

This page describes how to access services from within the cluster.

All the services deployed in the cluster are visible to each other. That makes it easy to connect them together.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Connect to a service from another service

To connect to a service from another service, you can use the service name as the host name.

To get a list of available services and related ports you can run the following command from the instance:

sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml

All of them are available from within the cluster. The ones of type LoadBalancer are also available from outside the cluster using the node IP and the port listed in the Ports column.

Use the port on the left side of the colon (:) to connect to the service from outside the cluster. For example, the database is available on port 5432.

Example

The most common use case is to connect to the MQTT Broker from Node-RED.

To do that, when you create the MQTT node, you can use the service name united-manufacturing-hub-mqtt as the host name and one of the ports listed in the Ports column.

The MQTT service name has changed since version 0.9.10. If you are using an older version, use united-manufacturing-hub-vernemq instead of united-manufacturing-hub-mqtt.
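To quickly verify in-cluster connectivity, you can also start a throwaway client Pod. This is a sketch only: it assumes the broker still allows anonymous connections (the default) and uses the public eclipse-mosquitto image, which is not part of the United Manufacturing Hub:

# Sketch: subscribe to all MQTT topics from inside the cluster; press Ctrl+C to stop, the Pod is removed afterwards
sudo $(which kubectl) run mqtt-test -it --rm --restart=Never --image=eclipse-mosquitto -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -- mosquitto_sub -h united-manufacturing-hub-mqtt -p 1883 -t '#' -v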

What’s next

7.3.3 - Access Services Outside the Cluster

This page describes how to access services from outside the cluster.

Some of the microservices in the United Manufacturing Hub are exposed outside the cluster with a LoadBalancer service. A LoadBalancer is a service that exposes a set of Pods on the same network as the cluster, but not necessarily to the entire internet. The LoadBalancer service provides a single IP address that can be used to access the Pods.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Accessing the services

To get a list of available services and related ports you can run the following command from the instance:

sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml

All of them are available from within the cluster. The ones of type LoadBalancer are also available from outside the cluster using the node IP and the port listed in the Ports column.

Use the port on the left side of the colon (:) to connect to the service from outside the cluster. For example, the database is available on port 5432.

Services with LoadBalancer by default

The following services are exposed outside the cluster with a LoadBalancer service by default:

To access Node-RED, you need to use the /nodered path, for example http://192.168.1.100:1880/nodered.

Services with NodePort by default

The Kafka Broker uses the service type NodePort by default.

Follow these steps to access the Kafka Broker outside the cluster:

  1. Access your instance via SSH

  2. Execute this command to check the host port of the Kafka Broker:

    sudo $(which kubectl) get svc united-manufacturing-hub-kafka-external -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  3. In the PORT(S) column, you should be able to see the port with 9094:<host-port>/TCP.

  4. To access the Kafka Broker, use <instance-ip-address>:<host-port>.
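As a sketch of testing the connection from your machine, assuming you have the kcat client installed (it is not part of the United Manufacturing Hub):

# List the broker metadata to confirm the external listener is reachable
kcat -b <instance-ip-address>:<host-port> -L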

Services with ClusterIP

Some of the microservices in the United Manufacturing Hub are exposed via a ClusterIP service. That means that they are only accessible from within the cluster itself. There are two options for enabling access to them from outside the cluster:

  • Creating a LoadBalancer service: A LoadBalancer is a service that exposes a set of Pods on the same network as the cluster, but not necessarily to the entire internet.
  • Port forwarding: You can just forward the port of a service to your local machine.

Port forwarding can be unstable, especially if the connection to the cluster is slow. If you are experiencing issues, try to create a LoadBalancer service instead.

Create a LoadBalancer service

Follow these steps to enable the LoadBalancer service for the corresponding microservice:

  1. Execute the following command to list the services and note the name of the one you want to access.

    sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  2. Start editing the service configuration by running this command:

    sudo $(which kubectl) edit svc <service-name> -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  3. Find the status.loadBalancer section and update it to the following:

    status:
      loadBalancer:
        ingress:
        - ip: <external-ip>
    

    Replace <external-ip> with the external IP address of the node.

  4. Go to the spec.type section and change the value from ClusterIP to LoadBalancer.

  5. After saving, your changes will be applied automatically and the service will be updated. Now, you can access the service at the configured address.
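If you prefer a non-interactive command over kubectl edit, a sketch of changing only the service type is shown below; depending on your setup, you may still need the status.loadBalancer step described above:

sudo $(which kubectl) patch svc <service-name> -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -p '{"spec":{"type":"LoadBalancer"}}'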

Port forwarding

  1. Execute the following command to list the services and note the name of the one you want to port-forward and the internal port that it uses.

    sudo $(which kubectl) get svc -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  2. Run the following command to forward the port:

    sudo $(which kubectl) port-forward service/<your-service> <local-port>:<remote-port> -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    

    Where <local-port> is the port on the host that you want to use, and <remote-port> is the service port that you noted before. Usually, it’s good practice to pick a high number (greater than 30000) for the host port, in order to avoid conflicts.

  3. You should be able to see logs like:

    Forwarding from 127.0.0.1:31922 -> 9121
    Forwarding from [::1]:31922 -> 9121
    Handling connection for 31922
    

    You can now access the service using the IP address of the node and the port you chose.

Security considerations

MQTT broker

There are some security considerations to keep in mind when exposing the MQTT broker.

By default, the MQTT broker is configured to allow anonymous connections. This means that anyone can connect to the broker without providing any credentials. This is not recommended for production environments.

To secure the MQTT broker, you can configure it to require authentication. For that, you can either enable RBAC or set up HiveMQ PKI (recommended for production environments).

Troubleshooting

LoadBalancer service stuck in Pending state

If the LoadBalancer service is stuck in the Pending state, it probably means that the host port is already in use. To fix this, edit the service and change the section spec.ports.port to a different port number.

What’s next

7.3.4 - Expose Grafana to the Internet

This page describes how to expose Grafana to the Internet.

This page describes how to expose Grafana to the Internet so that you can access it from outside the Kubernetes cluster.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Enable the ingress

Enable the ingress by upgrading the value in the Helm chart.

To do so, run the following command:

sudo $(which helm) upgrade --set grafana.ingress.enabled=true united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub --reuse-values --version $(sudo $(which helm) get metadata united-manufacturing-hub -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -o json | jq -r '.version') --kubeconfig /etc/rancher/k3s/k3s.yaml

Remember to add a DNS record for your domain name that points to the external IP address of the Kubernetes host.
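To verify that the ingress resource was created, you can list the ingresses in the namespace (a quick check, using the same kubeconfig path as above):

sudo $(which kubectl) get ingress -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml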

What’s next

7.3.5 - Install Custom Drivers in NodeRed

This page describes how to install custom drivers in NodeRed.

Node-RED runs on Alpine Linux as a non-root user, which means that you can’t install packages with apk. This tutorial shows you how to install packages while keeping proper security measures.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Change the security context

From the instance’s shell, execute this command:

sudo $(which kubectl) patch statefulset united-manufacturing-hub-nodered -n united-manufacturing-hub -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":0,"runAsNonRoot":false,"fsGroup":0}}}}}' --kubeconfig /etc/rancher/k3s/k3s.yaml

Install the packages

  1. Open a shell in the united-manufacturing-hub-nodered-0 pod with:

    sudo $(which kubectl) exec -it united-manufacturing-hub-nodered-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -- /bin/sh
    
  2. Install the packages with apk:

    apk add <package>
    

    For example, to install unixodbc:

    apk add unixodbc
    

    You can find the list of available packages here.

  3. Exit the shell by typing exit.

Revert the security context

For security reasons, you should revert the security context after you install the packages.

From the instance’s shell, execute this command:

sudo $(which kubectl) patch statefulset united-manufacturing-hub-nodered -n united-manufacturing-hub -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1000,"runAsNonRoot":true,"fsGroup":1000}}}}}' --kubeconfig /etc/rancher/k3s/k3s.yaml

What’s next

7.3.6 - Execute Kafka Shell Scripts

This page describes how to execute Kafka shell scripts.

When working with Kafka, you may need to execute shell scripts to perform administrative tasks. This page describes how to execute Kafka shell scripts.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Open a shell in the Kafka container

  1. From the instance’s shell, execute this command:

    sudo $(which kubectl) exec -it united-manufacturing-hub-kafka-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -- /bin/sh
    
  2. Navigate to the Kafka bin directory:

    cd /opt/bitnami/kafka/bin
    
  3. Execute any Kafka shell scripts. For example, to list all topics:

    ./kafka-topics.sh --list --zookeeper zookeeper:2181
    
  4. Exit the shell by typing exit.
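Note that on Kafka 3.0 and newer the --zookeeper option has been removed from the shell scripts. If the listing command above fails, try the --bootstrap-server flag instead; the broker address below is an assumption and may need to be adjusted to your listener configuration:

./kafka-topics.sh --list --bootstrap-server localhost:9092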

What’s next

7.3.7 - Reduce database size

This page describes how to reduce the size of the United Manufacturing Hub database.

Over time, time-series data can consume a large amount of disk space. To reduce the amount of disk space used by time-series data, there are three options:

  • Enable data compression. This reduces the required disk space by applying mathematical compression to the data. This compression is lossless, so the data is not changed in any way. However, it will take more time to compress and decompress the data. For more information, see how TimescaleDB compression works.
  • Enable data retention. This deletes old data that is no longer needed, by setting policies that automatically delete data older than a specified time. This can be beneficial for managing the size of the database, as well as adhering to data retention regulations. However, by definition, data loss will occur. For more information, see how TimescaleDB data retention works.
  • Downsampling. This is a method of reducing the amount of data stored by aggregating data points over a period of time. For example, you can aggregate data points over a 30-minute period instead of storing each data point. If exact data is not required, downsampling can be useful to reduce the database size, at the cost of some accuracy. A sketch using a TimescaleDB continuous aggregate is shown at the end of this page.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Open the database shell

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Connect to the corresponding database:


  \c factoryinsight
  


  \c umh_v2
  

Enable data compression

You can find sample SQL commands to enable data compression here.

  1. The first step is to turn on data compression on the target table, and set the compression options. Refer to the TimescaleDB documentation for a full list of options.

    
          -- set "asset_id" as the key for the compressed segments and orders the table by "valuename".
          ALTER TABLE processvaluetable SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id', timescaledb.compress_orderby = 'valuename');
        

    
          -- set "asset_id" as the key for the compressed segments and orders the table by "name".
          ALTER TABLE tag SET (timescaledb.compress, timescaledb.compress_segmentby = 'asset_id', timescaledb.compress_orderby = 'name');
        
  2. Then, you have to create the compression policy. The interval determines the age that the chunks of data need to reach before being compressed. Read the official documentation for more information.

    
          -- set a compression policy on the "processvaluetable" table, which will compress data older than 7 days.
          SELECT add_compression_policy('processvaluetable', INTERVAL '7 days');
        

    
          -- set a compression policy on the "tag" table, which will compress data older than 2 weeks.
          SELECT add_compression_policy('tag', INTERVAL '2 weeks');
        

Enable data retention

You can find sample SQL commands to enable data retention here.

Sample command for factoryinsight and umh_v2 databases:

Enabling data retention consists only of adding a policy with the desired retention interval. Refer to the official documentation for more detailed information about these queries.


  -- Set a retention policy on the "processvaluetable" table, which will delete data older than 7 days.
  SELECT add_retention_policy('processvaluetable', INTERVAL '7 days');
  


  -- set a retention policy on the "tag" table, which will delete data older than 3 months.
  SELECT add_retention_policy('tag', INTERVAL '3 months');
  

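The steps above cover compression and retention. For downsampling, a common approach with TimescaleDB is a continuous aggregate plus a refresh policy. The following is only a sketch for the umh_v2 tag table: the view name, the 30-minute bucket, and the value column are assumptions that you should adapt to your schema.

  -- aggregate raw tag values into 30-minute buckets
  CREATE MATERIALIZED VIEW tag_30m
  WITH (timescaledb.continuous) AS
  SELECT asset_id,
         name,
         time_bucket('30 minutes', timestamp) AS bucket,
         avg(value) AS avg_value
  FROM tag
  GROUP BY asset_id, name, time_bucket('30 minutes', timestamp);

  -- refresh the aggregate every 30 minutes
  SELECT add_continuous_aggregate_policy('tag_30m',
    start_offset => INTERVAL '1 day',
    end_offset => INTERVAL '30 minutes',
    schedule_interval => INTERVAL '30 minutes');
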
What’s next

7.3.8 - Use Merge Point To Normalize Kafka Topics

This page describes how to reduce the amount of Kafka Topics in order to lower the overhead by using the merge point feature.

Kafka excels at processing a high volume of messages but can encounter difficulties with excessive topics, which may lead to insufficient memory. The optimal Kafka setup involves minimal topics, utilizing the event key for logical data segregation.

On the contrary, MQTT shines when handling a large number of topics with a small number of messages. But when bridging MQTT to Kafka, the number of topics can become overwhelming. Specifically, with the default configuration, Kafka is able to handle around 100-150 topics. This is because there is a limit of 1000 partitions per broker, and each topic has 6 partitions by default.

So, if you are experiencing memory issues with Kafka, you may want to consider combining multiple topics into a single topic with different keys. The diagram below illustrates how this principle simplifies topic management.

graph LR
  event1(Topic: umh.v1.acme.anytown.foo.bar / Value: 1)
  event2(Topic: umh.v1.acme.anytown.foo.baz / Value: 2)
  event3(Topic: umh.v1.acme.anytown / Value: 3)
  event4(Topic: umh.v1.acme.anytown.frob / Value: 4)
  bridge{{Topic merge point: 3}}
  subgraph Topic: umh.v1.acme
    gmsg1(Key: anytown.foo.bar / Value: 1)
    gmsg2(Key: anytown.foo.baz / Value: 2)
    gmsg3(Key: anytown / Value: 3)
    gmsg4(Key: anytown.frob / Value: 4)
  end
  event1 --> bridge
  event2 --> bridge
  event3 --> bridge
  event4 --> bridge
  bridge --> gmsg1
  bridge --> gmsg2
  bridge --> gmsg3
  bridge --> gmsg4

Before you begin

This tutorial is for advanced users. Contact us if you need assistance.

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

There are two configurations for the topic merge point: one in the Companion configuration for Benthos data sources and another in the Helm chart for data bridges.

Data Sources

To adjust the topic merge point for data sources, modify the mgmtcompanion-config configmap. This can be easily done with the following command:

sudo $(which kubectl) edit configmap mgmtcompanion-config -n mgmtcompanion --kubeconfig /etc/rancher/k3s/k3s.yaml

This command opens the current configuration in the default editor, allowing you to set the umh_merge_point to your preferred value:

data:
  umh_merge_point: <numeric-value>

Ensure the value is at least 3 and update the lastUpdated field to the current Unix timestamp to trigger the automatic refresh of existing data sources.
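As a sketch, assuming the lastUpdated field lives next to umh_merge_point in the same data section, the edited ConfigMap could look like this (the values are illustrative); you can obtain the current Unix timestamp with date +%s on the instance:

data:
  umh_merge_point: "6"
  lastUpdated: "1700000000"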

Data Bridge

For data bridges, the merge point is defined individually in the Helm chart values for each bridge. Update the Helm chart installation with the new topicMergePoint value for each bridge. See the Helm chart documentation for more details.

Setting the topicMergePoint to -1 disables the merge feature.

7.3.9 - Delete Assets from the Database

This task shows you how to delete assets from the database.

This is useful if you have created assets by mistake, or to delete the ones that are no longer needed.

This task deletes data from the database. Make sure you have a backup of the database before you proceed.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Also, make sure to backup the database before you proceed. For more information, see Backing Up and Restoring the Database.

Delete assets from factoryinsight

If you want to delete assets from the umh_v2 database, go to this section.

Open the database shell

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Connect to the factoryinsight database:

\c factoryinsight

Choose the assets to delete

There are multiple ways to delete assets: you can delete a single asset, all the assets in a location, or all the assets with a specific name.

To do so, you can customize the SQL command using different filters. Specifically, a combination of the following filters:

  • assetid
  • location
  • customer

To filter an SQL command, you can use the WHERE clause. For example, using all of the filters:

WHERE assetid = '<asset-id>' AND location = '<location>' AND customer = '<customer>';

You can use any combination of the filters, even just one of them.

Here are some examples:

  • Delete all assets with the same name from any location and any customer:

    WHERE assetid = '<asset-id>'
    
  • Delete all assets in a specific location:

     WHERE location = '<location>'
    
  • Delete all assets with the same name in a specific location:

    WHERE assetid = '<asset-id>' AND location = '<location>'
    
  • Delete all assets with the same name in a specific location for a single customer:

    WHERE assetid = '<asset-id>' AND location = '<location>' AND customer = '<customer>'
    
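Before running the delete statements in the next section, you can preview which assets match your filter, for example:

SELECT id, assetid, location, customer FROM assettable WHERE location = '<location>';
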

Delete the assets

Once you know the filters you want to use, you can use the following SQL commands to delete assets:

BEGIN;

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM shifttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM counttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM ordertable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM processvaluestringtable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM processvaluetable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM producttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM shifttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM statetable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM assettable WHERE id IN (SELECT id FROM assets_to_be_deleted);

COMMIT;

Optionally, you can add the following code before the last WITH statement if you used the track&trace feature:

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>), uniqueproducts_to_be_deleted AS (SELECT uniqueproductid FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted))
DELETE FROM producttagtable WHERE product_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>), uniqueproducts_to_be_deleted AS (SELECT uniqueproductid FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted))
DELETE FROM producttagstringtable WHERE product_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>), uniqueproducts_to_be_deleted AS (SELECT uniqueproductid FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted))
DELETE FROM productinheritancetable WHERE parent_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted) OR child_uid IN (SELECT uniqueproductid FROM uniqueproducts_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM assettable <filter>)
DELETE FROM uniqueproducttable WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

Delete assets from umh_v2

Open the database shell

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Connect to the umh_v2 database:

\c umh_v2

Choose the assets to delete

There are multiple ways to delete assets: you can delete a single asset, all the assets in a location, or all the assets with a specific name.

To do so, you can customize the SQL command using different filters. Specifically, a combination of the following filters:

  • enterprise
  • site
  • area
  • line
  • workcell
  • origin_id

To filter an SQL command, you can use the WHERE clause. For example, you can filter by enterprise, site, and area:

WHERE enterprise = '<your-enterprise>' AND site = '<your-site>' AND area = '<your-area>';

You can use any combination of the filters, even just one of them.

Delete the assets

Once you know the filters you want to use, you can use the following SQL commands to delete assets:

BEGIN;
WITH assets_to_be_deleted AS (SELECT id FROM asset <filter>)
DELETE FROM tag WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM asset <filter>)
DELETE FROM tag_string WHERE asset_id IN (SELECT id FROM assets_to_be_deleted);

WITH assets_to_be_deleted AS (SELECT id FROM asset <filter>)
DELETE FROM asset WHERE id IN (SELECT id FROM assets_to_be_deleted);
COMMIT;

What’s next

7.3.10 - Change the Language in Factoryinsight

This page describes how to change the language in Factoryinsight, in order to display the returned text in a different language.

You can change the language in Factoryinsight if you want to localize the returned text, like stop codes, to a different language.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Access the database shell

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Connect to the factoryinsight database:

\c factoryinsight

Change the language

Execute the following command to change the language:

INSERT INTO configurationtable (customer, languagecode) VALUES ('factoryinsight', <code>) ON CONFLICT(customer) DO UPDATE SET languagecode=<code>;

where <code> is the language code. For example, to change the language to German, use 0.
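For example, the full command to switch the language to German looks like this:

INSERT INTO configurationtable (customer, languagecode) VALUES ('factoryinsight', 0) ON CONFLICT(customer) DO UPDATE SET languagecode=0;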

Supported languages

Factoryinsight supports the following languages:

Supported languages

  Language   Code
  German     0
  English    1
  Turkish    2

What’s next

7.3.11 - Explore Cached Data

This page shows how to explore cached data in the United Manufacturing Hub.

When working with the United Manufacturing Hub, you might want to visualize information about the cached data. This page shows how you can access the cache and explore the data.

Before you begin

You need to have a UMH cluster. If you do not already have a cluster, you can create one by following the Getting Started guide.

You also need to access the system where the cluster is running, either by logging into it or by using a remote shell.

Open a shell in the cache Pod

Get access to the instance’s shell and execute the following commands.

  1. Get the cache password

    sudo $(which kubectl) get secret redis-secret -n united-manufacturing-hub -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'  --kubeconfig /etc/rancher/k3s/k3s.yaml
    
  2. Open a shell in the Pod:

    sudo $(which kubectl) exec -it united-manufacturing-hub-redis-master-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -- /bin/sh
    
    If you have multiple cache Pods, you can select any of them.
  3. Enter the Redis shell:

    redis-cli -a <cache-password>
    
  4. Now you can execute any command. For example, to list all the keys in the cache, run:

    KEYS *
    

    Or, to get the number of keys in the cache, run:

    DBSIZE
    

For more information about Redis commands, see the Redis documentation.

What’s next

7.4 - Backup & Recovery

This section contains information about how to backup and recover various components of the United Manufacturing Hub.

7.4.1 - Backup and Restore the United Manufacturing Hub

This page describes how to backup and restore the entire United Manufacturing Hub.

This page describes how to back up the following:

  • All Node-RED flows
  • All Grafana dashboards
  • The Helm values used for installing the united-manufacturing-hub release
  • All the contents of the United Manufacturing Hub database (factoryinsight and umh_v2)
  • The Management Console Companion’s settings

It does not back up:

  • Additional databases other than the United Manufacturing Hub default database
  • TimescaleDB continuous aggregates: Follow the official documentation to learn how.
  • TimescaleDB policies: Follow the official documentation to learn how.
  • Everything else not included in the previous list

This procedure only works on Windows.

Before you begin

Download the backup scripts and extract the content in a folder of your choice.

For this task, you need to have PostgreSQL installed on your machine.

You also need to have enough space on your machine to store the backup. To check the size of the database, ssh into the system and follow the steps below:

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Run the following command to get the size of the database:

SELECT pg_size_pretty(pg_database_size('umh_v2')) AS "umh_v2", pg_size_pretty(pg_database_size('factoryinsight')) AS "factoryinsight";

Backup

Generate Grafana API Key

Create a Grafana API Token for an admin user by following these steps:

  1. Open the Grafana UI in your browser and log in with an admin user.
  2. Click on the Configuration icon in the left sidebar and select API Keys.
  3. Give the API key a name and change its role to Admin.
  4. Optionally set an expiration date.
  5. Click Add.
  6. Copy the generated API key and save it for later.

Stop workloads

To prevent data inconsistencies, you need to temporarily stop the MQTT and Kafka Brokers.

Access the instance’s shell and execute the following commands:

sudo $(which kubectl) scale statefulset united-manufacturing-hub-kafka --replicas=0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
sudo $(which kubectl) scale statefulset united-manufacturing-hub-hivemqce --replicas=0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml

Copy kubeconfig file

To run the backup script, you’ll first need to obtain a copy of the Kubernetes configuration file from your instance. This is essential for providing the script with access to the instance.

  1. In the shell of your instance, execute the following command to display the Kubernetes configuration:

    sudo cat /etc/rancher/k3s/k3s.yaml
    

    Make sure to copy the entire output to your clipboard.

    This tutorial is based on the assumption that your kubeconfig file is located at /etc/rancher/k3s/k3s.yaml. Depending on your setup, the actual file location might be different.

  2. Open a text editor, like Notepad, on your local machine and paste the copied content.

  3. In the pasted content, find the server field. It usually defaults to https://127.0.0.1:6443. Replace this with your instance’s IP address

    server: https://<INSTANCE_IP>:6443
    
  4. Save the file as k3s.yaml inside the backup folder you downloaded earlier.

Backup using the script

The backup script is located inside the folder you downloaded earlier.

  1. Open a terminal and navigate inside the folder.

    cd <FOLDER_PATH>
    
  2. Run the script:

    .\backup.ps1 -IP <IP_OF_THE_SERVER> -GrafanaToken <GRAFANA_API_KEY> -KubeconfigPath .\k3s.yaml
    

    You can find a list of all available parameters down below.

    If OutputPath is not set, the backup will be stored in the current folder.

This script might take a while to finish, depending on the size of your database and your connection speed.

If the connection is interrupted, there is currently no option to resume the process, therefore you will need to start again.

Here is a list of all available parameters:

Available parameters

  Parameter             Description                                                    Required   Default value
  GrafanaToken          Grafana API key                                                Yes
  IP                    IP of the cluster to backup                                    Yes
  KubeconfigPath        Path to the kubeconfig file                                    Yes
  DatabaseDatabase      Name of the database to backup                                 No         factoryinsight
  DatabasePassword      Password of the database user                                  No         changeme
  DatabasePort          Port of the database                                           No         5432
  DatabaseUser          Database user                                                  No         factoryinsight
  DaysPerJob            Number of days worth of data to backup in each parallel job    No         31
  EnableGpgEncryption   Set to true if you want to encrypt the backup                  No         false
  EnableGpgSigning      Set to true if you want to sign the backup                     No         false
  GpgEncryptionKeyId    ID of the GPG key used for encryption                          No
  GpgSigningKeyId       ID of the GPG key used for signing                             No
  GrafanaPort           External port of the Grafana service                           No         8080
  OutputPath            Path to the folder where the backup will be stored             No         Current folder
  ParallelJobs          Number of parallel job backups to run                          No         4
  SkipDiskSpaceCheck    Skip checking available disk space                             No         false
  SkipGpgQuestions      Set to true if you want to sign or encrypt the backup          No         false
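For example, a run that stores the backup in a custom folder and lowers the parallelism could look like this (the output path is only an illustration):

.\backup.ps1 -IP <IP_OF_THE_SERVER> -GrafanaToken <GRAFANA_API_KEY> -KubeconfigPath .\k3s.yaml -OutputPath D:\umh-backups -ParallelJobs 2 -DaysPerJob 14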

Restore

Each component of the United Manufacturing Hub can be restored separately, in order to allow for more flexibility and to reduce the damage in case of a failure.

Copy kubeconfig file

To run the restore scripts, you’ll first need to obtain a copy of the Kubernetes configuration file from your instance. This is essential for providing the scripts with access to the instance.

  1. In the shell of your instance, execute the following command to display the Kubernetes configuration:

    sudo cat /etc/rancher/k3s/k3s.yaml
    

    Make sure to copy the entire output to your clipboard.

    This tutorial is based on the assumption that your kubeconfig file is located at /etc/rancher/k3s/k3s.yaml. Depending on your setup, the actual file location might be different.

  2. Open a text editor, like Notepad, on your local machine and paste the copied content.

  3. In the pasted content, find the server field. It usually defaults to https://127.0.0.1:6443. Replace this with your instance’s IP address

    server: https://<INSTANCE_IP>:6443
    
  4. Save the file as k3s.yaml inside the backup folder you downloaded earlier.

Cluster configuration

To restore the Kubernetes cluster, execute the .\restore-helm.ps1 script with the following parameters:

.\restore-helm.ps1 -KubeconfigPath .\k3s.yaml -BackupPath <PATH_TO_BACKUP_FOLDER>

Verify that the cluster is up and running by opening UMHLens / OpenLens and checking if the workloads are running.

Grafana dashboards

To restore the Grafana dashboards, you first need to create a Grafana API Key for an admin user in the new cluster by following these steps:

  1. Open the Grafana UI in your browser and log in with an admin user.
  2. Click on the Configuration icon in the left sidebar and select API Keys.
  3. Give the API key a name and change its role to Admin.
  4. Optionally set an expiration date.
  5. Click Add.
  6. Copy the generated API key and save it for later.

Then, on your local machine, execute the .\restore-grafana.ps1 script with the following parameters:

.\restore-grafana.ps1 -FullUrl http://<IP_OF_THE_SERVER>:8080 -Token <GRAFANA_API_KEY> -BackupPath <PATH_TO_BACKUP_FOLDER>

Restore Node-RED flows

To restore the Node-RED flows, execute the .\restore-nodered.ps1 script with the following parameters:

.\restore-nodered.ps1 -KubeconfigPath .\k3s.yaml -BackupPath <PATH_TO_BACKUP_FOLDER>

Restore the database

  1. Check the database password by running the following command in your instance’s shell:

    sudo $(which kubectl) get secret united-manufacturing-hub-credentials --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode; echo
    
  2. Execute the .\restore-timescale.ps1 and .\restore-timescale-v2.ps1 script with the following parameters to restore factoryinsight and umh_v2 databases:

    .\restore-timescale.ps1 -Ip <IP_OF_THE_SERVER> -BackupPath <PATH_TO_BACKUP_FOLDER> -PatroniSuperUserPassword <DATABASE_PASSWORD>
    .\restore-timescale-v2.ps1 -Ip <IP_OF_THE_SERVER> -BackupPath <PATH_TO_BACKUP_FOLDER> -PatroniSuperUserPassword <DATABASE_PASSWORD>
    

Restore the Management Console Companion

Execute the .\restore-companion.ps1 script with the following parameters to restore the companion:

.\restore-companion.ps1 -KubeconfigPath .\k3s.yaml -BackupPath <FULL_PATH_TO_BACKUP_FOLDER>

Troubleshooting

Unable to connect to the server: x509: certificate signed …

This issue may occur when the device’s IP address changes from DHCP to static after installation. A quick solution is skipping TLS validation. If you want to enable the insecure-skip-tls-verify option, run the following command in the instance’s shell before copying the kubeconfig file from the server:

sudo $(which kubectl) config set-cluster default --insecure-skip-tls-verify=true --kubeconfig /etc/rancher/k3s/k3s.yaml

What’s next

7.4.2 - Backup and Restore Database

This page describes how to backup and restore the database.

Before you begin

For this task, you need to have PostgreSQL installed on your machine. Make sure that its version is compatible with the version installed on the UMH.

Also, enough free space is required on your machine to store the backup. To check the size of the database, ssh into the system and follow the steps below:

sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres

This command will open a psql shell connected to the default postgres database.

Connect to the umh_v2 or factoryinsight database:

\c <database-name>

Run the following command to get the size of the database:

SELECT pg_size_pretty(pg_database_size('<database-name>'));

If you need, check the version of PostgreSQL with this command:

\! psql --version

Backing up the database

Follow these steps to create a backup of the factoryinsight database on your machine:

  1. Open a terminal, and using the cd command, navigate to the folder where you want to store the backup. For example:

    
       cd C:\Users\user\backups
       

    
       cd /Users/user/backups
       

    
       cd /home/user/backups
       

    If the folder does not exist, you can create it using the mkdir command or your file manager.

  2. Run the following command to backup pre-data, which includes table and schema definitions, as well as information on sequences, owners, and settings:

    pg_dump -U factoryinsight -h <remote-host> -p 5432 -Fc -v --section=pre-data --exclude-schema="_timescaledb*" -f dump_pre_data.bak factoryinsight
    

    Then, enter your password. The default for factoryinsight is changeme.

    • <remote-host> is the server’s IP where the database (UMH instance) is running.

    The output of the command does not include Timescale-specific schemas.

  3. Run the following command to connect to the factoryinsight database:

    psql "postgres://factoryinsight:<password>@<server-IP>:5432/factoryinsight?sslmode=require"
    

    The default password is changeme.

  4. Check the table list running \dt and run the following command for each table to save all data to .csv files:

    \COPY (SELECT * FROM <TABLE_NAME>) TO <TABLE_NAME>.csv CSV
    

Grafana and umh_v2 database

If you want to back up the Grafana or umh_v2 database, you can follow the same steps as above, but you need to replace any occurrence of factoryinsight with grafana.

In addition, you need to write down the credentials in the grafana-secret Secret, as they are necessary to access the dashboard after restoring the database.

The default username for umh_v2 database is kafkatopostgresqlv2, and the password is changemetoo.

Restoring the database

For this section, we assume that you are restoring the data to a fresh United Manufacturing Hub installation with an empty database.

Temporarily disable kafkatopostgresql, kafkatopostgresqlv2, and factoryinsight

Since the kafkatopostgresql, kafkatopostgresqlv2, and factoryinsight microservices might write data into the database while it is being restored, they should be disabled. Connect to your server via SSH and run the following command:

sudo $(which kubectl) scale deployment united-manufacturing-hub-kafkatopostgresql --replicas=0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml;
sudo $(which kubectl) scale deployment united-manufacturing-hub-kafkatopostgresqlv2 --replicas=0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml;
sudo $(which kubectl) scale deployment united-manufacturing-hub-factoryinsight-deployment --replicas=0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml

Restore the database

This section shows an example for restoring factoryinsight. If you want to restore grafana, you need to replace any occurrence of factoryinsight with grafana.

For umh_v2, you should use kafkatopostgresqlv2 for the user name and changemetoo for the password.

  1. Make sure that your device is connected to server via SSH and run the following command:

    sudo $(which kubectl) exec -it $(sudo $(which kubectl) get pods --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -l app.kubernetes.io/component=timescaledb -o jsonpath="{.items[0].metadata.name}") --kubeconfig /etc/rancher/k3s/k3s.yaml -n united-manufacturing-hub -- psql -U postgres
       

    This command will open a psql shell connected to the default postgres database.

  2. Drop the existing database:

    DROP DATABASE factoryinsight;
    
  3. Create a new database:

    CREATE DATABASE factoryinsight;
    \c factoryinsight
    CREATE EXTENSION IF NOT EXISTS timescaledb;
    
  4. Put the database in maintenance mode:

    SELECT timescaledb_pre_restore();
    
  5. Now, open a new terminal and restore schemas except Timescale-specific schemas with the following command:

    pg_restore -U factoryinsight -h <server-IP> -p 5432 --no-owner -Fc -v -d factoryinsight <path-to-dump_pre_data.bak>
    
  6. Connect to the database:

    psql "postgres://factoryinsight:<password>@<server-IP>:5432/factoryinsight?sslmode=require"
    
  7. Restore hypertables:

    • Commands for factoryinsight:
      SELECT create_hypertable('productTagTable', 'product_uid', chunk_time_interval => 100000);
      SELECT create_hypertable('productTagStringTable', 'product_uid', chunk_time_interval => 100000);
      SELECT create_hypertable('processValueStringTable', 'timestamp');
      SELECT create_hypertable('stateTable', 'timestamp');
      SELECT create_hypertable('countTable', 'timestamp');
      SELECT create_hypertable('processValueTable', 'timestamp');
      
    • Commands for umh_v2
      SELECT create_hypertable('tag', 'timestamp');
      SELECT create_hypertable('tag_string', 'timestamp');
      
    • Grafana database does not have hypertables by default.
  8. Run the following SQL commands for each table to restore data into database:

    \COPY <table-name> FROM '<table-name>.csv' WITH (FORMAT CSV);
    
  9. Go back to the terminal connected to the server and take the database out of maintenance mode. Make sure that the database shell is open:

    SELECT timescaledb_post_restore();
    

Enable kafkatopostgresql, kafkatopostgresqlv2, and factoryinsight

Run the following command to enable kafkatopostgresql, kafkatopostgresqlv2, and factoryinsight:

sudo $(which kubectl) scale deployment united-manufacturing-hub-kafkatopostgresql --replicas=1 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml;
sudo $(which kubectl) scale deployment united-manufacturing-hub-kafkatopostgresqlv2 --replicas=1 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml;
sudo $(which kubectl) scale deployment united-manufacturing-hub-factoryinsight-deployment --replicas=2 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml

What’s next

7.4.3 - Import and Export Node-RED Flows

This page describes how to import and export Node-RED flows.

Export Node-RED Flows

To export Node-RED flows, please follow the steps below:

  1. Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your browser. Replace <CLUSTER-IP> with the IP address of your cluster, or localhost if you are running the cluster locally.

  2. From the top-right menu, select Export.

  3. From the Export dialog, select which nodes or flows you want to export.

  4. Click Download to download the exported flows, or Copy to clipboard to copy the exported flows to the clipboard.

    (Screenshot: the Node-RED Export dialog)

The credentials of the connector nodes are not exported. You will need to re-enter them after importing the flows.

Import Node-RED Flows

To import Node-RED flows, please follow the steps below:

  1. Access Node-RED by navigating to http://<CLUSTER-IP>:1880/nodered in your browser. Replace <CLUSTER-IP> with the IP address of your cluster, or localhost if you are running the cluster locally.

  2. From the top-right menu, select Import.

  3. From the Import dialog, select the file containing the exported flows, or paste the exported flows from the clipboard.

  4. Click Import to import the flows.

    (Screenshot: the Node-RED Import dialog)

7.5 - Security

This section contains information about how to secure the United Manufacturing Hub.

7.5.1 - Enable RBAC for the MQTT Broker

This page describes how to enable Role-Based Access Control (RBAC) for the MQTT broker.

Enable RBAC

Enable RBAC by upgrading the value in the Helm chart.

To do so, run the following command:

sudo $(which helm) upgrade --set mqtt_broker.rbacEnabled=true united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub --reuse-values --version $(sudo $(which helm) get metadata united-manufacturing-hub -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -o json | jq '.version') --kubeconfig /etc/rancher/k3s/k3s.yaml

Now all MQTT connections require password authentication with the following defaults:

  • Username: node-red
  • Password: INSECURE_INSECURE_INSECURE
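To check that authentication is now enforced, you can connect with any MQTT client using these defaults. For example, with mosquitto_sub (not part of the UMH installation; the port and topic below are assumptions):

mosquitto_sub -h <INSTANCE_IP> -p 1883 -t '#' -u node-red -P 'INSECURE_INSECURE_INSECURE' -C 1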

Change default credentials

  1. Open a shell inside the Pod:

    sudo $(which kubectl) exec -it united-manufacturing-hub-hivemqce-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -- /bin/sh
    
  2. Navigate to the installation directory of the RBAC extension.

    cd extensions/hivemq-file-rbac-extension/
    
  3. Generate a password hash with this command.

    java -jar hivemq-file-rbac-extension-<version>.jar -p <password>
    
    • Replace <version> with the version of the HiveMQ CE extension. If you are not sure which version is installed, you can press Tab after typing java -jar hivemq-file-rbac-extension- to autocomplete the version.
    • Replace <password> with your desired password. Do not use any whitespaces.
  4. Copy the output of the command. It should look similar to this:

    $2a$10$Q8ZQ8ZQ8ZQ8ZQ8ZQ8ZQ8Zu
    
  5. Exit the shell by typing exit.

  6. Edit the ConfigMap to update the password hash.

    sudo $(which kubectl) edit configmap united-manufacturing-hub-hivemqce-extension -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    

    This command will open the default text editor with the ConfigMap contents. Replace the value between the <password> tags with the password hash you generated earlier.

    You can use a different password for each different microservice. Just remember that you will need to update the configuration in each one to use the new password.
  7. Save the changes.

  8. Recreate the Pod:

    sudo $(which kubectl) delete pod united-manufacturing-hub-hivemqce-0 -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml
    

What’s next

7.5.2 - Firewall Rules

This page describes how to setup firewall rules for the UMH instances.

Some enterprise networks operate in a whitelist manner, where all outgoing and incoming communication is blocked by default. However, the installation and maintenance of UMH requires internet access for tasks such as downloading the operating system, Docker containers, monitoring via the Management Console, and loading third-party plugins. As dependencies are hosted on various servers and may change based on vendors’ decisions, we’ve simplified the user experience by consolidating all mandatory services under a single domain. Nevertheless, if you wish to install third-party components like Node-RED or Grafana plugins, you’ll need to whitelist additional domains.

Before you begin

The only prerequisite is having a firewall that allows modification of rules. If you’re unsure about this, consider contacting your network administrator.

Firewall Configuration

Once you’re ready and ensured that you have the necessary permissions to configure the firewall, follow these steps:

Whitelist management.umh.app

This mandatory step requires whitelisting management.umh.app on TCP port 443 (HTTPS traffic). Not doing so will disrupt UMH functionality; installations, updates, and monitoring won’t work as expected.

Optional: Whitelist domains for common 3rd party plugins

Include these common external domains and ports in your firewall rules to allow installing Node-RED and Grafana plugins:

  • registry.npmjs.org (required for installing Node-RED plugins)
  • storage.googleapis.com (required for installing Grafana plugins)
  • grafana.com (required for displaying Grafana plugins)
  • catalogue.nodered.org (required for displaying Node-RED plugins; only relevant for the client that is using Node-RED, not the server it is installed on).

Depending on your setup, additional domains may need to be whitelisted.

DNS Configuration (Optional)

By default, we are using your DHCP-configured DNS servers. If you are using a static IP or want to use a different DNS server, contact us for a custom configuration file.

Bring your own containers

Our system tries to fetch all containers from our own registry (management.umh.app) first. If this fails, it will try to fetch docker.io images from https://registry-1.docker.io, ghcr.io images from https://ghcr.io, and quay.io images from https://quay.io (any other registry is fetched via management.umh.app). If you need to use a different registry, edit /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl to set your own mirror configuration.

Troubleshooting

I’m having connectivity problems. What should I do?

First of all, double-check that your firewall rules are configured as described in this page, especially the step involving our domain. As a quick test, you can use the following command from a different machine within the same network to check if the rules are working:

curl -vvv https://management.umh.app

7.5.3 - Setup PKI for the MQTT Broker

This page describes how to setup the Public Key Infrastructure (PKI) for the MQTT broker.

If you want to use MQTT over TLS (MQTTS) or Secure WebSocket (WSS), you need to set up a Public Key Infrastructure (PKI).

Read the blog article about secure communication in IoT to learn more about encryption and certificates.

Structure overview

The Public Key Infrastructure for HiveMQ consists of two Java Key Stores (JKS):

  • Keystore: The Keystore contains the HiveMQ certificate and private keys. This store must be confidential, since anyone with access to it could generate valid client certificates and read or send messages in your MQTT infrastructure.
  • Truststore: The Truststore contains all the clients public certificates. HiveMQ uses it to verify the authenticity of the connections.

Before you begin

You need to have the following tools installed:

  • OpenSSL. If you are using Windows, you can install it with Chocolatey.
  • Java

Create a Keystore

Open a terminal and run the following command:

keytool -genkey -keyalg RSA -alias hivemq -keystore hivemq.jks -storepass <password> -validity <days> -keysize 4096 -dname "CN=united-manufacturing-hub-mqtt" -ext "SAN=IP:127.0.0.1"

Replace the following placeholders:

  • <password>: The password for the keystore. You can use any password you want.
  • <days>: The number of days the certificate should be valid.

The command runs for a few minutes and generates a file named hivemq.jks in the current directory, which contains the HiveMQ certificate and private key.

If you want to explore the contents of the keystore, you can use Keystore Explorer.
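Alternatively, you can list the keystore contents from the command line:

keytool -list -v -keystore hivemq.jks -storepass <password>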

Generate client certificates

Open a terminal and create a directory for the client certificates:

mkdir pki

Follow these steps for each client you want to generate a certificate for.

  1. Create a new key pair:

    openssl req -new -x509 -newkey rsa:4096 -keyout "pki/<servicename>-key.pem" -out "pki/<servicename>-cert.pem" -nodes -days <days> -subj "/CN=<servicename>"
    
  2. Convert the certificate to the correct format:

    openssl x509 -outform der -in "pki/<servicename>-cert.pem" -out "pki/<servicename>.crt"
    
  3. Import the certificate into the Truststore:

    keytool -import -file "pki/<servicename>.crt" -alias "<servicename>" -keystore hivemq-trust-store.jks -storepass <password>
    

Replace the following placeholders:

  • <servicename> with the name of the client. Use the service name from the Network > Services tab in UMHLens / OpenLens.
  • <days> with the number of days the certificate should be valid.
  • <password> with the password for the Truststore. You can use any password you want.

Import the PKI into the United Manufacturing Hub

First you need to encode in base64 the Keystore, the Truststore and all the PEM files. Use the following script to encode everything automatically:

Get-ChildItem .\ -Recurse -Include *.jks,*.pem | ForEach-Object {
    # Read the file as raw bytes so binary keystores are not corrupted by text encoding
    $fileContentInBytes = [System.IO.File]::ReadAllBytes($_.FullName)
    $fileContentEncoded = [System.Convert]::ToBase64String($fileContentInBytes)
    Set-Content -Path "$($_.FullName).b64" -Value $fileContentEncoded
    Write-Host "$($_.FullName).b64 File Encoded Successfully!"
}

find ./ -regex '.*\.jks\|.*\.pem' -exec openssl base64 -A -in {} -out {}.b64 \;

You could also do it manually with the following command:

openssl base64 -A -in <filename> -out <filename>.b64

Now you can import the PKI into the United Manufacturing Hub. To do so, create a file named pki.yaml with the following content:

_000_commonConfig:
  infrastructure:
    mqtt:
      tls:
        keystoreBase64: <content of hivemq.jks.b64>
        keystorePassword: <password>
        truststoreBase64: <content of hivemq-trust-store.jks.b64>
        truststorePassword: <password>
        <servicename>.cert: <content of <servicename>-cert.pem.b64>
        <servicename>.key: <content of <servicename>-key.pem.b64>

Now, copy it to your instance with the following command:

scp pki.yaml <username>@<ip-address>:/tmp

After that, access the instance with SSH and run the following command:

sudo $(which helm) upgrade -f /tmp/pki.yaml united-manufacturing-hub united-manufacturing-hub/united-manufacturing-hub -n united-manufacturing-hub --reuse-values --version $(sudo $(which helm) get metadata united-manufacturing-hub -n united-manufacturing-hub --kubeconfig /etc/rancher/k3s/k3s.yaml -o json | jq '.version') --kubeconfig /etc/rancher/k3s/k3s.yaml

What’s next

8 - What's New

This section contains information about the new features and changes in the United Manufacturing Hub.

For release highlights, deprecations, and breaking changes in the United Manufacturing Hub, refer to these “What’s new” pages for each version.

8.1 - What's New in Version 0.2

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.2.

Welcome to United Manufacturing Hub version 0.2!

In this release we have some exciting changes to the Management Console!

0.2.0

Management Console

  • The Data Connections and Data Sources administration has been revised, and now it’s all in one place called Connection Management. This new concept revolves around the idea of a connection, which is just a link between your UMH instance and a data source. You can then configure the connection to fetch data from the source, and monitor its status. Additionally, you can now edit existing connections and data source configurations, and delete them if you don’t need them anymore.

0.2.2

Management Console

  • The updating functionality has been temporarily disabled, as it gives errors even when the update is successful. We are working on a fix for this issue and will re-enable the functionality as soon as possible.

0.2.3

Centralized all initial installation and continuous updating processes (Docker, k3s, helm, flatcar, …) to interact solely with management.umh.app. This ensures that only one domain is necessary to be allowed in the firewall for these activities.

Data Infrastructure

  • Upgraded the Helm Chart to version 0.10.6, which includes:
    • Transitioned Docker URLs to our internal registry from a single domain (see above)
    • Removed obsolete services: factoryinput, grafanaproxy, custom microservice tester, kafka-state-detector, mqtt-bridge. This change is also reflected in our documentation.
    • Resolved an issue where restarting kafka-to-postgres was necessary when adding a new topic.

Device & Container Infrastructure

  • Modified flatcar provisioning and the installation script to retrieve all necessary binaries from a single domain (see above)

Management Console

  • Addressed a bug that prevented the workspace tab from functioning correctly in the absence of configured connections.

0.2.4

Management Console

  • Addressed multiple bugs in the updater functionality, preventing the frontend from registering a completed update.

0.2.5

Management Console

  • Addressed multiple bugs in the updater functionality, preventing the frontend from registering a completed update.

0.2.6

Management Console

  • Fixed crash on connection loss.
  • Structures for the new data model are now in place.

0.2.7

Management Console

  • Added companion functionality to generate and send v2 formatted tags.
  • Added frontend functionality to retrieve v2 formatted tags.

0.2.8

Management Console

  • Re-enabled the updating functionality, which is now working as expected. You will need to manually update your instances’ Management Companion to the latest version to ensure compatibility. To do so, from you UMH instance, run the following command:

    sudo $(which kubectl) set image statefulset mgmtcompanion mgmtcompanion=management.umh.app/oci/united-manufacturing-hub/mgmtcompanion:0.2.8 -n mgmtcompanion --kubeconfig /etc/rancher/k3s/k3s.yaml
    

0.2.9

Management Console

  • Reduced load on kubernetes api.
  • Fixed issues with module health checking.
  • Fixed issues with system health display.
  • Added a panel to change the state of flags.
  • Fixed bugs in the tag browser.
  • Updated benthos-umh to 0.1.13.
  • Increased log verbosity for failed installations.

0.2.10

Management Console

  • Fixed a bug in the tag browser and enabled it by default.

0.2.11

Management Console

  • Fixed an issue in the updater, which made it crash on startup.

0.2.12

Management Console

  • Resolved issues with kubernetes caching, resulting in failing updates.

0.2.13

Management Console

  • Improved responsiveness and reliability of the companion.
  • Added event chart to the tag browser.
  • Improved simulator accuracy.

0.2.14

Management Console

  • Added history view for tags.
  • Fixed bugs with nil handling in the tag browser.

0.2.15

Management Console

  • Added SQL queries to the Tag Browser for Grafana & TimescaleDB.
  • Fixed a bug that prevented the correct health status of the system from being retrieved.
  • Client bug fixes and stability improvements in the Tag Browser.
  • Improved client error handling for the event values history feature.

Data Infrastructure

  • Upgraded the Helm Chart to version 0.13.6 which includes:
    • kafka-to-postgresql-v2 now supports the new data model (both _historian and _analytics topics).
    • Upgraded HiveMQ to 2024.1
    • Upgraded Redis to 7.2.4
    • Upgraded Go to 1.22
    • Upgraded Grafana to 9.5.5

0.2.16

Management Console

  • Added more robust error handling to the companion.

Installer

  • Added new location parameters in preparation for the new network visualization feature.

8.2 - What's New in Version 0.1

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.1.

Welcome to United Manufacturing Hub version 0.1! This marks the first release of the United Manufacturing Hub, even though it’s been available for a while now.

You might have already seen other versions (probably the ones in the archive), but those were only referring to the UMH Helm chart. This new versioning is meant to include the entire United Manufacturing Hub, as defined in the architecture.

So from now on, the United Manufacturing Hub will be versioned as a whole, and will include the Management Console, the Data Infrastructure, and the Device & Container Infrastructure, along with all the other bits and pieces that make up the United Manufacturing Hub.

0.1.0

Data Infrastructure

  • The Helm chart version has been updated to 0.9.34. This marks one of the final steps towards the full integration of the new data model. It is now possible to format the data into the ISA95 standard, send it through the Unified Namespace, and store it in the Historian.

Management Console

There are many features already available in the Management Console, so we’ll only list the most important ones here.

  • Provision the Data Infrastructure
  • Configure and manage connections and data sources
  • Visualize the Unified Namespace and the data flow
  • Upgrade the Management Companion directly from the Management Console. You will first need to manually upgrade it to this version, and then for all the future versions you will be able to do it directly from the Management Console.

Benthos-UMH

  • Connect OPC-UA servers to the United Manufacturing Hub
  • Configure how each node will send data to the Unified Namespace

8.3 - Archive

This section is meant to archive the “What’s new” pages only related to the United Manufacturing Hub’s Helm chart.

8.3.1 - What's New in Version 0.9.15

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.15.

Welcome to United Manufacturing Hub version 0.9.15! In this release we added support for the UNS data model, by introducing a new microservice, Data Bridge.

For a complete list of changes, refer to the release notes.

Data Bridge

Data-bridge is a microservice specifically tailored to adhere to the UNS data model. It consumes topics from a message broker, translates them to the proper format and publishes them to the other message broker.

It can consume from and publish to both Kafka and MQTT brokers, whether they are local or remote.

Its main purpose is to merge messages from multiple topics into a single topic, using the message key to identify the source topic.

Updated dependencies

We updated the following dependencies:

  • RedPanda to version 23.2.8
  • HiveMQ to the community edition version 2023.7

8.3.2 - What's New in Version 0.9.14

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.14.

Welcome to United Manufacturing Hub version 0.9.14! In this release we changed the Kafka broker from Apache Kafka to RedPanda, which is a Kafka-compatible event streaming platform. We also started migrating to a different Kafka library in our microservices, which will allow full ARM support in the future. Finally, we tweaked the overall resource usage of the United Manufacturing Hub to improve performance and efficiency, along with some bug fixes.

For a complete list of changes, refer to the release notes.

RedPanda

RedPanda is a Kafka-compatible event streaming platform. It is built with modern hardware in mind and utilizes multi-core CPUs efficiently, which can result in better performance compared to Kafka. RedPanda also offers lower latency and higher throughput, making it a better fit for real-time use cases in IIoT applications. Additionally, RedPanda has a simpler setup and management process compared to Kafka, which can save time and resources for development teams. Finally, RedPanda is fully compatible with Kafka’s API, allowing for a seamless transition for existing Kafka users.

Overall, Redpanda can provide improved performance and efficiency for IIoT applications that require real-time data processing and management with a lower setup and management cost.

Sarama Kafka Library

We started migrating our microservices to use the Sarama Kafka library. This library is written in Go and is fully compatible with RedPanda. This change will allow us to support ARM-based devices in the future, which will be useful for edge computing use cases. An added bonus is that Sarama is faster and requires less memory than the previous library.

For now we only migrated the following microservices:

  • barcodereader
  • kafka-init (used as an init container for components that communicate with Kafka)
  • mqtt-kafka-bridge

Resources tweaking

With this release we tweaked the resource requests of each default component of the United Manufacturing Hub to respect the minimum requirements of 4 cores and 8GB of RAM. This allowed us to increase the memory allocated to the MQTT broker, resolving the common Out Of Memory issue that caused the broker to restart.

Be sure to follow the upgrade guide to adjust your resources accordingly.

The following table shows the new resource requests and limits when deploying the United Manufacturing Hub with the default configuration or with all the components enabled. CPU values are expressed in millicores and memory values are expressed in mebibytes.

Resource                   Requests         Limits
CPU (default values)       1080m (27%)      1890m (47%)
Memory (default values)    1650Mi (21%)     2770Mi (35%)
CPU (all components)       2002m (50%)      2730m (68%)
Memory (all components)    2873Mi (36%)     3578Mi (45%)

The requested resources are the ones immediately allocated to the container when it starts, and the limits are the maximum amount of resources that the container can (but is not forced to) use. For more information about Kubernetes resources, refer to the official documentation.
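
For reference, requests and limits follow the standard Kubernetes resources schema wherever a component exposes them in values.yaml. The snippet below is a generic sketch with placeholder values; the exact top-level key depends on the component (here the Grafana sub-chart is used as an example).

grafana:
  resources:
    requests:
      cpu: 150m        # reserved for the container at start (placeholder value)
      memory: 256Mi
    limits:
      cpu: 300m        # upper bound the container may use (placeholder value)
      memory: 512Mi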

Container registry

We moved our container registry from Docker Hub to GitHub Container Registry. This change won’t affect the way you deploy the United Manufacturing Hub, but it will allow us to better manage our container images and provide a better experience for our developers. For the time being, we will continue to publish our images to Docker Hub, but we will eventually deprecate the old registry.

Others

  • Implemented a new test build to detect race conditions in the codebase. This will help us to improve the stability of the United Manufacturing Hub.
  • All our custom images now run as non-root by default, except for the ones that require root privileges.
  • The custom microservices now allow changing the type of Service used to expose them by setting the serviceType field (see the sketch after this list).
  • Added an SQL trigger function that deletes duplicate records from the statetable table after insertion.
  • Enhanced the environment variables validation in the codebase.
  • Added the possibility to set the aggregation interval when calculating the throughput of an asset.
  • Various dependencies have been updated to their latest versions.
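
A minimal sketch of the serviceType field, assuming a custom microservice defined under _001_customMicroservices; all keys other than serviceType are illustrative placeholders and may not match the chart's actual schema.

_001_customMicroservices:
  mymicroservice:                                   # placeholder name
    image: "ghcr.io/example/mymicroservice:1.0.0"   # placeholder image
    port: 8080                                      # placeholder port
    serviceType: LoadBalancer                       # e.g. ClusterIP, NodePort or LoadBalancer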

8.3.3 - What's New in Version 0.9.13

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.13.

Welcome to United Manufacturing Hub version 0.9.13! This is a minor release that only updates the new metrics feature.

For a complete list of changes, refer to the release notes.

8.3.4 - What's New in Version 0.9.12

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.12.

Welcome to United Manufacturing Hub version 0.9.12! Read on to learn about the new features of the UMH Datasource V2 plugin for Grafana, Redis running in standalone mode, and more.

For a complete list of changes, refer to the release notes.

Grafana

New Grafana version

Grafana has been upgraded to version 9.4.3. This introduces new search and navigation features, a redesigned details section of the logs, and a new data source connection page.

Head over to the Grafana release notes to learn more about the new features.

New Node-RED version

We have upgraded Node-RED to version 3.0.2. Check out the Node-RED release notes for more information.

UMH Datasource V2 plugin

The latest update to the datasource has incorporated typesafe JSON parsing, significantly enhancing the overall performance and dependability of the plugin. This implementation ensures that the parsing process strictly adheres to predefined data types, eliminating the possibility of unexpected errors or data corruption that can occur with loosely-typed JSON parsing.

Redis in standalone mode

Redis, the service used for caching, is now deployed in standalone mode. This change introduces these benefits:

  • Simplicity: Running Redis in standalone mode is simpler than using a master-replica topology with Sentinel. With standalone mode, there is only one Redis instance to manage, whereas with master-replica, you need to manage multiple Redis instances and the Sentinel process. This simplicity can reduce complexity and make it easier to manage Redis instances.
  • Lower Overhead: Standalone mode has lower overhead than using a master-replica topology with Sentinel. In a master-replica topology, there is a communication overhead between the master and the replicas, and Sentinel adds additional overhead for monitoring and failover management. In contrast, standalone mode does not have this overhead.
  • Better Performance: Since standalone mode does not have the overhead of master-replica topology with Sentinel, it can provide better performance. Standalone mode provides faster response times and can handle more requests per second than a master-replica topology with Sentinel.

That being said, it’s important to note that a master-replica topology with Sentinel provides higher availability and failover capabilities than standalone mode.

All basic services are now exposed by a LoadBalancer Service

The MQTT Broker, Kafka Broker, and Kafka Console are now exposed by a LoadBalancer Service, along with the Database, Grafana and Node-RED. This change makes it easier to access these services from outside the cluster, as they are now accessible via the IP address of the cluster.

When installing the United Manufacturing Hub locally, the cluster ports are automatically mapped to the host ports. This means that you can access the services from your browser by using localhost and the port number.

Read more about connecting to the services from outside the cluster in the related documentation.

Metrics

We introduced an optional microservice that can be used to collect metrics about the system, like OS, CPU, memory, hostname and average load. These metrics are then sent to our server for analysis, and are completely anonymous. This microservice is enabled by default, but can be disabled by setting the _000_commonConfig.metrics.enabled value to false in the values.yaml file.
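
For example, to opt out of metrics collection you can set the flag in your values.yaml:

_000_commonConfig:
  metrics:
    enabled: false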

Example metric:
{
   "OS":"linux",
   "Arch":"amd64",
   "Memory":{
      "total":16435666944,
      "available":11555106816,
      "used":4404510720,
      "usedPercent":26.798490958761544,
      "free":574394368,
      "active":3613691904,
      "inactive":10843209728,
      "wired":0,
      "laundry":0,
      "buffers":588361728,
      "cached":10868400128,
      "writeback":0,
      "dirty":122880,
      "writebacktmp":0,
      "shared":155168768,
      "slab":978030592,
      "sreclaimable":766824448,
      "sunreclaim":211206144,
      "pagetables":32157696,
      "swapcached":17887232,
      "commitlimit":12512800768,
      "committedas":16789483520,
      "hightotal":0,
      "highfree":0,
      "lowtotal":0,
      "lowfree":0,
      "swaptotal":4294967296,
      "swapfree":4165865472,
      "mapped":1214676992,
      "vmalloctotal":35184372087808,
      "vmallocused":60112896,
      "vmallocchunk":0,
      "hugepagestotal":0,
      "hugepagesfree":0,
      "hugepagesize":2097152
   },
   "CPUInfo":[
      {
         "cpu":0,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"0",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":1,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"0",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":2,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"1",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":3,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"1",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":4,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"2",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":5,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"2",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":6,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"3",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":7,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"3",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":8,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"4",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":9,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"4",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":10,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"5",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":11,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"5",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":12,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"6",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":13,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"6",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":14,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"7",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      },
      {
         "cpu":15,
         "vendorId":"AuthenticAMD",
         "family":"25",
         "model":"80",
         "stepping":0,
         "physicalId":"0",
         "coreId":"7",
         "cores":1,
         "modelName":"AMD Ryzen 9 5900HX with Radeon Graphics",
         "mhz":3293.73,
         "cacheSize":512,
         "flags":[
            "fpu",
            "vme",
            "de",
            "pse",
            "tsc",
            "msr",
            "pae",
            "mce",
            "cx8",
            "apic",
            "sep",
            "mtrr",
            "pge",
            "mca",
            "cmov",
            "pat",
            "pse36",
            "clflush",
            "mmx",
            "fxsr",
            "sse",
            "sse2",
            "ht",
            "syscall",
            "nx",
            "mmxext",
            "fxsr_opt",
            "pdpe1gb",
            "rdtscp",
            "lm",
            "constant_tsc",
            "rep_good",
            "nopl",
            "tsc_reliable",
            "nonstop_tsc",
            "cpuid",
            "extd_apicid",
            "pni",
            "pclmulqdq",
            "ssse3",
            "fma",
            "cx16",
            "sse4_1",
            "sse4_2",
            "movbe",
            "popcnt",
            "aes",
            "xsave",
            "avx",
            "f16c",
            "rdrand",
            "hypervisor",
            "lahf_lm",
            "cmp_legacy",
            "svm",
            "cr8_legacy",
            "abm",
            "sse4a",
            "misalignsse",
            "3dnowprefetch",
            "osvw",
            "topoext",
            "perfctr_core",
            "ssbd",
            "ibrs",
            "ibpb",
            "stibp",
            "vmmcall",
            "fsgsbase",
            "bmi1",
            "avx2",
            "smep",
            "bmi2",
            "erms",
            "invpcid",
            "rdseed",
            "adx",
            "smap",
            "clflushopt",
            "clwb",
            "sha_ni",
            "xsaveopt",
            "xsavec",
            "xgetbv1",
            "xsaves",
            "clzero",
            "xsaveerptr",
            "arat",
            "npt",
            "nrip_save",
            "tsc_scale",
            "vmcb_clean",
            "flushbyasid",
            "decodeassists",
            "pausefilter",
            "pfthreshold",
            "v_vmsave_vmload",
            "umip",
            "vaes",
            "vpclmulqdq",
            "rdpid",
            "fsrm"
         ],
         "microcode":"0xffffffff"
      }
   ],
   "Host":{
      "hostname":"7bd935ddb2b727e7ec31c2b17f238cd68a05eddae3de2f3f30df60128cf06e1d82b4643d6a1dfd54310fbc00d0f8e248a4ea2726b7e84bf1a420330d527253ee",
      "uptime":15610,
      "bootTime":1680249794,
      "procs":1,
      "os":"linux",
      "platform":"alpine",
      "platformFamily":"alpine",
      "platformVersion":"3.17.2",
      "kernelVersion":"5.15.90.1-microsoft-standard-WSL2",
      "kernelArch":"x86_64",
      "virtualizationSystem":"docker",
      "virtualizationRole":"guest",
      "hostid":"48f4b69b63d90af6691ee87361c0419af89c13d4504c4c85329599f5e0ea075e1668ed4788d6caaa2ffc299ca461b0d46e32e82c0c5405b23b29dcf8e5a8a1dc"
   },
   "Load":{
      "load1":0.05,
      "load5":0.19,
      "load15":0.49
   }
}

Sarama in MQTT-Kafka Bridge

We replaced the Confluent Kafka library with Sarama in the MQTT-Kafka Bridge. This improved performance and stability, and is a first step towards ARM compatibility.

Automated testing

We have added automated end-to-end testing to the United Manufacturing Hub. This includes testing the installation and the upgrading of the United Manufacturing Hub, as well as testing the functionality of the microservices.

Deprecations

Cameraconnect

The Cameraconnect microservice has been deprecated and removed from the United Manufacturing Hub. Its development has been taken over by Anticipate.

Blob storage

The blob storage service has been deprecated and removed from the United Manufacturing Hub. This includes the MinIO Operator and Tenant, and the MQTT to Blob microservice.

Fixes

Many fixes have been made to the United Manufacturing Hub, including known issues for Sensorconnect and MQTT Bridge.

8.3.5 - What's New in Version 0.9.11

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.11.

Welcome to United Manufacturing Hub version 0.9.11! This patch introduces only minor bugfixes for Factoryinput and Sensorconnect.

For a complete list of changes, refer to the release notes.

8.3.6 - What's New in Version 0.9.10

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.10.

Welcome to United Manufacturing Hub version 0.9.10! In this release, we have changed the MQTT broker to HiveMQ and the Kafka console to RedPanda Console. A new OPC UA server simulator has been added, along with a new API service to connect to Factoryinsight from outside the cluster, especially tailored for usage with Tulip. Grafana now comes with preinstalled plugins and datasources, and the UMH datasource V2 supports grouping of custom tags.

For a complete list of changes, refer to the release notes.

MQTT Broker

The MQTT broker has been changed from VerneMQ to HiveMQ. This change won’t affect the end user, but it will allow us to better maintain the MQTT broker in the future.

Read our comparison of MQTT brokers to learn more about the differences between VerneMQ and HiveMQ.

Kafka Console

The Kowl project has been acquired by RedPanda and is now called RedPanda Console. The functionalities are mostly the same.

OPC UA Server Simulator

A new data simulator for OPC UA has been added. It is based on the OPC/UA simulator by Amine, and it allows you to simulate OPC UA servers in order to test the United Manufacturing Hub.

Grafana

Default plugins

Grafana now comes with the following plugins preinstalled:

  • ACE.SVG by Andrew Rodgers
  • Button Panel by UMH Systems Gmbh
  • Button Panel by CloudSpout LLC
  • Discrete by Natel Energy
  • Dynamic Text by Marcus Olsson
  • FlowCharting by agent
  • Pareto Chart by isaozler
  • Pie Chart (old) by Grafana Labs
  • Timepicker Buttons Panel by williamvenner
  • UMH Datasource by UMH Systems Gmbh
  • UMH Datasource V2 by UMH Systems Gmbh
  • Untimely by factry
  • Worldmap Panel by Grafana Labs

Grouping of custom tags

The UMH datasource V2 now supports grouping of custom tags. This allows you to group processValues by a common prefix, and then use the group name as a variable in Grafana.

Tulip connector

A new API service has been added to connect to Factoryinsight from outside the cluster. It is especially tailored for usage with Tulip.

Read more about the Tulip connector.

8.3.7 - What's New in Version 0.9.9

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.9.

Welcome to United Manufacturing Hub version 0.9.9! This version introduces the PackML-MQTT-Simulator, to simulate a PackML state machine and publish the state changes to MQTT. It also includes new liveness probes for some of the Pods and minor fixes.

For a complete list of changes, refer to the release notes.

PackML-MQTT-Simulator

The PackML-MQTT-Simulator is a virtual line that interfaces using PackML implemented over MQTT. It allows you to simulate a PackML state machine and publish the state changes to MQTT.

8.3.8 - What's New in Version 0.9.8

This section contains information about the new features and changes in the United Manufacturing Hub introduced in version 0.9.8.

Welcome to United Manufacturing Hub version 0.9.8! Read on to learn about the new Factoryinsight API V2 and the related datasource plugin for Grafana with support for historian functionalities.

For a complete list of changes, refer to the release notes.

Historian functionalities

The new v2 API of Factoryinsight now supports historian functionalities. This means that you can now query the history of your data and visualize it in Grafana. The new datasource plugin for Grafana supports the Time Bucket Aggregation, which allows you to aggregate your data by values like the average, minimum or maximum.

9 - Reference

This section of the United Manufacturing Hub documentation contains references.

9.1 - Helm Chart

This page describes the Helm Chart of the United Manufacturing Hub and the possible configuration options.

Helm is a package manager for Kubernetes that simplifies the installation, configuration, and deployment of applications and services. A Helm chart contains all the necessary Kubernetes manifests, configuration files, and dependencies required to run a particular application or service. One of the main advantages of Helm is that it allows you to define the configuration of the installed resources in a single YAML file, called values.yaml. Helm provides great documentation on this process.

The Helm Chart of the United Manufacturing Hub is composed of both custom microservices and third-party applications. If you want a more in-depth view of the architecture of the United Manufacturing Hub, you can read the Architecture overview page.

Helm Chart structure

Custom microservices

The Helm Chart of the United Manufacturing Hub is composed of the following custom microservices:

  • barcodereader: reads the input from a barcode reader and sends it to the MQTT broker for further processing.
  • customMicroservice: a template for deploying any number of custom microservices.
  • data-bridge: transfers data between two Kafka or MQTT brokers, transforming the data following the UNS data model.
  • factoryinsight: provides REST endpoints to fetch data and calculate KPIs.
  • MQTT Simulator: simulates sensors and sends the data to the MQTT broker for further processing.
  • kafka-bridge: connects Kafka brokers on different Kubernetes clusters.
  • kafkatopostgresql: stores the data from the Kafka broker in a PostgreSQL database.
  • mqtt-kafka-bridge: connects the MQTT broker and the Kafka broker.
  • opcuasimulator: simulates OPC UA servers and sends the data to the MQTT broker for further processing.
  • packmlmqttsimulator: simulates a PackML state machine and sends the data to the MQTT broker for further processing.
  • sensorconnect: connects to a sensor and sends the data to the MQTT and Kafka brokers for further processing.
  • tulip-connector: exposes internal APIs to the internet, especially tailored for the Tulip platform.

Third-party applications

The Helm Chart of the United Manufacturing Hub is composed of the following third-party applications:

  • Grafana: a visualization and analytics software.
  • HiveMQ: an MQTT broker.
  • Node-RED: a programming tool for wiring together hardware devices, APIs and online services.
  • Redis: an in-memory data structure store, used for cache.
  • RedPanda: a Kafka-compatible distributed event streaming platform.
  • RedPanda Console: a web-based user interface for RedPanda.
  • TimescaleDB: an open-source time-series SQL database.

Configuration options

The Helm Chart of the United Manufacturing Hub can be configured by setting values in the values.yaml file. This file has three main sections that can be used to configure the applications:

  • customers: contains the definition of the customers that will be created during the installation of the Helm Chart. This section is optional, and it’s used only by factoryinsight.
  • _000_commonConfig: contains the basic configuration options to customize the United Manufacturing Hub, and it’s divided into sections that group applications with similar scope, like the ones that compose the infrastructure or the ones responsible for data processing. This is the section that should be mostly used to configure the microservices.
  • _001_customMicroservices: used to define the configuration of custom microservices that are not included in the Helm Chart.

After those three sections, there are the specific sections for each microservice, which contain their advanced configuration. This is the so-called Danger Zone, because the values in those sections should not be changed, unless you absolutely know what you are doing.

When a parameter contains . (dot) characters, it means that it is a nested parameter. For example, in the tls.factoryinsight.cert parameter the cert parameter is nested inside the tls.factoryinsight section, and the factoryinsight section is nested inside the tls section.
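
For example, a parameter documented as tls.factoryinsight.cert corresponds to the following nesting in values.yaml (the value here is just a placeholder):

tls:
  factoryinsight:
    cert: <your certificate>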

Customers

The customers section contains the definition of the customers that will be created during the installation of the Helm Chart. It’s a simple dictionary where the key is the name of the customer, and the value is the password.

For example, the following snippet creates two customers:

customers:
  customer1: password1
  customer2: password2

Common configuration options

The _000_commonConfig contains the basic configuration options to customize the United Manufacturing Hub, and it’s divided into sections that group applications with similar scope.

The following table lists the configuration options that can be set in the _000_commonConfig section:

_000_commonConfig section parameters
Parameter | Description | Type | Allowed values | Default
datainput | The configuration of the microservices used to input data. | object | See below | See below
datamodel_v2 | The configuration related to the UNS data model. | object | See below | See below
dataprocessing | The configuration of the microservices used to process data. | object | See below | See below
datasources | The configuration of the microservices used to acquire data. | object | See below | See below
datastorage | The configuration of the microservices used to store data. | object | See below | See below
debug | The configuration for the debug mode. | object | See below | See below
infrastructure | The configuration of the microservices used to provide infrastructure services. | object | See below | See below
kafkaBridge | The configuration for the Kafka bridge. | object | See below | See below
metrics.enabled | Whether to enable the anonymous metrics service or not. | bool | true, false | true
serialNumber | The hostname of the device. Used by some microservices to identify the device. | string | Any | default
tulipconnector | The configuration for the Tulip connector. | object | See below | See below

Data model v2

The _000_commonConfig.datamodel_v2 section contains the configuration related to the UNS data model.

The following table lists the configuration options that can be set in the _000_commonConfig.datamodel_v2 section:

datamodel_v2 section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the UNS data model should be used. | bool | true, false | true
bridges | List of data bridges to create. | list | See below | See below
database.name | The name of the database to use for the data model v2 | string | Any | umh_v2
database.host | The host of the database to use for the data model v2 | string | Any | united-manufacturing-hub
grafana.dbreader | The name of the Grafana read-only database user | string | Any | grafanareader
grafana.dbpassword | The password of the Grafana read-only database user | string | Any | changeme

Bridges

The _000_commonConfig.datamodel_v2.bridges section contains a list of configuration options for the data bridge. Each item in the list represents a data bridge instance, and the following table lists the configuration options that can be set in each item:

bridges section parameters
Parameter | Description | Type | Allowed values | Default
mode | The mode of the data bridge. | string | mqtt-kafka, kafka-kafka, mqtt-mqtt | mqtt-kafka
brokerA | The address of the source broker. Can be either MQTT or Kafka, and must include the port | string | Valid address | united-manufacturing-hub-mqtt:1883
brokerB | The address of the destination broker. Can be either MQTT or Kafka, and must include the port | string | Valid address | united-manufacturing-hub-kafka:9092
topic | The topic to subscribe to. Can be in either MQTT or Kafka form. Wildcards (# for MQTT, .* for Kafka) are allowed in order to subscribe to multiple topics | string | Any | umh.v1..*
topicMergePoint | The nth part of the topic to use as the message key. If the topic is umh/v1/acme/anytown/foo/bar/#, and this value is 5, then all the messages will end up in the topic umh.v1.acme.anytown.foo | int | Greater than 3 | 5
partitions | The number of partitions to use for the destination topic. Only used if the destination broker is Kafka. | int | Greater than 0 | 6
replicationFactor | The replication factor to use for the destination topic. Only used if the destination broker is Kafka. | int | Odd integer | 1
mqttEnableTLS | Whether to enable TLS for the MQTT connection. Only used with MQTT brokers | bool | true, false | false
mqttPassword | The password to use for the MQTT connection. Only used with MQTT brokers | string | Any | ""
messageLRUSize | The size of the LRU cache used to avoid message looping. Only used with MQTT brokers | int | Any | 1000000
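
For example, the following snippet defines a single MQTT-to-Kafka bridge using the default values from the table above:

_000_commonConfig:
  datamodel_v2:
    enabled: true
    bridges:
      - mode: mqtt-kafka
        brokerA: united-manufacturing-hub-mqtt:1883
        brokerB: united-manufacturing-hub-kafka:9092
        topic: umh.v1..*
        topicMergePoint: 5
        partitions: 6
        replicationFactor: 1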

Data sources

The _000_commonConfig.datasources section contains the configuration of the microservices used to acquire data, like the ones that connect to a sensor or simulate data.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources section:

datasources section parameters
Parameter | Description | Type | Allowed values | Default
barcodereader | The configuration of the barcodereader microservice. | object | See below | See below
iotsensorsmqtt | The configuration of the IoTSensorsMQTT microservice. | object | See below | See below
opcuasimulator | The configuration of the opcuasimulator microservice. | object | See below | See below
packmlmqttsimulator | The configuration of the packmlsimulator microservice. | object | See below | See below
sensorconnect | The configuration of the sensorconnect microservice. | object | See below | See below

Barcode reader

The _000_commonConfig.datasources.barcodereader section contains the configuration of the barcodereader microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.barcodereader section:

barcodereader section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the barcodereader microservice is enabled. | bool | true, false | false
USBDeviceName | The name of the USB device to use. | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner
USBDevicePath | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty | string | Valid Unix device path | ""
customerID | The customer ID to use in the topic structure. | string | Any | raw
location | The location to use in the topic structure. | string | Any | barcodereader
machineID | The asset ID to use in the topic structure. | string | Any | barcodereader
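
For example, the following snippet enables the barcodereader and overrides the topic structure; the customerID, location and machineID values are placeholders for your own naming:

_000_commonConfig:
  datasources:
    barcodereader:
      enabled: true
      USBDeviceName: Datalogic ADC, Inc. Handheld Barcode Scanner
      USBDevicePath: ""
      customerID: acme
      location: anytown
      machineID: scanner-01
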
IoT Sensors MQTT

The _000_commonConfig.datasources.iotsensorsmqtt section contains the configuration of the IoTSensorsMQTT microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.iotsensorsmqtt section:

iotsensorsmqtt section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the IoTSensorsMQTT microservice is enabled. | bool | true, false | true

OPC UA Simulator

The _000_commonConfig.datasources.opcuasimulator section contains the configuration of the opcuasimulator microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.opcuasimulator section:

opcuasimulator section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the opcuasimulator microservice is enabled. | bool | true, false | true

PackML MQTT Simulator

The _000_commonConfig.datasources.packmlmqttsimulator section contains the configuration of the packmlsimulator microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.packmlmqttsimulator section:

packmlmqttsimulator section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the packmlsimulator microservice is enabled. | bool | true, false | true

Sensor connect

The _000_commonConfig.datasources.sensorconnect section contains the configuration of the sensorconnect microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.sensorconnect section:

sensorconnect section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the sensorconnect microservice is enabled. | bool | true, false | false
iprange | The IP range of the sensors in CIDR notation. | string | Valid IP range | 192.168.10.1/24
enableKafka | Whether the sensorconnect microservice should use Kafka. | bool | true, false | true
enableMQTT | Whether the sensorconnect microservice should use MQTT. | bool | true, false | false
testMode | Whether to enable test mode. Only useful for development. | bool | true, false | false
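
For example, the following snippet enables sensorconnect and points it at the IP range of the sensors; the range shown is the chart default and should be adjusted to your network:

_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      iprange: 192.168.10.1/24
      enableKafka: true
      enableMQTT: false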

Data processing

The _000_commonConfig.dataprocessing section contains the configuration of the microservices used to process data, such as the nodered microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.dataprocessing section:

dataprocessing section parameters
Parameter | Description | Type | Allowed values | Default
nodered | The configuration of the nodered microservice. | object | See below | See below

Node-RED

The _000_commonConfig.dataprocessing.nodered section contains the configuration of the nodered microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.dataprocessing.nodered section:

nodered section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the nodered microservice is enabled. | bool | true, false | true
defaultFlows | Whether the default flows should be used. | bool | true, false | false
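
For example, the following snippet enables Node-RED without importing the default flows:

_000_commonConfig:
  dataprocessing:
    nodered:
      enabled: true
      defaultFlows: false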

Infrastructure

The _000_commonConfig.infrastructure section contains the configuration of the microservices responsible for connecting all the other microservices, such as the MQTT broker and the Kafka broker.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure section:

infrastructure section parameters
Parameter | Description | Type | Allowed values | Default
mqtt | The configuration of the MQTT broker. | object | See below | See below
kafka | The configuration of the Kafka broker. | object | See below | See below

MQTT

The _000_commonConfig.infrastructure.mqtt section contains the configuration of the MQTT broker.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure.mqtt section:

mqtt section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the MQTT broker is enabled | bool | true, false | true
adminUser.enabled | Whether the admin user is enabled | bool | true, false | false
adminUser.name | The name of the admin user | string | Any UTF-8 string | admin-user
adminUser.encryptedPassword | The encrypted password of the admin user | string | Any | ""
tls.useTLS | Whether TLS should be used | bool | true, false | true
tls.insecureSkipVerify | Whether the SSL certificate validation should be skipped | bool | true, false | true
tls.keystoreBase64 | The base64 encoded keystore | string | Any | ""
tls.keystorePassword | The password of the keystore | string | Any | ""
tls.truststoreBase64 | The base64 encoded truststore | string | Any | ""
tls.truststorePassword | The password of the truststore | string | Any | ""
tls.caCert | The CA certificate | string | Any | ""
tls.mqtt_kafka_bridge.cert | The certificate used for the mqttkafkabridge | string | Any | ""
tls.mqtt_kafka_bridge.key | The key used for the mqttkafkabridge | string | Any | ""
tls.sensorconnect.cert | The certificate used for the sensorconnect microservice | string | Any | ""
tls.sensorconnect.key | The key used for the sensorconnect microservice | string | Any | ""
tls.iotsensorsmqtt.cert | The certificate used for the iotsensorsmqtt microservice | string | Any | ""
tls.iotsensorsmqtt.key | The key used for the iotsensorsmqtt microservice | string | Any | ""
tls.packmlsimulator.cert | The certificate used for the packmlsimulator microservice | string | Any | ""
tls.packmlsimulator.key | The key used for the packmlsimulator microservice | string | Any | ""
tls.nodered.cert | The certificate used for the nodered microservice | string | Any | ""
tls.nodered.key | The key used for the nodered microservice | string | Any | ""
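
For example, the following snippet enables the admin user on the MQTT broker; the encryptedPassword value is a placeholder and has to be replaced with your own encrypted password:

_000_commonConfig:
  infrastructure:
    mqtt:
      enabled: true
      adminUser:
        enabled: true
        name: admin-user
        encryptedPassword: <your encrypted password>
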
Kafka

The _000_commonConfig.infrastructure.kafka section contains the configuration of the Kafka broker and related services, like mqttkafkabridge, kafkatopostgresql and the Kafka console.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure.kafka section:

kafka section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether the Kafka broker and related services are enabled | bool | true, false | true
useSSL | Whether SSL should be used | bool | true, false | true
defaultTopics | The default topics that should be created | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic
tls.CACert | The CA certificate | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafka.cert | The certificate used for the kafka broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafka.privkey | The private key of the certificate for the Kafka broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.barcodereader.sslKeyPassword | The encrypted password of the SSL key for the barcodereader microservice. If empty, no password is used | string | Any | ""
tls.barcodereader.sslKeyPem | The private key for the SSL certificate of the barcodereader microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.barcodereader.sslCertificatePem | The private SSL certificate for the barcodereader microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafkabridge.sslKeyPasswordLocal | The encrypted password of the SSL key for the local kafkabridge broker. If empty, no password is used | string | Any | ""
tls.kafkabridge.sslKeyPemLocal | The private key for the SSL certificate of the local kafkabridge broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.kafkabridge.sslCertificatePemLocal | The private SSL certificate for the local kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafkabridge.sslCACertRemote | The CA certificate for the remote kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafkabridge.sslCertificatePemRemote | The private SSL certificate for the remote kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafkabridge.sslKeyPasswordRemote | The encrypted password of the SSL key for the remote kafkabridge broker. If empty, no password is used | string | Any | ""
tls.kafkabridge.sslKeyPemRemote | The private key for the SSL certificate of the remote kafkabridge broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.kafkadebug.sslKeyPassword | The encrypted password of the SSL key for the kafkadebug microservice. If empty, no password is used | string | Any | ""
tls.kafkadebug.sslKeyPem | The private key for the SSL certificate of the kafkadebug microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.kafkadebug.sslCertificatePem | The private SSL certificate for the kafkadebug microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafkainit.sslKeyPassword | The encrypted password of the SSL key for the kafkainit microservice. If empty, no password is used | string | Any | ""
tls.kafkainit.sslKeyPem | The private key for the SSL certificate of the kafkainit microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.kafkainit.sslCertificatePem | The private SSL certificate for the kafkainit microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kafkatopostgresql.sslKeyPassword | The encrypted password of the SSL key for the kafkatopostgresql microservice. If empty, no password is used | string | Any | ""
tls.kafkatopostgresql.sslKeyPem | The private key for the SSL certificate of the kafkatopostgresql microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.kafkatopostgresql.sslCertificatePem | The private SSL certificate for the kafkatopostgresql microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.kowl.sslKeyPassword | The encrypted password of the SSL key for the kowl microservice. If empty, no password is used | string | Any | ""
tls.kowl.sslKeyPem | The private key for the SSL certificate of the kowl microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.kowl.sslCertificatePem | The private SSL certificate for the kowl microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.mqttkafkabridge.sslKeyPassword | The encrypted password of the SSL key for the mqttkafkabridge microservice. If empty, no password is used | string | Any | ""
tls.mqttkafkabridge.sslKeyPem | The private key for the SSL certificate of the mqttkafkabridge microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.mqttkafkabridge.sslCertificatePem | The private SSL certificate for the mqttkafkabridge microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.nodered.sslKeyPassword | The encrypted password of the SSL key for the nodered microservice. If empty, no password is used | string | Any | ""
tls.nodered.sslKeyPem | The private key for the SSL certificate of the nodered microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.nodered.sslCertificatePem | The private SSL certificate for the nodered microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----
tls.sensorconnect.sslKeyPassword | The encrypted password of the SSL key for the sensorconnect microservice. If empty, no password is used | string | Any | ""
tls.sensorconnect.sslKeyPem | The private key for the SSL certificate of the sensorconnect microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY-----
tls.sensorconnect.sslCertificatePem | The private SSL certificate for the sensorconnect microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE-----

Data storage

The _000_commonConfig.datastorage section contains the configuration of the microservices used to store data.

If you want to configure any of these microservices in more detail, you can do so in their respective sections in the Danger Zone.

The following table lists the configurable parameters of the _000_commonConfig.datastorage section.

datastorage section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the data storage microservices | bool | true, false | true
db_password | The password for the database. Used by all the microservices that need to connect to the database | string | Any | changeme
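
For example, the following snippet enables the data storage microservices and overrides the database password (changeme is the chart default and should not be used in production):

_000_commonConfig:
  datastorage:
    enabled: true
    db_password: <your database password>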

Kafka Bridge

The _000_commonConfig.kafkaBridge section contains the configuration of the kafka-bridge microservice, responsible for bridging Kafka brokers in different Kubernetes clusters.

The following table lists the configurable parameters of the _000_commonConfig.kafkaBridge section.

kafkaBridge section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the kafka-bridge microservice | bool | true, false | false
remotebootstrapServer | The URL of the remote Kafka broker | string | Any | ""
topicCreationLocalList | The list of topics to create locally | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic
topicCreationRemoteList | The list of topics to create remotely | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic
topicmap | The list of topic maps of topics to forward | object | See below | empty

Topic Map

The topicmap parameter is a list of topic maps, each of which contains the following parameters:

topicmap section parameters
Parameter | Description | Type | Allowed values
bidirectional | Whether to enable bidirectional communication for that topic | bool | true, false
name | The name of the map | string | HighIntegrity, HighThroughput
send_direction | The direction of the communication for that topic | string | to_remote, to_local
topic | The topic to forward. A regex can be used to match multiple topics. | string | Any valid Kafka topic

For more information about the topic maps, see the kafka-bridge documentation.
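
As a sketch, the following snippet enables kafka-bridge towards a remote cluster and forwards a single topic map; the remote address and the topic regex are placeholders, not chart defaults:

_000_commonConfig:
  kafkaBridge:
    enabled: true
    remotebootstrapServer: remote-kafka.example.com:9092  # placeholder address
    topicmap:
      - name: HighIntegrity
        bidirectional: false
        send_direction: to_remote
        topic: ^ia\..+\.count$  # placeholder regex matching count topics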

Debug

The _000_commonConfig.debug section contains the debug configuration for all the microservices. These values should not be enabled in production.

The following table lists the configurable parameters of the _000_commonConfig.debug section.

debug section parameters
Parameter | Description | Type | Allowed values | Default
enableFGTrace | Whether to enable the foreground trace | bool | true, false | false

Tulip Connector

The _000_commonConfig.tulipconnector section contains the configuration of the tulip-connector microservice, responsible for connecting a Tulip instance with the United Manufacturing Hub.

The following table lists the configurable parameters of the _000_commonConfig.tulipconnector section.

tulipconnector section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the tulip-connector microservice | bool | true, false | false
domain | The domain name pointing to your cluster | string | Any valid domain name | tulip-connector.changme.com
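
For example, the following snippet enables the tulip-connector and sets the public domain; the domain is a placeholder that has to point to your cluster:

_000_commonConfig:
  tulipconnector:
    enabled: true
    domain: tulip-connector.example.com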

Custom microservices configuration

The _001_customMicroservices section contains a list of custom microservice definitions. It can be used to deploy any application of your choice, which can be configured using the following parameters:

Custom microservices configuration parameters
Parameter | Description | Type | Allowed values | Default
name | The name of the microservice | string | Any | example
image | The image and tag of the microservice | string | Any | hello-world:latest
enabled | Whether to enable the microservice | bool | true, false | false
imagePullPolicy | The image pull policy of the microservice | string | Always, IfNotPresent, Never | Always
env | The list of environment variables to set for the microservice | object | Any | [{name: LOGGING_LEVEL, value: PRODUCTION}]
port | The internal port of the microservice to target | int | Any | 80
externalPort | The host port to which expose the internal port | int | Any | 8080
probePort | The port to use for the liveness and startup probes | int | Any | 9091
startupProbe | The interval in seconds for the startup probe | int | Any | 200
livenessProbe | The interval in seconds for the liveness probe | int | Any | 500
statefulEnabled | Create a PersistentVolumeClaim for the microservice and mount it in /data | bool | true, false | false
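
As a sketch, a custom microservice could be defined as follows; the parameters mirror the table above, but check the exact nesting against the default values.yaml of the chart before copying it:

_001_customMicroservices:
  - name: example
    enabled: true
    image: hello-world:latest
    imagePullPolicy: Always
    env:
      - name: LOGGING_LEVEL
        value: PRODUCTION
    port: 80
    externalPort: 8080
    probePort: 9091
    statefulEnabled: false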

Danger zone

The next sections contain a more advanced configuration of the microservices. Usually, changing the values of the previous sections is enough to run the United Manufacturing Hub. However, you may need to adjust some of the values below if you want to change the default behavior of the microservices.

Everything below this point should not be changed, unless you know what you are doing.
Danger zone advanced configuration
Section | Description
barcodereader | Configuration for barcodereader
databridge | Configuration for databridge
factoryinsight | Configuration for factoryinsight
grafana | Configuration for Grafana
iotsensorsmqtt | Configuration for the IoTSensorsMQTT simulator
kafkabridge | Configuration for kafka-bridge
kafkatopostgresql | Configuration for kafka-to-postgresql
kafkatopostgresqlv2 | Configuration for kafka-to-postgresql-v2
metrics | Configuration for the metrics
mqtt_broker | Configuration for the MQTT broker
mqttkafkabridge | Configuration for mqtt-kafka-bridge
nodered | Configuration for Node-RED
opcuasimulator | Configuration for the OPC UA simulator
packmlmqttsimulator | Configuration for the PackML MQTT simulator
redis | Configuration for Redis
redpanda | Configuration for the Kafka broker
sensorconnect | Configuration for sensorconnect
serviceAccount | Configuration for the service account used by the microservices
timescaledb-single | Configuration for TimescaleDB
tulipconnector | Configuration for tulip-connector

Sections

barcodereader

The barcodereader section contains the advanced configuration of the barcodereader microservice.

barcodereader advanced section parameters
Parameter | Description | Type | Allowed values | Default
annotations | Annotations to add to the Kubernetes resources | object | Any | {}
enabled | Whether to enable the barcodereader microservice | bool | true, false | false
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the barcodereader microservice | string | Any | ghcr.io/united-manufacturing-hub/barcodereader
image.tag | The tag of the barcodereader microservice. Defaults to Chart version if not set | string | Any |
resources.limits.cpu | The CPU limit | string | Any | 10m
resources.limits.memory | The memory limit | string | Any | 60Mi
resources.requests.cpu | The CPU request | string | Any | 2m
resources.requests.memory | The memory request | string | Any | 30Mi
scanOnly | Whether to only scan without sending the data to the Kafka broker | bool | true, false | false

databridge

The databridge section contains the advanced configuration of the databridge microservice.

databridge advanced section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the databridge microservice | bool | true, false | false
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the databridge microservice | string | Any | ghcr.io/united-manufacturing-hub/databridge
image.tag | The tag of the databridge microservice. Defaults to Chart version if not set | string | Any |
pdb.enabled | Whether to enable a PodDisruptionBudget | bool | true, false | true
pdb.minAvailable | The minimum number of available pods | int | Any | 1
replicas | The number of Pod replicas | int | Any | 1
resources.limits.cpu | The CPU limit | string | Any | 400m
resources.limits.memory | The memory limit | string | Any | 300Mi
resources.requests.cpu | The CPU request | string | Any | 500m
resources.requests.memory | The memory request | string | Any | 450Mi

factoryinsight

The factoryinsight section contains the advanced configuration of the factoryinsight microservice.

factoryinsight advanced section parameters
Parameter | Description | Type | Allowed values | Default
db_database | The database name | string | Any | factoryinsight
db_host | The host of the database | string | Any | united-manufacturing-hub
db_user | The database user | string | Any | factoryinsight
enabled | Whether to enable the factoryinsight microservice | bool | true, false | false
hpa.enabled | Whether to enable a HorizontalPodAutoscaler | bool | true, false | false
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the factoryinsight microservice | string | Any | ghcr.io/united-manufacturing-hub/factoryinsight
image.tag | The tag of the factoryinsight microservice. Defaults to Chart version if not set | string | Any |
ingress.enabled | Whether to enable an Ingress | bool | true, false | false
ingress.publicHostSecretName | The secret name of the public host of the Ingress | string | Any | ""
ingress.publicHost | The public host of the Ingress | string | Any | ""
insecure_no_auth | Whether to enable the insecure_no_auth mode | bool | true, false | false
pdb.enabled | Whether to enable a PodDisruptionBudget | bool | true, false | false
redis.URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379
replicas | The number of Pod replicas | int | Any | 2
resources.limits.cpu | The CPU limit | string | Any | 200m
resources.limits.memory | The memory limit | string | Any | 200Mi
resources.requests.cpu | The CPU request | string | Any | 50m
resources.requests.memory | The memory request | string | Any | 50Mi
service.annotations | Annotations to add to the factoryinsight Service | object | Any | {}
user | The user of factoryinsight | string | Any | factoryinsight
version | The version of the API used. Each version also enables all the previous ones | int | Any | 2

grafana

The grafana section contains the advanced configuration of the grafana microservice. This is based on the official Grafana Helm chart. For more information about the parameters, please refer to the official documentation.

Only the values that differ from the chart defaults are listed here.

grafana advanced section parameters
Parameter | Description | Type | Allowed values | Default
admin.existingSecret | The name of the secret containing the admin password | string | Any | grafana-secret
admin.passwordKey | The key of the admin password in the secret | string | Any | adminpassword
admin.userKey | The key of the admin user in the secret | string | Any | adminuser
datasources | The datasources configuration. | object | Any | See datasources section
envValueFrom | Environment variables to add to the Pod, from a secret or a configmap | object | Any | See envValueFrom section
env | Environment variables to add to the Pod | object | Any | See env section
extraInitContainers | Extra init containers to add to the Pod | object | Any | See extraInitContainers section
grafana.ini | The grafana.ini configuration. | object | Any | See grafana.ini section
initChownData.enabled | Whether to enable the initChownData job, to reset data ownership at startup | bool | true, false | true
persistence.enabled | Whether to enable persistence | bool | true, false | true
persistence.size | The size of the persistent volume | string | Any | 5Gi
podDisruptionBudget.minAvailable | The minimum number of available pods | int | Any | 1
service.port | The port of the Service | int | Any | 8080
service.type | The type of Service to expose | string | ClusterIP, LoadBalancer | LoadBalancer
serviceAccount.create | Whether to create a ServiceAccount | bool | true, false | false
testFramework.enabled | Whether to enable the test framework | bool | true, false | false

datasources

The datasources section contains the configuration of the datasources provisioning. See the Grafana documentation for more information.

datasources.yaml:
  apiVersion: 1
  datasources:
    - name: umh-v2-datasource
      # <string, required> datasource type. Required
      type: umh-v2-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerID: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        baseURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: false
      editable: false
    # <string, required> name of the datasource. Required
    - name: umh-datasource
      # <string, required> datasource type. Required
      type: umh-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerId: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        serverURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: false
      editable: false
    - name: UMH TimescaleDB 
      type: postgres
      url: united-manufacturing-hub:5432
      user: $GRAFANAREADER_USER
      isDefault: true
      secureJsonData:
        password: $GRAFANAREADER_PASSWORD
      jsonData:
        database: umh_v2
        sslmode: 'require' # disable/require/verify-ca/verify-full
        maxOpenConns: 100 # Grafana v5.4+
        maxIdleConns: 100 # Grafana v5.4+
        maxIdleConnsAuto: true # Grafana v9.5.1+
        connMaxLifetime: 14400 # Grafana v5.4+
        postgresVersion: 1300 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
        timescaledb: true
envValueFrom

The envValueFrom section contains the configuration of the environment variables to add to the Pod, from a secret or a configmap.

grafana envValueFrom section parameters
Parameter | Description | Value from | Name | Key
FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | apiKey
FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | secretKeyRef | factoryinsight-secret | baseURL
FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | customerID
FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | password
GRAFANAREADER_USER | The name of the Grafana read-only user for the data model v2 | secretKeyRef | grafana-secret | grafanareader
GRAFANAREADER_PASSWORD | The password of the Grafana read-only user for the data model v2 | secretKeyRef | grafana-secret | grafanareaderpassword

env

The env section contains the configuration of the environment variables to add to the Pod.

grafana env section parameters
Parameter | Description | Type | Allowed values | Default
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource, umh-v2-datasource

extraInitContainers

The extraInitContainers section contains the configuration of the extra init containers to add to the Pod.

The init-plugins container is used to install the default plugins shipped with the UMH version of Grafana without the need to have an internet connection. See the documentation for a list of the plugins.

- image: unitedmanufacturinghub/grafana-umh:1.2.0
  name: init-plugins
  imagePullPolicy: IfNotPresent
  command: ['sh', '-c', 'cp -r /plugins /var/lib/grafana/']
  volumeMounts:
    - name: storage
      mountPath: /var/lib/grafana
grafana.ini

The grafana.ini section contains the configuration of the grafana.ini file. See the Grafana documentation for more information.

paths:
  data: /var/lib/grafana/data
  logs: /var/log/grafana
  plugins: /var/lib/grafana/plugins
  provisioning: /etc/grafana/provisioning
database:
  host: united-manufacturing-hub
  user: "grafana"
  name: "grafana"
  password: "changeme"
  ssl_mode: require
  type: postgres

iotsensorsmqtt

The iotsensorsmqtt section contains the configuration of the IoT Sensors MQTT microservice.

iotsensorsmqtt section parameters
Parameter | Description | Type | Allowed values | Default
image | The image of the iotsensorsmqtt microservice | string | Any | amineamaach/sensors-mqtt
mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password
mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
resources.limits.cpu | The CPU limit | string | Any | 30m
resources.limits.memory | The memory limit | string | Any | 50Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 20Mi
tag | The tag of the iotsensorsmqtt microservice. Defaults to latest if not set | string | Any | v1.0.0

kafkabridge

The kafkabridge section contains the configuration of the Kafka bridge.

kafkabridge section parameters
Parameter | Description | Type | Allowed values | Default
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the kafka-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-bridge
image.tag | The tag of the kafka-bridge microservice. Defaults to Chart version if not set | string | Any |
initContainer.pullPolicy | The image pull policy of the init container | string | Always, IfNotPresent, Never | IfNotPresent
initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init
initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any |

kafkatopostgresql

The kafkatopostgresql section contains the configuration of the Kafka to PostgreSQL microservice.

kafkatopostgresql section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the Kafka to PostgreSQL microservice | bool | true, false | true
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the kafkatopostgresql microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-to-postgresql
image.tag | The tag of the kafkatopostgresql microservice. Defaults to Chart version if not set | string | Any |
initContainer.pullPolicy | The image pull policy of the init container | string | Always, IfNotPresent, Never | IfNotPresent
initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init
initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any |
replicas | The number of Pod replicas | int | Any | 1
resources.limits.cpu | The CPU limit | string | Any | 200m
resources.limits.memory | The memory limit | string | Any | 300Mi
resources.requests.cpu | The CPU request | string | Any | 50m
resources.requests.memory | The memory request | string | Any | 150Mi

kafkatopostgresqlv2

The kafkatopostgresqlv2 section contains the configuration of the Kafka to PostgreSQL v2 microservice.

kafkatopostgresqlv2 section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the Kafka to PostgreSQL v2 microservice | bool | true, false | true
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the kafkatopostgresqlv2 microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-to-postgresql-v2
image.tag | The tag of the kafkatopostgresqlv2 microservice. Defaults to Chart version if not set | string | Any |
replicas | The number of Pod replicas | int | Any | 1
resources.limits.cpu | The CPU limit | string | Any | 200m
resources.limits.memory | The memory limit | string | Any | 300Mi
resources.requests.cpu | The CPU request | string | Any | 50m
resources.requests.memory | The memory request | string | Any | 150Mi
probes.startup.failureThreshold | The failure threshold of the startup probe | int | Any | 30
probes.startup.initialDelaySeconds | The initial delay of the startup probe | int | Any | 10
probes.startup.periodSeconds | The period of the startup probe | int | Any | 10
probes.liveness.periodSeconds | The period of the liveness probe | int | Any | 10
probes.readiness.periodSeconds | The period of the readiness probe | int | Any | 10
logging.level | The logging level of the microservice | string | PRODUCTION, DEVELOPMENT | PRODUCTION
asset.cache.lru.size | The size of the LRU cache | int | Any | 1000
workers.channel.size | Size in messages for each worker's channel | int | Any | 10000
workers.goroutines.multiplier | The multiplier of the number of goroutines. The total number of goroutines is determined by the CPU count times the multiplier | int | Any | 16
database.user | The name of the database user for the Kafka to PostgreSQL v2 microservice | string | Any | kafkatopostgresqlv2
database.password | The password of the database user for the Kafka to PostgreSQL v2 microservice | string | Any | changemetoo

metrics

The metrics section contains the configuration of the metrics CronJob that sends anonymous usage data.

metrics section parameters
Parameter | Description | Type | Allowed values | Default
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the metrics microservice | string | Any | ghcr.io/united-manufacturing-hub/metrics
cronJob.schedule | The schedule of the CronJob | string | Any | 0 */4 * * * (every 4 hours)

mqtt_broker

The mqtt_broker section contains the configuration of the MQTT broker.

mqtt_broker section parameters
Parameter | Description | Type | Allowed values | Default
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
image.repository | The image of the mqtt_broker microservice | string | Any | hivemq/hivemq-ce
image.tag | The tag of the mqtt_broker microservice. Defaults to 2022.1 if not set | string | Any | 2022.1
initContainer | The init container configuration | object | Any | See initContainer section
persistence.extension.size | The size of the persistence volume for the extensions | string | Any | 100Mi
persistence.storage.size | The size of the persistence volume for the storage | string | Any | 2Gi
rbacEnabled | Whether to enable RBAC | bool | true, false | false
resources.limits.cpu | The CPU limit | string | Any | 700m
resources.limits.memory | The memory limit | string | Any | 1700Mi
resources.requests.cpu | The CPU request | string | Any | 300m
resources.requests.memory | The memory request | string | Any | 1000Mi
service.mqtt.enabled | Whether to enable the MQTT service | bool | true, false | true
service.mqtt.port | The port of the MQTT service | int | Any | 1883
service.mqtts.cipher_suites | The ciphersuites to enable | string array | Any | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA
service.mqtts.enabled | Whether to enable the MQTT over TLS service | bool | true, false | true
service.mqtts.port | The port of the MQTT over TLS service | int | Any | 8883
service.mqtts.tls_versions | The TLS versions to enable | string array | Any | TLSv1.3, TLSv1.2
service.ws.enabled | Whether to enable the WebSocket service | bool | true, false | false
service.ws.port | The port of the WebSocket service | int | Any | 8080
service.wss.cipher_suites | The ciphersuites to enable | string array | Any | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA
service.wss.enabled | Whether to enable the WebSocket over TLS service | bool | true, false | false
service.wss.port | The port of the WebSocket over TLS service | int | Any | 8443
service.wss.tls_versions | The TLS versions to enable | string array | Any | TLSv1.3, TLSv1.2

initContainer

The initContainer section contains the configuration for the init containers. By default, the hivemqextensioninit container is used to initialize the HiveMQ extensions.

initContainer:
  hivemqextensioninit:
    image:
      repository: unitedmanufacturinghub/hivemq-init
      tag: 2.0.0
      pullPolicy: IfNotPresent

mqttkafkabridge

The mqttkafkabridge section contains the configuration of the MQTT-Kafka bridge.

mqttkafkabridge section parameters
Parameter | Description | Type | Allowed values | Default
enabled | Whether to enable the MQTT-Kafka bridge | bool | true, false | false
image.pullPolicy | The pull policy of the mqtt-kafka-bridge microservice | string | Any | IfNotPresent
image.repository | The image of the mqtt-kafka-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/mqtt-kafka-bridge
image.tag | The tag of the mqtt-kafka-bridge microservice. Defaults to Chart version if not set | string | Any |
initContainer.pullPolicy | The pull policy of the init container | string | Any | IfNotPresent
initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init
initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any |
kafkaAcceptNoOrigin | Allow access to the Kafka broker without a valid x-trace | bool | true, false | false
kafkaSenderThreads | The number of threads for sending messages to the Kafka broker | int | Any | 1
messageLRUSize | The size of the LRU cache for messages | int | Any | 100000
mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password
mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
mqttSenderThreads | The number of threads for sending messages to the MQTT broker | int | Any | 1
pdb.enabled | Whether to enable the pod disruption budget | bool | true, false | true
pdb.minAvailable | The minimum number of pods that must be available | int | Any | 1
rawMessageLRUSize | The size of the LRU cache for raw messages | int | Any | 100000
resources.limits.cpu | The CPU limit | string | Any | 500m
resources.limits.memory | The memory limit | string | Any | 450Mi
resources.requests.cpu | The CPU request | string | Any | 400m
resources.requests.memory | The memory request | string | Any | 300Mi

nodered

The nodered section contains the configuration of the Node-RED microservice.

nodered section parameters
Parameter | Description | Type | Allowed values | Default
env | Environment variables to add to the Pod | object | Any | See env section
flows | A JSON string containing the flows to import into Node-RED | string | Any | See the documentation
ingress.enabled | Whether to enable the ingress | bool | true, false | false
ingress.publicHostSecretName | The secret name of the public host of the Ingress | string | Any | ""
ingress.publicHost | The public host of the Ingress | string | Any | ""
mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password
port | The port of the Node-RED service | int | Any | 1880
serviceType | The type of the service | string | ClusterIP, LoadBalancer | LoadBalancer
settings | A JSON string containing the settings of Node-RED | string | Any | See the documentation
storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi
tag | The Node-RED version | string | Any | 2.0.6
timezone | The timezone | string | Any | Berlin/Europe

env

The env section contains the environment variables to add to the Pod.

env section parameters
Parameter | Description | Type | Allowed values | Default
NODE_RED_ENABLE_SAVE_MODE | Whether to enable the save mode | bool | true, false | false

opcuasimulator

The opcuasimulator section contains the configuration of the OPC UA Simulator microservice.

opcuasimulator section parameters
Parameter | Description | Type | Allowed values | Default
certadds.hosts | Hosts to add to the certificate | string | Any | united-manufacturing-hub-opcuasimulator-service
certadds.ips | IPs to add to the certificate | string | Any | ""
image | The image of the OPC UA Simulator microservice | string | Any | ghcr.io/united-manufacturing-hub/opcuasimulator
resources.limits.cpu | The CPU limit | string | Any | 30m
resources.limits.memory | The memory limit | string | Any | 50Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 20Mi
service.annotations | The annotations of the service | object | Any | {}
tag | The tag of the OPC UA Simulator microservice. Defaults to latest if not set | string | Any | 0.1.0

packmlmqttsimulator

The packmlmqttsimulator section contains the configuration of the PackML MQTT Simulator microservice.

packmlmqttsimulator section parameters
Parameter | Description | Type | Allowed values | Default
image.repository | The image of the PackML MQTT Simulator microservice | string | Any | spruiktec/packml-simulator
image.hash | The hash of the image of the PackML MQTT Simulator microservice | string | Any | 01e2f0da3542f1b4e0de830a8d24135de03fd9174dce184ed329bed3ee688e19
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
replicas | The number of replicas | int | Any | 1
resources.limits.cpu | The CPU limit | string | Any | 30m
resources.limits.memory | The memory limit | string | Any | 50Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 20Mi
env | Environment variables to add to the Pod | object | Any | See env section

env

The env section contains the environment variables to add to the Pod.

env section parameters
Parameter | Description | Type | Allowed values | Default
area | ISA-95 area name of the line | string | Any | DefaultArea
productionLine | ISA-95 line name of the line | string | Any | DefaultProductionLine
site | ISA-95 site name of the line | string | Any | testLocation
mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password

redis

The redis section contains the configuration of the Redis microservice. This is based on the official Redis Helm chart. For more information about the parameters, see the official documentation.

Only the values that differ from the chart defaults are listed here.

redis section parameters
Parameter | Description | Type | Allowed values | Default
architecture | Redis architecture | string | standalone, replication | standalone
auth.existingSecretPasswordKey | Password key to be retrieved from existing secret | string | Any | redispassword
auth.existingSecret | The name of the existing secret with Redis credentials | string | Any | redis-secret
commonConfiguration | Common configuration to be added into the ConfigMap | string | Any | See commonConfiguration section
master.extraFlags | Array with additional command line flags for Redis master | string array | Any | --maxmemory 200mb
master.livenessProbe.initialDelaySeconds | The initial delay before the liveness probe starts | int | Any | 5
master.readinessProbe.initialDelaySeconds | The initial delay before the readiness probe starts | int | Any | 120
master.resources.limits.cpu | The CPU limit | string | Any | 100m
master.resources.limits.memory | The memory limit | string | Any | 100Mi
master.resources.requests.cpu | The CPU request | string | Any | 50m
master.resources.requests.memory | The memory request | string | Any | 50Mi
metrics.enabled | Start a sidecar prometheus exporter to expose Redis metrics | bool | true, false | true
pdb.create | Whether to create a Pod Disruption Budget | bool | true, false | true
pdb.minAvailable | Min number of pods that must still be available after the eviction | int | Any | 2
serviceAccount.create | Whether to create a service account | bool | true, false | false

commonConfiguration

The commonConfiguration section contains the common configuration to be added into the ConfigMap. For more information, see the documentation.

# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
# Disable RDB persistence, AOF persistence already enabled.
save ""
# Backwards compatibility with Redis version 6.*
replica-ignore-disk-write-errors yes

redpanda

The redpanda section contains the configuration of the Kafka broker. This is based on the RedPanda chart. For more information about the parameters, see the official documentation.

Only the values that differ from the chart defaults are listed here.

redpanda section parameters
Parameter | Description | Type | Allowed values | Default
config.cluster.auto_create_topics_enabled | Whether to enable auto creation of topics | bool | true, false | true
console | The configuration for RedPanda Console | object | Any | See console section
external.type | The type of Service for external access | string | NodePort, LoadBalancer | NodePort
fullnameOverride | The full name override | string | Any | united-manufacturing-hub-kafka
listeners.kafka.port | The port of the Kafka listener | int | Any | 9092
rbac.enable | Whether to enable RBAC | bool | true, false | true
resources.cpu.cores | The number of CPU cores to allocate to the Kafka broker | int | Any | 1
resources.memory.container.max | Maximum memory count for each broker | string | Any | 2Gi
resources.memory.enable_memory_locking | Whether to enable memory locking | bool | true, false | true
serviceAccount.create | Whether to create a service account | bool | true, false | false
statefulset.replicas | The number of brokers | int | Any | 1
storage.persistentVolume.size | The size of the persistent volume | string | Any | 10Gi
tls.enabled | Whether to enable TLS | bool | true, false | false

console

The console section contains the configuration of the RedPanda Console.

For more information about the parameters, see the official documentation.

console section parameters
Parameter | Description | Type | Allowed values | Default
console.config.kafka.brokers | The list of Kafka brokers | list | Any | united-manufacturing-hub-kafka:9092
service.port | The port of the Service to expose | int | Any | 8090
service.targetPort | The target port of the Service to expose | int | Any | 8080
service.type | The type of Service to expose | string | ClusterIP, NodePort, LoadBalancer | LoadBalancer
serviceAccount.create | Whether to create a service account | bool | true, false | false

sensorconnect

The sensorconnect section contains the configuration of the Sensorconnect microservice.

sensorconnect section parameters
Parameter | Description | Type | Allowed values | Default
additionalSleepTimePerActivePortMs | Additional sleep time between pollings for each active port in milliseconds | float | Any | 0.0
additionalSlowDownMap | JSON map of values that allows slowing down and speeding up the polling time of specific sensors | JSON | Any | {}
allowSubTwentyMs | Whether to allow sub 20ms polling time. Set to 1 to enable. Not recommended | int | 0, 1 | 0
deviceFinderTimeSec | Time interval in seconds between new device discovery | int | Any | 20
deviceFinderTimeoutSec | Timeout in seconds for device discovery. Never set lower than deviceFinderTimeSec | int | Any | 1
image | The image of the sensorconnect microservice | string | Any | ghcr.io/united-manufacturing-hub/sensorconnect
ioddfilepath | The path to the IODD files | string | Any | /ioddfiles
lowerPollingTime | The lower polling time in milliseconds | int | Any | 100
maxSensorErrorCount | The maximum number of sensor errors before the sensor is marked as not responding | int | Any | 50
mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password
mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
pollingSpeedStepDownMs | The time to subtract from the polling time in milliseconds when a sensor is responding | int | Any | 1
pollingSpeedStepUpMs | The time to add to the polling time in milliseconds when a sensor is not responding | int | Any | 20
resources.limits.cpu | The CPU limit | string | Any | 100m
resources.limits.memory | The memory limit | string | Any | 200Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 75Mi
storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi
tag | The tag of the sensorconnect microservice. Defaults to Chart version if not set | string | Any |
upperPollingTime | The upper polling time in milliseconds | int | Any | 1000

serviceAccount

The serviceAccount section contains the configuration of the service account. See the Kubernetes documentation for more information.

serviceAccount section parameters
Parameter | Description | Type | Allowed values | Default
create | Whether to create a service account | bool | true, false | true

timescaledb-single

The timescaledb-single section contains the configuration of the TimescaleDB microservice. This is based on the official TimescaleDB Helm chart. For more information about the parameters, see the official documentation.

Only the values that differ from the chart defaults are listed here.

timescaledb-single section parameters
Parameter | Description | Type | Allowed values | Default
replicaCount | The number of replicas | int | Any | 1
image.repository | The image of the TimescaleDB microservice | string | Any | ghcr.io/united-manufacturing-hub/timescaledb
image.tag | The Timescaledb-ha version | string | Any | pg13.8-ts2.8.0-p1
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
patroni.postgresql.create_replica_methods | The replica creation method | string array | Any | basebackup
postInit | A list of sources that contain post init scripts | object array | Any | See postInit
service.primary.type | The type of the primary service | string | ClusterIP, NodePort, LoadBalancer | LoadBalancer
serviceAccount.create | Whether to create a service account | bool | true, false | false

postInit

The postInit parameter is a list of references to sources that contain post init scripts. The scripts are executed after the database is initialized.

postInit:
  - configMap:
      name: {{ resource type="configmap" name="database" }}
      optional: false
  - secret:
      name: {{ resource type="secret" name="database" }}
      optional: false

tulipconnector

The tulipconnector section contains the configuration of the Tulip Connector microservice.

tulipconnector section parameters
Parameter | Description | Type | Allowed values | Default
image.repository | The image of the Tulip Connector microservice | string | Any | ghcr.io/united-manufacturing-hub/tulip-connector
image.tag | The tag of the Tulip Connector microservice. Defaults to latest if not set | string | Any | 0.1.0
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
replicas | The number of Pod replicas | int | Any | 1
env | The environment variables | object | Any | See env
resources.limits.cpu | The CPU limit | string | Any | 30m
resources.limits.memory | The memory limit | string | Any | 50Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 20Mi

env

The env section contains the configuration of the environment variables to add to the Pod.

env section parameters
Parameter | Description | Type | Allowed values | Default
mode | In which mode to run the Tulip Connector | string | dev, prod | prod


9.2 - Microservices

This section contains the reference documentation for the microservices that can be found in the United Manufacturing Hub.

This section contains the technical documentation for the microservices that compose the United Manufacturing Hub.

9.2.1 - Barcodereader

The technical documentation of the barcodereader microservice, which reads barcodes and sends the data to the Kafka broker.

Kubernetes resources

  • Deployment: united-manufacturing-hub-barcodereader
  • Secret: united-manufacturing-hub-barcodereader-secrets

Configuration

Environment variables

Variable name | Description | Type | Allowed values | Default
ASSET_ID | The asset ID, which is used for the topic structure | string | Any | barcodereader
CUSTOMER_ID | The customer ID, which is used for the topic structure | string | Any | raw
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
INPUT_DEVICE_NAME | The name of the USB device to use | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner
INPUT_DEVICE_PATH | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty | string | Valid Unix device path | ""
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
LOCATION | The location, which is used for the topic structure | string | Any | barcodereader
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-barcodereader
SCAN_ONLY | Prevent message broadcasting if enabled | bool | true, false | false
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

9.2.2 - Cache

The technical documentation of the redis microservice, which is used as a cache for the other microservices.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-redis-master
  • Service:
    • Internal ClusterIP:
      • Redis: united-manufacturing-hub-redis-master at port 6379
      • Headless: united-manufacturing-hub-redis-headless at port 6379
      • Metrics: united-manufacturing-hub-redis-metrics at port 6379
  • ConfigMap:
    • Configuration: united-manufacturing-hub-redis-configuration
    • Health: united-manufacturing-hub-redis-health
    • Scripts: united-manufacturing-hub-redis-scripts
  • Secret: redis-secret
  • PersistentVolumeClaim: redis-data-united-manufacturing-hub-redis-master-0

Configuration

You shouldn’t need to configure the cache manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redis section of the Helm chart values file.

You can consult the Bitnami Redis chart for more information about the available configuration options.
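
As a hedged example, increasing the storage of the Redis master could look like the sketch below; the key names follow the Bitnami Redis chart and should be checked against the chart version bundled with your release:

redis:
  master:
    persistence:
      # size of the PersistentVolumeClaim for the Redis master
      size: 8Gi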

Environment variables

Variable name | Description | Type | Allowed values | Default
ALLOW_EMPTY_PASSWORD | Allow empty password | bool | true, false | false
BITNAMI_DEBUG | Specify if debug values should be set | bool | true, false | false
REDIS_PASSWORD | Redis password | string | Any | Random UUID
REDIS_PORT | Redis port number | int | Any | 6379
REDIS_REPLICATION_MODE | Redis replication mode | string | master, slave | master
REDIS_TLS_ENABLED | Enable TLS | bool | true, false | false

9.2.3 - Data Bridge

The technical documentation of the data-bridge microservice, which transfers data between two Kafka or MQTT brokers, transforming the data following the UNS data model.

Kubernetes resources

  • Deployment: united-manufacturing-hub-databridge-0
  • Secret: united-manufacturing-hub-databridge-mqtt-secrets

Configuration

You shouldn’t need to configure the environment variables directly, as they are set by the Helm chart. If you need to change them, you can do so by editing the values in the Helm chart.

Environment variables

Variable name | Description | Type | Allowed values | Default
BROKER_A | The address of the source broker. | string | Any | ""
BROKER_B | The address of the destination broker. | string | Any | ""
LOGGING_LEVEL | The logging level to use. | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MESSAGE_LRU_SIZE | The size of the LRU cache used to avoid message looping. Only used with MQTT brokers | int | Any | 1000000
MICROSERVICE_NAME | Name of the microservice. Used for tracing. | string | Any | united-manufacturing-hub-databridge
MQTT_ENABLE_TLS | Whether to enable TLS for the MQTT connection. | bool | true, false | false
MQTT_PASSWORD | The password to use for the MQTT connection. | string | Any | ""
PARTITIONS | The number of partitions to use for the destination topic. Only used if the destination broker is Kafka. | int | Greater than 0 | 6
POD_NAME | Name of the pod. Used for tracing. | string | Any | united-manufacturing-hub-databridge
REPLICATION_FACTOR | The replication factor to use for the destination topic. Only used if the destination broker is Kafka. | int | Odd integer | 3
SERIAL_NUMBER | Serial number of the cluster. Used for tracing. | string | Any | default
SPLIT | The nth part of the topic to use as the message key. If the topic is umh/v1/acme/anytown/foo/bar, and SPLIT is 4, then the message key will be foo.bar | int | Greater than 3 | -1
TOPIC | The topic to subscribe to. Can be in either MQTT or Kafka form. Wildcards (# for MQTT, .* for Kafka) are allowed in order to subscribe to multiple topics | string | Any | ""

9.2.4 - Database

The technical documentation of the database microservice, which stores the data of the application.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-timescaledb
  • Service:
    • Internal ClusterIP for the replicas: united-manufacturing-hub-replica at port 5432
    • Internal ClusterIP for the config: united-manufacturing-hub-config at port 8008
    • External LoadBalancer: united-manufacturing-hub at port 5432
  • ConfigMap:
    • Patroni: united-manufacturing-hub-timescaledb-patroni
    • Post init: timescale-post-init
    • Postgres BackRest: united-manufacturing-hub-timescaledb-pgbackrest
    • Scripts: united-manufacturing-hub-timescaledb-scripts
  • Secret:
    • Certificate: united-manufacturing-hub-certificate
    • Patroni credentials: united-manufacturing-hub-credentials
    • Users passwords: timescale-post-init-pw
  • PersistentVolumeClaim:
    • Data: storage-volume-united-manufacturing-hub-timescaledb-0
    • WAL-E: wal-volume-united-manufacturing-hub-timescaledb-0

Configuration

There is only one parameter that usually needs to be changed: the password used to connect to the database. To do so, set the value of the db_password key in the _000_commonConfig.datastorage section of the Helm chart values file.
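
For example, a minimal values override for the database password could look like this (the password shown is only a placeholder):

_000_commonConfig:
  datastorage:
    # replace with your own secure password
    db_password: changeme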

Environment variables

Variable name | Description | Type | Allowed values | Default
BOOTSTRAP_FROM_BACKUP | Whether to bootstrap the database from a backup or not. | int | 0, 1 | 0
PATRONI_KUBERNETES_LABELS | The labels to use to find the pods of the StatefulSet. | string | Any | {app: united-manufacturing-hub-timescaledb, cluster-name: united-manufacturing-hub, release: united-manufacturing-hub}
PATRONI_KUBERNETES_NAMESPACE | The namespace in which the StatefulSet is deployed. | string | Any | united-manufacturing-hub
PATRONI_KUBERNETES_POD_IP | The IP address of the pod. | string | Any | Random IP
PATRONI_KUBERNETES_PORTS | The ports to use to connect to the pods. | string | Any | [{"name": "postgresql", "port": 5432}]
PATRONI_NAME | The name of the pod. | string | Any | united-manufacturing-hub-timescaledb-0
PATRONI_POSTGRESQL_CONNECT_ADDRESS | The address to use to connect to the database. | string | Any | $(PATRONI_KUBERNETES_POD_IP):5432
PATRONI_POSTGRESQL_DATA_DIR | The directory where the database data is stored. | string | Any | /var/lib/postgresql/data
PATRONI_REPLICATION_PASSWORD | The password to use to connect to the database as a replica. | string | Any | Random 16 characters
PATRONI_REPLICATION_USERNAME | The username to use to connect to the database as a replica. | string | Any | standby
PATRONI_RESTAPI_CONNECT_ADDRESS | The address to use to connect to the REST API. | string | Any | $(PATRONI_KUBERNETES_POD_IP):8008
PATRONI_SCOPE | The name of the cluster. | string | Any | united-manufacturing-hub
PATRONI_SUPERUSER_PASSWORD | The password to use to connect to the database as the superuser. | string | Any | Random 16 characters
PATRONI_admin_OPTIONS | The options to use for the admin user. | string | Comma separated list of options | createrole,createdb
PATRONI_admin_PASSWORD | The password to use to connect to the database as the admin user. | string | Any | Random 16 characters
PGBACKREST_CONFIG | The path to the configuration file for Postgres BackRest. | string | Any | /etc/pgbackrest/pgbackrest.conf
PGDATA | The directory where the database data is stored. | string | Any | $(PATRONI_POSTGRESQL_DATA_DIR)
PGHOST | The directory of the running database | string | Any | /var/run/postgresql

9.2.5 - Factoryinsight

The technical documentation of the Factoryinsight microservice, which exposes a set of APIs to access the data from the database.

Kubernetes resources

  • Deployment: united-manufacturing-hub-factoryinsight-deployment
  • Service:
  • Secret: factoryinsight-secret

Configuration

You shouldn’t need to configure Factoryinsight manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the factoryinsight section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | ""
CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | ""
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DRY_RUN | If enabled, data won't be stored in the database | bool | true, false | false
FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID
FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight
INSECURE_NO_AUTH | If enabled, no authentication is required for the REST API. Not recommended for production | bool | true, false | false
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinsight
POSTGRES_DATABASE | Specifies the database name to use | string | Any | factoryinsight
POSTGRES_HOST | Specifies the database DNS name or IP address | string | Any | united-manufacturing-hub
POSTGRES_PASSWORD | Specifies the database password to use | string | Any | changeme
POSTGRES_PORT | Specifies the database port | int | Valid port number | 5432
POSTGRES_USER | Specifies the database user to use | string | Any | factoryinsight
REDIS_PASSWORD | Password to access the redis sentinel | string | Any | Random UUID
REDIS_URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 2

API documentation

9.2.6 - Grafana

The technical documentation of the grafana microservice, which is a web application that provides visualization and analytics capabilities.

Kubernetes resources

  • Deployment: united-manufacturing-hub-grafana
  • Service:
    • External LoadBalancer: united-manufacturing-hub-grafana at port 8080
  • ConfigMap: united-manufacturing-hub-grafana
  • Secret: grafana-secret
  • PersistentVolumeClaim: united-manufacturing-hub-grafana

Configuration

Grafana is configured through its user interface. The default credentials are found in the grafana-secret Secret.

The Grafana installation that is provided by the United Manufacturing Hub is shipped with a set of preinstalled plugins:

  • ACE.SVG by Andrew Rodgers
  • Button Panel by CloudSpout LLC
  • Button Panel by UMH Systems Gmbh
  • Discrete by Natel Energy
  • Dynamic Text by Marcus Olsson
  • FlowCharting by agent
  • Pareto Chart by isaozler
  • Pie Chart (old) by Grafana Labs
  • Timepicker Buttons Panel by williamvenner
  • UMH Datasource by UMH Systems Gmbh
  • UMH Datasource v2 by UMH Systems Gmbh
  • Untimely by factry
  • Worldmap Panel by Grafana Labs

Environment variables

Variable name | Description | Type | Allowed values | Default
FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | string | Any | Base64 encoded string
FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | string | Any | united-manufacturing-hub-factoryinsight-service
FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | string | Any | factoryinsight
FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | string | Any | Random UUID
GF_PATHS_DATA | The path where Grafana will store its data | string | Any | /var/lib/grafana/data
GF_PATHS_LOGS | The path where Grafana will store its logs | string | Any | /var/log/grafana
GF_PATHS_PLUGINS | The path where Grafana will store its plugins | string | Any | /var/lib/grafana/plugins
GF_PATHS_PROVISIONING | The path where Grafana will store its provisioning configuration | string | Any | /etc/grafana/provisioning
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource,umh-factoryinput-panel,umh-v2-datasource
GF_SECURITY_ADMIN_PASSWORD | The password of the admin user | string | Any | Random UUID
GF_SECURITY_ADMIN_USER | The username of the admin user | string | Any | admin

9.2.7 - Kafka Bridge

The technical documentation of the kafka-bridge microservice, which acts as a communication bridge between two Kafka brokers.

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkabridge
  • Secret:
    • Local broker: united-manufacturing-hub-kafkabridge-secrets-local
    • Remote broker: united-manufacturing-hub-kafkabridge-secrets-remote

Configuration

You can configure the kafka-bridge microservice by setting the following values in the _000_commonConfig.kafkaBridge section of the Helm chart values file.

  kafkaBridge:
    enabled: true
    remotebootstrapServer: ""
    topicmap:
      - bidirectional: false
        name: HighIntegrity
        send_direction: to_remote
        topic: ^ia\..+\..+\..+\.((addMaintenanceActivity)|(addOrder)|(addParentToChild)|(addProduct)|(addShift)|(count)|(deleteShiftByAssetIdAndBeginTimestamp)|(deleteShiftById)|(endOrder)|(modifyProducedPieces)|(modifyState)|(productTag)|(productTagString)|(recommendation)|(scrapCount)|(startOrder)|(state)|(uniqueProduct)|(scrapUniqueProduct))$
      - bidirectional: false
        name: HighThroughput
        send_direction: to_remote
        topic: ^ia\..+\..+\..+\.(processValue).*$

Topic Map schema

The topic map is a list of objects; each object represents a topic (or a set of topics) that should be forwarded. The following JSON schema describes the structure of the topic map:

{
    "$schema": "http://json-schema.org/draft-07/schema",
    "type": "array",
    "title": "Kafka Topic Map",
    "description": "This schema validates valid Kafka topic maps.",
    "default": [],
    "additionalItems": true,
    "items": {
        "$id": "#/items",
        "anyOf": [
            {
                "$id": "#/items/anyOf/0",
                "type": "object",
                "title": "Unidirectional Kafka Topic Map with send direction",
                "description": "This schema validates entries, that are unidirectional and have a send direction.",
                "default": {},
                "examples": [
                    {
                        "name": "HighIntegrity",
                        "topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
                        "bidirectional": false,
                        "send_direction": "to_remote"
                    }
                ],
                "required": [
                    "name",
                    "topic",
                    "bidirectional",
                    "send_direction"
                ],
                "properties": {
                    "name": {
                        "$id": "#/items/anyOf/0/properties/name",
                        "type": "string",
                        "title": "Entry Name",
                        "description": "Name of the map entry, only used for logging & tracing.",
                        "default": "",
                        "examples": [
                            "HighIntegrity"
                        ]
                    },
                    "topic": {
                        "$id": "#/items/anyOf/0/properties/topic",
                        "type": "string",
                        "title": "The topic to listen on",
                        "description": "The topic to listen on, this can be a regular expression.",
                        "default": "",
                        "examples": [
                            "^ia\\..+\\..+\\..+\\.(?!processValue).+$"
                        ]
                    },
                    "bidirectional": {
                        "$id": "#/items/anyOf/0/properties/bidirectional",
                        "type": "boolean",
                        "title": "Is the transfer bidirectional?",
                        "description": "When set to true, the bridge will consume and produce from both brokers",
                        "default": false,
                        "examples": [
                            false
                        ]
                    },
                    "send_direction": {
                        "$id": "#/items/anyOf/0/properties/send_direction",
                        "type": "string",
                        "title": "Send direction",
                        "description": "Can be either 'to_remote' or 'to_local'",
                        "default": "",
                        "examples": [
                            "to_remote",
                            "to_local"
                        ]
                    }
                },
                "additionalProperties": true
            },
            {
                "$id": "#/items/anyOf/1",
                "type": "object",
                "title": "Bi-directional Kafka Topic Map with send direction",
                "description": "This schema validates entries, that are bi-directional.",
                "default": {},
                "examples": [
                    {
                        "name": "HighIntegrity",
                        "topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
                        "bidirectional": true
                    }
                ],
                "required": [
                    "name",
                    "topic",
                    "bidirectional"
                ],
                "properties": {
                    "name": {
                        "$id": "#/items/anyOf/1/properties/name",
                        "type": "string",
                        "title": "Entry Name",
                        "description": "Name of the map entry, only used for logging & tracing.",
                        "default": "",
                        "examples": [
                            "HighIntegrity"
                        ]
                    },
                    "topic": {
                        "$id": "#/items/anyOf/1/properties/topic",
                        "type": "string",
                        "title": "The topic to listen on",
                        "description": "The topic to listen on, this can be a regular expression.",
                        "default": "",
                        "examples": [
                            "^ia\\..+\\..+\\..+\\.(?!processValue).+$"
                        ]
                    },
                    "bidirectional": {
                        "$id": "#/items/anyOf/1/properties/bidirectional",
                        "type": "boolean",
                        "title": "Is the transfer bidirectional?",
                        "description": "When set to true, the bridge will consume and produce from both brokers",
                        "default": false,
                        "examples": [
                            true
                        ]
                    }
                },
                "additionalProperties": true
            }
        ]
    },
    "examples": [
   {
      "name":"HighIntegrity",
      "topic":"^ia\\..+\\..+\\..+\\.(?!processValue).+$",
      "bidirectional":true
   },
   {
      "name":"HighThroughput",
      "topic":"^ia\\..+\\..+\\..+\\.(processValue).*$",
      "bidirectional":false,
      "send_direction":"to_remote"
   }
]
}

Environment variables

Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library, do not enable in production | string | true, false | false
KAFKA_GROUP_ID_SUFFIX | Identifier appended to the kafka group ID, usually a serial number | string | Any | default
KAFKA_SSL_KEY_PASSWORD_LOCAL | Password for the SSL key of the local broker | string | Any | ""
KAFKA_SSL_KEY_PASSWORD_REMOTE | Password for the SSL key of the remote broker | string | Any | ""
KAFKA_TOPIC_MAP | A JSON map of the Kafka topics that should be forwarded | JSON | See Topic Map schema | {}
KAKFA_USE_SSL | Enables the use of SSL for the kafka connection | string | true, false | false
LOCAL_KAFKA_BOOTSTRAP_SERVER | URL of the local kafka broker, port is required | string | Any valid URL | united-manufacturing-hub-kafka:9092
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafka-bridge
REMOTE_KAFKA_BOOTSTRAP_SERVER | URL of the remote kafka broker | string | Any valid URL | ""
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

9.2.8 - Kafka Broker

The technical documentation of the kafka-broker microservice, which handles the communication between the microservices.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-kafka
  • Service:
    • Internal ClusterIP (headless): united-manufacturing-hub-kafka
    • External NodePort: united-manufacturing-hub-kafka-external at port 9094 for the Kafka API listener, port 9644 for the Admin API listener, port 8083 for the HTTP Proxy listener, and port 8081 for the Schema Registry listener.
  • ConfigMap: united-manufacturing-hub-kafka
  • Secret: united-manufacturing-hub-kafka-sts-lifecycle
  • PersistentVolumeClaim: datadir-united-manufacturing-hub-kafka-0

Configuration

You shouldn’t need to configure the Kafka broker manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redpanda section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
HOST_IP | The IP address of the host machine. | string | Any | Random IP
POD_IP | The IP address of the pod. | string | Any | Random IP
SERVICE_NAME | The name of the service. | string | Any | united-manufacturing-hub-kafka

9.2.9 - Kafka Console

The technical documentation of the kafka-console microservice, which provides a GUI to interact with the Kafka broker.

Kubernetes resources

  • Deployment: united-manufacturing-hub-console
  • Service:
    • External LoadBalancer: united-manufacturing-hub-console at port 8090
  • ConfigMap: united-manufacturing-hub-console
  • Secret: united-manufacturing-hub-console

Configuration

Environment variables

Variable name | Description | Type | Allowed values | Default
LOGIN_JWTSECRET | The secret used to authenticate the communication to the backend. | string | Any | Random string

9.2.10 - Kafka to Postgresql

The technical documentation of the kafka-to-postgresql microservice, which consumes messages from a Kafka broker and writes them in a PostgreSQL database.

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkatopostgresql
  • Secret: united-manufacturing-hub-kafkatopostgresql-certificates

Configuration

You shouldn’t need to configure kafka-to-postgresql manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the kafkatopostgresql section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DRY_RUN | If set to true, the microservice will not write to the database | bool | true, false | false
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD | Key password to decode the SSL private key | string | Any | ""
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MEMORY_REQUEST | Memory request for the message cache | string | Any | 50Mi
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafkatopostgresql
POSTGRES_DATABASE | The name of the PostgreSQL database | string | Any | factoryinsight
POSTGRES_HOST | Hostname of the PostgreSQL database | string | Any | united-manufacturing-hub
POSTGRES_PASSWORD | The password to use for PostgreSQL connections | string | Any | changeme
POSTGRES_SSLMODE | If set to true, the PostgreSQL connection will use SSL | string | Any | require
POSTGRES_USER | The username to use for PostgreSQL connections | string | Any | factoryinsight

9.2.11 - Kafka to Postgresql v2

The technical documentation of the kafka-to-postgresql-v2 microservice, which consumes messages from a Kafka broker and writes them in a PostgreSQL database by following the UMH data model v2.

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkatopostgresqlv2

Configuration

You shouldn’t need to configure kafka-to-postgresql-v2 manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the kafkatopostgresqlv2 section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
KAFKA_BROKERS | Specifies the URLs and required ports of Kafka brokers using the Kafka protocol. | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_HTTP_BROKERS | Specifies the URLs and required ports of Kafka brokers using the HTTP protocol. | string | Any | united-manufacturing-hub-kafka:8082
LOGGING_LEVEL | Determines the verbosity of the logging output, primarily used for development purposes. | string | PRODUCTION, DEVELOPMENT | PRODUCTION
POSTGRES_DATABASE | Designates the name of the target PostgreSQL database. | string | Any | umh_v2
POSTGRES_HOST | Identifies the hostname for the PostgreSQL database server. | string | Any | united-manufacturing-hub
POSTGRES_LRU_CACHE_SIZE | Determines the size of the Least Recently Used (LRU) cache for asset ID storage. This cache is optimized for minimal memory usage. | string | Any | 1000
POSTGRES_PASSWORD | Sets the password for accessing the PostgreSQL database | string | Any | changemetoo
POSTGRES_PORT | Specifies the network port for the PostgreSQL database server. | string | Any | 5432
POSTGRES_SSL_MODE | Configures the PostgreSQL connection to use SSL if set to 'true'. | string | Any | require
POSTGRES_USER | Defines the username for PostgreSQL database access. | string | Any | kafkatopostgresqlv2
VALUE_CHANNEL_SIZE | Sets the size of the channel for message storage prior to insertion. This parameter is significant for memory consumption | string | Any | 10000
WORKER_MULTIPLIER | This multiplier affects the number of workers converting Kafka messages into the PostgreSQL schema. Total workers = cores * multiplier. | string | Any | 16

9.2.12 - MQTT Broker

The technical documentation of the mqtt-broker microservice, which forwards MQTT messages between the other microservices.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-hivemqce
  • Service:
    • Internal ClusterIP:
      • HiveMQ local: united-manufacturing-hub-hivemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
      • VerneMQ (for backwards compatibility): united-manufacturing-hub-vernemq at port 1883 (MQTT) and 8883 (MQTT over TLS)
      • VerneMQ local (for backwards compatibility): united-manufacturing-hub-vernemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
    • External LoadBalancer: united-manufacturing-hub-mqtt at port 1883 (MQTT) and 8883 (MQTT over TLS)
  • ConfigMap:
    • Configuration: united-manufacturing-hub-hivemqce-hive
    • Credentials: united-manufacturing-hub-hivemqce-extension
  • Secret: united-manufacturing-hub-hivemqce-secret-keystore
  • PersistentVolumeClaim:
    • Data: united-manufacturing-hub-hivemqce-claim-data
    • Extensions: united-manufacturing-hub-hivemqce-claim-extensions

Configuration

Most of the configuration is done through the XML files in the ConfigMaps. The default configuration should be sufficient for most use cases.

The HiveMQ installation of the United Manufacturing Hub comes with these extensions:

If you want to add more extensions, or to change the configuration, visit the HiveMQ documentation.

Environment variables

Variable name | Description | Type | Allowed values | Default
HIVEMQ_ALLOW_ALL_CLIENTS | Whether to allow all clients to connect to the broker | bool | true, false | true

9.2.13 - MQTT Kafka Bridge

The technical documentation of the mqtt-kafka-bridge microservice, which transfers messages from MQTT brokers to Kafka Brokers and vice versa.

Kubernetes resources

  • Deployment: united-manufacturing-hub-mqttkafkabridge
  • Secret:
    • Kafka: united-manufacturing-hub-mqttkafkabridge-kafka-secrets
    • MQTT: united-manufacturing-hub-mqttkafkabridge-mqtt-secrets

Configuration

You shouldn’t need to configure mqtt-kafka-bridge manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the mqttkafkabridge section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
INSECURE_SKIP_VERIFY | Skip TLS certificate verification | bool | true, false | true
KAFKA_BASE_TOPIC | The Kafka base topic | string | Any | ia
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker used, port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_LISTEN_TOPIC | Kafka topic to subscribe to. Accepts regex values | string | Any | ^ia.+
KAFKA_SENDER_THREADS | Number of threads used to send messages to Kafka | int | Any | 1
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MESSAGE_LRU_SIZE | Size of the LRU cache used to store messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-mqttkafkabridge
MQTT_BROKER_URL | The MQTT broker URL | string | Any | united-manufacturing-hub-mqtt:1883
MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
MQTT_SENDER_THREADS | Number of threads used to send messages to MQTT | int | Any | 1
MQTT_TOPIC | MQTT topic to subscribe to. Accepts regex values | string | Any | ia/#
POD_NAME | Name of the pod. Used for tracing | string | Any | united-manufacturing-hub-mqttkafkabridge-Random-ID
RAW_MESSSAGE_LRU_SIZE | Size of the LRU cache used to store raw messages. This is used to prevent duplicate messages from being sent to Kafka. | int | Any | 100000
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

9.2.14 - MQTT Simulator

The technical documentation of the iotsensorsmqtt microservice, which simulates sensors sending data to the MQTT broker.

Kubernetes resources

  • Deployment: united-manufacturing-hub-iotsensorsmqtt
  • ConfigMap: united-manufacturing-hub-iotsensors-mqtt

Configuration

You can change the configuration of the microservice by updating the config.json file in the ConfigMap.

9.2.15 - MQTT to Postgresql

The technical documentation of the mqtt-to-postgresql microservice, which consumes messages from an MQTT broker and writes them in a PostgreSQL database.

9.2.16 - Node-RED

The technical documentation of the nodered microservice, which wires together hardware devices, APIs and online services.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-nodered
  • Service:
    • External LoadBalancer: united-manufacturing-hub-nodered-service at port 1880
  • ConfigMap:
    • Configuration: united-manufacturing-hub-nodered-config
    • Flows: united-manufacturing-hub-nodered-flows
  • Secret: united-manufacturing-hub-nodered-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-nodered-claim

Configuration

You can enable the nodered microservice and decide if you want to use the default flows in the _000_commonConfig.dataprocessing.nodered section of the Helm chart values.
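
As an illustration, enabling Node-RED in that section could look like the sketch below; the exact key names (enabled, defaultFlows) are assumptions, so verify them against the Helm chart values file:

_000_commonConfig:
  dataprocessing:
    nodered:
      enabled: true
      # whether to install the preconfigured default flows
      defaultFlows: false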

All the other values are set by default and you can find them in the Danger Zone section of the Helm chart values.

Environment variables

Variable name | Description | Type | Allowed values | Default
NODE_RED_ENABLE_SAFE_MODE | Enable safe mode, useful in case of broken flows | boolean | true, false | false
TZ | The timezone used by Node-RED | string | Any | Berlin/Europe

9.2.17 - OPCUA Simulator

The technical documentation of the opcua-simulator microservice, which simulates OPCUA devices.

Kubernetes resources

  • Deployment: united-manufacturing-hub-opcuasimulator-deployment
  • Service:
    • External LoadBalancer: united-manufacturing-hub-opcuasimulator-service at port 46010
  • ConfigMap: united-manufacturing-hub-opcuasimulator-config

Configuration

You can change the configuration of the microservice by updating the config.json file in the ConfigMap.

9.2.18 - PackML Simulator

The technical documentation of the packml-simulator microservice, which simulates a manufacturing line using PackML over MQTT.

Kubernetes resources

  • Deployment: united-manufacturing-hub-packmlmqttsimulator

Configuration

You shouldn’t need to configure PackML Simulator manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the packmlmqttsimulator section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
AREA | ISA-95 area name of the line | string | Any | DefaultArea
LINE | ISA-95 line name of the line | string | Any | DefaultProductionLine
MQTT_PASSWORD | Password for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | INSECURE_INSECURE_INSECURE
MQTT_URL | Server URL of the MQTT server | string | Any | mqtt://united-manufacturing-hub-mqtt:1883
MQTT_USERNAME | Username for the MQTT broker. Leave empty if the server does not manage permissions | string | Any | PACKMLSIMULATOR
SITE | ISA-95 site name of the line | string | Any | testLocation

9.2.19 - Sensorconnect

The technical documentation of the sensorconnect microservice, which reads data from sensors and sends them to the MQTT or Kafka broker.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-sensorconnect
  • Secret:
    • Kafka: united-manufacturing-hub-sensorconnect-kafka-secrets
    • MQTT: united-manufacturing-hub-sensorconnect-mqtt-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-sensorconnect-claim

Configuration

You can configure the IP range to scan for gateways, and which message broker to use, by setting the values of the parameters in the _000_commonConfig.datasources.sensorconnect section of the Helm chart values file.
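
A minimal sketch of that section is shown below; the key names enabled and iprange are assumptions, so verify them against the Helm chart values file:

_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true
      # IP range to scan for IO-Link gateways, in CIDR notation
      iprange: 192.168.10.1/24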

The default values of the other parameters are usually good for most use cases, but you can change them in the Danger Zone section of the Helm chart values file.

If you want to increase the polling speed of the sensors, you can do so by setting the sensorconnect.lowerPollingTime parameter to a lower value. This can cause the ifm IO-Link master to become unresponsive if its firmware is not up to date.

Environment variables

Variable name | Description | Type | Allowed values | Default
ADDITIONAL_SLEEP_TIME_PER_ACTIVE_PORT_MS | Additional sleep time between pollings for each active port | float | Any | 0.0
ADDITIONAL_SLOWDOWN_MAP | JSON map of values that allows you to slow down and speed up the polling time of specific sensors | JSON | See below | []
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DEVICE_FINDER_TIMEOUT_SEC | HTTP timeout in seconds for finding new devices | int | Any | 1
DEVICE_FINDER_TIME_SEC | Time interval in seconds for finding new devices | int | Any | 20
IODD_FILE_PATH | Filesystem path where to store IODD files | string | Any valid Unix path | /ioddfiles
IP_RANGE | The IP range to scan for new sensors | string | Any valid IP in CIDR notation | 192.168.10.1/24
KAFKA_BOOTSTRAP_SERVER | URL of the Kafka broker. Port is required | string | Any | united-manufacturing-hub-kafka:9092
KAFKA_SSL_KEY_PASSWORD | The encrypted password of the SSL key. If empty, no password is used | string | Any | ""
KAFKA_USE_SSL | Set to true to use SSL encryption for the connection to the Kafka broker | string | true, false | false
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
LOWER_POLLING_TIME_MS | Time in milliseconds to define the lower bound of time between sensor polling | int | Any | 100
MAX_SENSOR_ERROR_COUNT | Amount of errors before a sensor is temporarily disabled | int | Any | 50
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect
MQTT_BROKER_URL | URL of the MQTT broker. Port is required | string | Any | united-manufacturing-hub-mqtt:1883
MQTT_CERTIFICATE_NAME | Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption | string | USE_TLS, NO_CERT | USE_TLS
MQTT_PASSWORD | Password for the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
POD_NAME | Name of the pod (used for tracing) | string | Any | united-manufacturing-hub-sensorconnect-0
POLLING_SPEED_STEP_DOWN_MS | Time in milliseconds subtracted from the polling interval after a successful polling | int | Any | 1
POLLING_SPEED_STEP_UP_MS | Time in milliseconds added to the polling interval after a failed polling | int | Any | 20
SENSOR_INITIAL_POLLING_TIME_MS | Amount of time in milliseconds before starting to request sensor data. Must be higher than LOWER_POLLING_TIME_MS | int | Any | 100
SUB_TWENTY_MS | Set to 1 to allow LOWER_POLLING_TIME_MS of under 20 ms. This is not recommended as it might lead to the gateway becoming unresponsive until a manual reboot | int | 0, 1 | 0
TEST | If enabled, the microservice will use a test IODD file from the filesystem to use with a mocked sensor. Only useful for development. | string | true, false | false
TRANSMITTERID | Serial number of the cluster (used for tracing) | string | Any | default
UPPER_POLLING_TIME_MS | Time in milliseconds to define the upper bound of time between sensor polling | int | Any | 1000
USE_KAFKA | If enabled, uses Kafka as a message broker | string | true, false | true
USE_MQTT | If enabled, uses MQTT as a message broker | string | true, false | false

Slowdown map

The ADDITIONAL_SLOWDOWN_MAP environment variable allows you to slow down and speed up the polling time of specific sensors. It is a JSON array of values, with the following structure:

[
  {
    "serialnumber": "000200610104",
    "slowdown_ms": -10
  },
  {
    "url": "http://192.168.0.13",
    "slowdown_ms": 20
  },
  {
    "productcode": "AL13500",
    "slowdown_ms": 20.01
  }
]

9.2.20 - Tulip Connector

The technical documentation of the tulip-connector microservice, which exposes internal APIs, such as factoryinsight, to the internet. Specifically designed to communicate with Tulip.

Kubernetes resources

  • Deployment: united-manufacturing-hub-tulip-connector-deployment
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-tulip-connector-service at port 80
  • Ingress: united-manufacturing-hub-tulip-connector-ingress

Configuration

You can enable the tulip-connector and set the domain for the ingress by editing the values in the _000_commonConfig.tulipconnector section of the Helm chart values file.
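
For example, such a configuration could look like the following sketch; the key names enabled and domain are assumptions, so verify them against the Helm chart values file:

_000_commonConfig:
  tulipconnector:
    enabled: true
    # the domain used by the Ingress; replace with your own
    domain: tulip-connector.example.com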

Environment variables

Variable name | Description | Type | Allowed values | Default
FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID
FACTORYINSIGHT_URL | Specifies the URL of the factoryinsight microservice. | string | Any | http://united-manufacturing-hub-factoryinsight-service
FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight
MODE | Specifies the mode that the service will run in. Change only during development | string | dev, prod | prod

API documentation

10 - Development

These pages describe advanced topics for developers.

10.1 - Contribute

Learn how to contribute to the United Manufacturing Hub project.

Welcome

Welcome to the United Manufacturing Hub project! We’re excited that you want to contribute to the project. The following documents cover some important aspects of contributing to the United Manufacturing Hub or its documentation.

UMH Systems welcomes improvements from all contributors, new and experienced!

The first place to start is the Getting Started With Contributing page. It provides a high-level overview of the contribution process.

10.1.1 - Getting Started With Contributing

A small list of things that you should read and be familiar with before you get started with contributing.

Welcome

This document is the single source of truth for how to contribute to the code base. Feel free to browse the open issues and file new ones, all feedback is welcome!

Prerequisites

Before you begin contributing, you should first complete the following prerequisites:

Create a GitHub account

Before you get started, you will need to sign up for a GitHub user account.

Sign the Contributor License Agreement

Before you can contribute to United Manufacturing Hub, you will need to sign the Contributor License Agreement.

Code of Conduct

Please make sure to read and observe the Code of Conduct.

Setting up your development environment

The development environment changes depending on the type of contribution you want to make.

If you plan to contribute documentation changes, you can use the GitHub UI to edit the files. Otherwise, you can follow the instructions in the documentation to set up your environment.

If you plan to contribute code changes, review the developer resources page for how to set up your environment.

Find something to work on

The first step to getting started with contributing to United Manufacturing Hub is to find something to work on. Help is always welcome, and no contribution is too small!

Here are some things you can do today to get started contributing:

  • Help improve the United Manufacturing Hub documentation
  • Clarify code, variables, or functions that can be renamed or commented on
  • Write test coverage
  • If the above suggestions don’t appeal to you, you can browse the issues labeled as a good first issue to see who is looking for help.

Look at the issue section of any of our repositories to find issues that are currently open. Don’t be afraid to ask questions if you are interested in contributing to a specific issue. When you find something you want to work on, you can assign the issue to yourself.

Make your changes

Once you have found something to work on, you can start making your changes. Follow the contributing guidelines.

Open a pull request

Once you have made your changes, you can submit them for review. You can do this by creating a pull request (PR) against the main branch of the repository.

Code review

Once you have submitted your changes, a maintainer will review your changes and provide feedback.

As a community we believe in the value of code review for all contributions. Code review increases both the quality and readability of our codebase, which in turn produces high quality software.

See the pull request documentation for more information on code review.

Expect reviewers to request that you avoid common go style mistakes in your PRs.

Best practices

  • Write clear and meaningful git commit messages.
  • If the PR will completely fix a specific issue, include fixes #123 in the PR body (where 123 is the specific issue number the PR will fix). This will automatically close the issue when the PR is merged.
  • Make sure you don’t include @mentions or fixes keywords in your git commit messages. These should be included in the PR body instead.
  • When you make a PR for a small change (such as fixing a typo, style change, or grammar fix), please squash your commits so that we can maintain a cleaner git history.
  • Make sure you include a clear and detailed PR description explaining the reasons for the changes, and ensuring there is sufficient information for the reviewer to understand your PR.
  • Additional Readings:

Testing

Testing is the responsibility of all contributors. It is important to ensure that all code is tested and that all tests pass. This ensures that the code base is stable and reliable.

There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test:

  • Unit: these confirm that a particular function behaves as intended. Golang includes a native ability for unit testing via the testing package. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run by any developer on any OS.
  • Integration: these tests cover interactions of package components or interactions between UMH components and some external system. An example would be testing whether a piece of code can correctly store data in the database. Running these tests can require the developer to set up additional functionality on their development system.
  • End-to-end (“e2e”): these are broad tests of overall system behavior and coherence. These are more complicated as they require a functional Kubernetes cluster. There are some e2e tests running in pipelines, and if your changes require e2e tests, you will need to add them to the pipeline. You can find more information about the CI pipelines in the CI documentation.

Documentation

Documentation is an important part of any project. It is important to ensure that all code is documented and that all documentation is up to date.

Learn more about how to contribute to the documentation on the documentation contributor guide.

10.1.2 - Contributing new content

Learn more in-depth about how to contribute new content to the United Manufacturing Hub.

10.1.2.1 - GitHub Workflow

This document is an overview of the GitHub workflow used by the United Manufacturing Hub project. It includes tips and suggestions on keeping your local environment in sync with upstream and how to maintain good commit hygiene.

1. Fork in the cloud

  1. Go to the repository page.
  2. Click Fork button (top right) to establish a cloud-based fork.

2. Clone fork to local storage

Per Go’s workspace instructions, place United Manufacturing Hub’s code on your GOPATH using the following cloning procedure.

In your shell, define a local working directory as working_dir. If your GOPATH has multiple paths, pick just one and use it instead of $GOPATH. You must follow exactly this pattern, neither $GOPATH/src/github.com/${your github profile name}/ nor any other pattern will work.

The following instructions assume you are using a bash shell. If you are using a different shell, you will need to adjust the commands accordingly.
export working_dir="$(go env GOPATH)/src/github.com/"

Set user to match your github profile name:

export user=<your github profile name>

Both $working_dir and $user are used in the commands below.

Create your clone:

mkdir -p $working_dir
cd $working_dir
git clone https://github.com/$user/united-manufacturing-hub.git
# or: git clone git@github.com:$user/united-manufacturing-hub.git

cd $working_dir/united-manufacturing-hub
git remote add upstream https://github.com/united-manufacturing-hub/united-manufacturing-hub.git
# or: git remote add upstream git@github.com:united-manufacturing-hub/united-manufacturing-hub.git

# Never push to upstream main
git remote set-url --push upstream no_push

# Confirm that your remotes make sense:
git remote -v

3. Create a Working Branch

Get your local main branch up to date.

cd $working_dir/united-manufacturing-hub
git fetch upstream
git checkout main
git rebase upstream/main

Create your new branch.

git checkout -b myfeature

You may now edit files on the myfeature branch.

4. Keep your branch in sync

You will need to periodically fetch changes from the upstream repository to keep your working branch in sync.

Make sure your local repository is on your working branch and run the following commands to keep it in sync:

git fetch upstream
git rebase upstream/main

Please don’t use git pull instead of the above fetch and rebase. Since git pull executes a merge, it creates merge commits. These make the commit history messy and violate the principle that commits ought to be individually understandable and useful (see below).

You might also consider changing your .git/config file via git config branch.autoSetupRebase always to change the behavior of git pull, or another non-merge option such as git pull --rebase.

5. Commit Your Changes

You will probably want to regularly commit your changes. It is likely that you will go back and edit, build, and test multiple times. After a few cycles of this, you might amend your previous commit.

git commit

6. Push to GitHub

When your changes are ready for review, push your working branch to your fork on GitHub.

git push -f <your_remote_name> myfeature

7. Create a Pull Request

  1. Visit your fork at https://github.com/<user>/united-manufacturing-hub
  2. Click the Compare & Pull Request button next to your myfeature branch.
  3. Check out the pull request process for more details and advice.

Get a code review

Once your pull request has been opened it will be assigned to one or more reviewers. Those reviewers will do a thorough code review, looking for correctness, bugs, opportunities for improvement, documentation and comments, and style.

Commit changes made in response to review comments to the same branch on your fork.

Very small PRs are easy to review. Very large PRs are very difficult to review.

Squash commits

After a review, prepare your PR for merging by squashing your commits.

All commits left on your branch after a review should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process.

Before merging a PR, squash the following kinds of commits:

  • Fixes/review feedback
  • Typos
  • Merges and rebases
  • Work in progress

Aim to have every commit in a PR compile and pass tests independently if you can, but it’s not a requirement. In particular, merge commits must be removed, as they will not pass tests.

To squash your commits, perform an interactive rebase:

  1. Check your git branch:

    git status
    

    The output should be similar to this:

    On branch your-contribution
    Your branch is up to date with 'origin/your-contribution'.
    
  2. Start an interactive rebase using a specific commit hash, or count backwards from your last commit using HEAD~<n>, where <n> represents the number of commits to include in the rebase.

    git rebase -i HEAD~3
    

    The output should be similar to this:

    pick 2ebe926 Original commit
    pick 31f33e9 Address feedback
    pick b0315fe Second unit of work
    
    # Rebase 7c34fc9..b0315ff onto 7c34fc9 (3 commands)
    #
    # Commands:
    # p, pick <commit> = use commit
    # r, reword <commit> = use commit, but edit the commit message
    # e, edit <commit> = use commit, but stop for amending
    # s, squash <commit> = use commit, but meld into previous commit
    # f, fixup <commit> = like "squash", but discard this commit's log message
    
    ...
    
  3. Use a command line text editor to change the word pick to squash for the commits you want to squash, then save your changes and continue the rebase:

    pick 2ebe926 Original commit
    squash 31f33e9 Address feedback
    pick b0315fe Second unit of work
    
    ...
    

    The output after saving changes should look similar to this:

    [detached HEAD 61fdded] Second unit of work
     Date: Thu Mar 5 19:01:32 2020 +0100
     2 files changed, 15 insertions(+), 1 deletion(-)
    
     ...
    
    Successfully rebased and updated refs/heads/master.
    
  4. Force push your changes to your remote branch:

    git push --force
    

For mass automated fixups such as automated doc formatting, use one or more commits for the changes to tooling and a final commit to apply the fixup en masse. This makes reviews easier.

By squashing locally, you control the commit message(s) for your work, and can separate a large PR into logically separate changes. For example: you have a pull request that is code complete and has 24 commits. You rebase this against the same merge base, simplifying the change to two commits. Each of those two commits represents a single logical change and each commit message summarizes what changes. Reviewers see that the set of changes are now understandable, and approve your PR.

Merging a commit

Once you’ve received review and approval and your commits are squashed, your PR is ready for merging.

Merging happens automatically after both a Reviewer and Approver have approved the PR. If you haven’t squashed your commits, they may ask you to do so before approving a PR.

Reverting a commit

In case you wish to revert a commit, use the following instructions.

If you have upstream write access, please refrain from using the Revert button in the GitHub UI for creating the PR, because GitHub will create the PR branch inside the main repository rather than inside your fork.

  • Create a branch and sync it with upstream.

    # create a branch
    git checkout -b myrevert
    
    # sync the branch with upstream
    git fetch upstream
    git rebase upstream/main
    
  • If the commit you wish to revert is a merge commit, use this command:

    # SHA is the hash of the merge commit you wish to revert
    git revert -m 1 <SHA>
    

    If it is a single commit, use this command:

    # SHA is the hash of the single commit you wish to revert
    git revert <SHA>
    
  • This will create a new commit reverting the changes. Push this new commit to your remote.

    git push <your_remote_name> myrevert
    
  • Finally, create a Pull Request using this branch.

10.1.2.2 - Pull Request Process

Explains the process and best practices for submitting a pull request to the United Manufacturing Hub project and its associated sub-repositories. It should serve as a reference for all contributors, and be useful especially to new or infrequent submitters.

This doc explains the process and best practices for submitting a pull request to the United Manufacturing Hub project and its associated sub-repositories. It should serve as a reference for all contributors, and be useful especially to new and infrequent submitters.

Before You Submit a Pull Request

This guide is for contributors who already have a pull request to submit. If you’re looking for information on setting up your developer environment and creating code to contribute to United Manufacturing Hub, or you are a first-time contributor, see the Contributor Guide to get started.

Make sure your pull request adheres to our best practices. These include following project conventions, making small pull requests, and commenting thoroughly. Please read the more detailed section on Best Practices for Faster Reviews at the end of this doc.

The Pull Request Submit Process

The following steps must be completed before a pull request can be merged.

Marking Unfinished Pull Requests

If you want to solicit reviews before the implementation of your pull request is complete, you should hold your pull request to ensure that a maintainer does not merge it prematurely.

There are three methods to achieve this:

  1. You may add the status: in-progress or status: on-hold labels
  2. You may add or remove a WIP or [WIP] prefix to your pull request title
  3. You may open your pull request in a draft state

While any of these methods is acceptable, we recommend using the status: in-progress label.

How the e2e Tests Work

United Manufacturing Hub runs a set of end-to-end tests (e2e tests) on pull requests. You can find an overview of the tests in the CI documentation.

Why was my pull request closed?

Closed pull requests are easy to recreate, and little work is lost by closing a pull request that subsequently needs to be reopened. We want to limit the total number of pull requests in flight to:

  • Maintain a clean project
  • Remove old pull requests that would be difficult to rebase as the underlying code has changed over time
  • Encourage code velocity

Best Practices for Faster Reviews

Most of this section is not specific to United Manufacturing Hub, but it’s good to keep these best practices in mind when you’re making a pull request.

You’ve just had a brilliant idea on how to make United Manufacturing Hub better. Let’s call that idea Feature-X. Feature-X is not even that complicated. You have a pretty good idea of how to implement it. You jump in and implement it, fixing a bunch of stuff along the way. You send your pull request - this is awesome! And it sits. And sits. A week goes by and nobody reviews it. Finally, someone offers a few comments, which you fix up and wait for more review. And you wait. Another week or two go by. This is horrible.

Let’s talk about best practices so your pull request gets reviewed quickly.

Familiarize yourself with project conventions

Is the feature wanted? File a United Manufacturing Hub Enhancement Proposal

Are you sure Feature-X is something the UMH team wants or will accept? Is it implemented to fit with other changes in flight? Are you willing to bet a few days or weeks of work on it?

It’s better to get confirmation beforehand.

Even for small changes, it is often a good idea to gather feedback on an issue you filed, or even simply ask in the UMH Discord channel to invite discussion and feedback from code owners.

KISS, YAGNI, MVP, etc

Sometimes we need to remind each other of core tenets of software design - Keep It Simple, You Aren’t Gonna Need It, Minimum Viable Product, and so on. Adding a feature “because we might need it later” is antithetical to software that ships. Add the things you need NOW and (ideally) leave room for things you might need later - but don’t implement them now.

Smaller Is Better: Small Commits, Small Pull Requests

Small commits and small pull requests get reviewed faster and are more likely to be correct than big ones.

Attention is a scarce resource. If your pull request takes 60 minutes to review, the reviewer’s eye for detail is not as keen in the last 30 minutes as it was in the first. It might not get reviewed at all if it requires a large continuous block of time from the reviewer.

Breaking up commits

Break up your pull request into multiple commits, at logical break points.

Making a series of discrete commits is a powerful way to express the evolution of an idea or the different ideas that make up a single feature. Strive to group logically distinct ideas into separate commits.

For example, if you found that Feature-X needed some prefactoring to fit in, make a commit that JUST does that prefactoring. Then make a new commit for Feature-X.

Strike a balance with the number of commits. A pull request with 25 commits is still very cumbersome to review, so use your best judgment.

Breaking up Pull Requests

Or, going back to our prefactoring example, you could also fork a new branch, do the prefactoring there and send a pull request for that. If you can extract whole ideas from your pull request and send those as pull requests of their own, you can avoid the painful problem of continually rebasing.

Multiple small pull requests are often better than multiple commits. Don’t worry about flooding us with pull requests. We’d rather have 100 small, obvious pull requests than 10 unreviewable monoliths.

We want every pull request to be useful on its own, so use your best judgment on what should be a pull request vs. a commit.

As a rule of thumb, if your pull request is directly related to Feature-X and nothing else, it should probably be part of the Feature-X pull request. If you can explain why you are doing seemingly no-op work (“it makes the Feature-X change easier, I promise”) we’ll probably be OK with it. If you can imagine someone finding value independently of Feature-X, try it as a pull request. (Do not link pull requests by # in a commit description, because GitHub creates lots of spam. Instead, reference other pull requests via the pull request your commit is in.)

Open a Different Pull Request for Fixes and Generic Features

Put changes that are unrelated to your feature into a different pull request

Often, as you are implementing Feature-X, you will find bad comments, poorly named functions, bad structure, weak type-safety, etc.

You absolutely should fix those things (or at least file issues, please) - but not in the same pull request as your feature. Otherwise, your diff will have way too many changes, and your reviewer won’t see the forest for the trees.

Look for opportunities to pull out generic features

For example, if you find yourself touching a lot of modules, think about the dependencies you are introducing between packages. Can some of what you’re doing be made more generic and moved up and out of the Feature-X package? Do you need to use a function or type from an otherwise unrelated package? If so, promote! We have places for hosting more generic code.

Likewise, if Feature-X is similar in form to Feature-W which was checked in last month, and you’re duplicating some tricky stuff from Feature-W, consider prefactoring the core logic out and using it in both Feature-W and Feature-X. (Do that in its own commit or pull request, please.)

Comments Matter

In your code, if someone might not understand why you did something (or you won’t remember why later), comment it. Many code-review comments are about this exact issue.

If you think there’s something pretty obvious that we could follow up on, add a TODO.

Read up on GoDoc - follow those general rules for comments.

Test

Nothing is more frustrating than starting a review, only to find that the tests are inadequate or absent. Very few pull requests can touch the code and NOT touch tests.

If you don’t know how to test Feature-X, please ask! We’ll be happy to help you design things for easy testing or to suggest appropriate test cases.

Squashing

Your reviewer has finally sent you feedback on Feature-X.

Make the fixups, and don’t squash yet. Put them in a new commit, and re-push. That way your reviewer can look at the new commit on its own, which is much faster than starting over.
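
A minimal sketch of that flow; the branch name and commit wording are placeholders, not taken from the project:

# Hypothetical example: address review feedback in a separate commit
git add <files_you_changed>
git commit -m "fix: address review feedback on Feature-X"
git push origin my-feature   # do not squash or force-push yet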

We might still ask you to clean up your commits at the very end for the sake of a more readable history, but don’t do this until asked: typically at the point where the pull request would otherwise be tagged LGTM.

Each commit should have a good title line (<70 characters) and include an additional description paragraph describing in more detail the change intended.

For more information, see squash commits.

General squashing guidelines

  • Sausage => squash

    Do squash when there are several commits to fix bugs in the original commit(s), address reviewer feedback, etc. Really, we only want to see the end state and a single commit message for the whole pull request.

  • Layers => don’t squash

    Don’t squash when there are independent changes layered to achieve a single goal. For instance, writing a code munger could be one commit, applying it could be another, and adding a precommit check could be a third. One could argue they should be separate pull requests, but there’s really no way to test/review the munger without seeing it applied, and there needs to be a precommit check to ensure the munged output doesn’t immediately get out of date.

Commit Message Guidelines

PR comments are not represented in the commit history. Commits and their commit messages are the “permanent record” of the changes being done in your PR, and they should accurately describe both what is being done and why.

Commit messages are composed of two parts: the subject and the body.

The subject is the first line of the commit message and is often the only part that is needed for small or trivial changes. Those may be written as “one-liners” with the git commit -m or --message flag, but only if the what and especially the why can be fully described in those few words.

The commit message body is the portion of text below the subject. Running git commit without the -m flag opens the commit message for editing in your preferred editor. Typing a few further sentences of clarification is a useful investment of time, both for your reviews and for later project maintenance.

This is the commit message subject

Any text here is the commit message body
Some text
Some more text
...

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# On branch example
# Changes to be committed:
#   ...
#

Use the guidelines below to help craft a well-formatted commit message. These can be largely attributed to the previous work of Chris Beams, Tim Pope, Scott Chacon and Ben Straub.

Follow the conventional commit format

The conventional commit format is a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history, which makes it easier to build automated tools on top of it.

The commit message should be structured as follows:

<type>[optional scope]: <description>

[optional body]

[optional footer(s)]

The type and description fields are mandatory, while the scope field is optional. The body and footer are also optional and can be used to provide additional context.

Find more information on the conventional commits website.
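
For example, a commit message following this structure might look like the one below; the scope, feature, and wording are purely illustrative and not taken from the UMH history:

# Hypothetical example (git ignores lines starting with '#'):
feat(grafana): add example OEE dashboard

Add a pre-built dashboard so new users can visualize OEE data without
building panels from scratch. Explain in the body what is being changed
and why, wrapped at 72 characters.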

Try to keep the subject line to 50 characters or less; do not exceed 72 characters

The 50 character limit for the commit message subject line acts as a focus to keep the message summary as concise as possible. It should be just enough to describe what is being done.

The hard limit of 72 characters is to align with the max body size. When viewing the history of a repository with git log, git will pad the body text with additional blank spaces. Wrapping the width at 72 characters ensures the body text will be centered and easily viewable on an 80-column terminal.

Do not end the commit message subject with a period

This is primarily intended as a space-saving measure, but it also helps keep the subject line as short and concise as possible.

Use imperative mood in your commit message subject

Imperative mood can be thought of as “giving a command”; it is a present-tense statement that explicitly describes what is being done.

Good Examples:

  • fix: x error in y
  • feat: add foo to bar
  • Revert commit “baz”
  • docs: update pull request guidelines

Bad Examples:

  • fix: Fixed x error in y
  • feat: Added foo to bar
  • Reverting bad commit “baz”
  • docs: Updating the pull request guidelines
  • Fixing more things

Add a single blank line before the commit message body

Git uses the blank line to determine which portion of the commit message is the subject and body. Text preceding the blank line is the subject, and text following is considered the body.

Wrap the commit message body at 72 characters

The default column width for git is 80 characters. Git will pad the text of the message body with an additional 4 spaces when viewing the git log. This would leave you with 76 available spaces for text, however the text would be “lop-sided”. To center the text for better viewing, the other side is artificially padded with the same amount of spaces, resulting in 72 usable characters per line. Think of them as the margins in a word doc.

Do not use GitHub keywords or (@)mentions within your commit message

GitHub Keywords

Using GitHub keywords followed by a #<issue number> reference within your commit message will automatically apply the do-not-merge/invalid-commit-message label to your PR preventing it from being merged.

Using GitHub keywords in a PR to close issues is considered a convenience, but they can have unexpected side effects when used in a commit message, often closing something they shouldn’t.

Blocked Keywords:

  • close
  • closes
  • closed
  • fix
  • fixes
  • fixed
  • resolve
  • resolves
  • resolved
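
For example (the issue number is purely illustrative), the first subject below would trigger the label because it combines a blocked keyword with a #-reference, while the second avoids it; put the Fixes #... line in the PR description instead:

# Would be blocked (keyword followed by a #<issue number> reference):
fix: resolves #1234 Kafka reconnect loop

# Safe alternative:
fix: prevent Kafka reconnect loop on broker restart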

(@)Mentions

(@)mentions within the commit message will send a notification to that user, and will continually do so each time the PR is updated.

Use the commit message body to explain the what and why of the commit

Commits and their commit messages are the “permanent record” of the changes being done in your PR. Describe why something has changed and what effects it may have; you are providing context to both your reviewer and the next person who has to touch your code.

If something is resolving a bug, or is in response to a specific issue, you can link to it as a reference within the message body itself. These sorts of breadcrumbs become essential when tracking down future bugs or regressions and further help explain why the commit was made.

Additional Resources:

It’s OK to Push Back

Sometimes reviewers make mistakes. It’s OK to push back on changes your reviewer requested. If you have a good reason for doing something a certain way, you are absolutely allowed to debate the merits of a requested change. Both the reviewer and reviewee should strive to discuss these issues in a polite and respectful manner.

You might be overruled, but you might also prevail. We’re pretty reasonable people.

Another phenomenon of open-source projects (where anyone can comment on any issue) is the dog-pile - your pull request gets so many comments from so many people it becomes hard to follow. In this situation, you can ask the primary reviewer (assignee) whether they want you to fork a new pull request to clear out all the comments. You don’t HAVE to fix every issue raised by every person who feels like commenting, but you should answer reasonable comments with an explanation.

Common Sense and Courtesy

No document can take the place of common sense and good taste. Use your best judgment, while you put a bit of thought into how your work can be made easier to review. If you do these things your pull requests will get merged with less friction.

Trivial Edits

Each incoming Pull Request needs to be reviewed, checked, and then merged.

While automation helps with this, each contribution also has an engineering cost. Therefore it is appreciated if you do NOT make trivial edits and fixes, but instead focus on giving the entire file a review.

If you find one grammatical or spelling error, it is likely that there are more in that file. You can really make your pull request count by checking the formatting, checking for broken links, and fixing errors, then submitting all the fixes at once for that file.

Some questions to consider:

  • Can the file be improved further?
  • Does the trivial edit greatly improve the quality of the content?

10.1.2.3 - Adding Documentation

Learn how to add documentation to the United Manufacturing Hub.

To contribute new content pages or improve existing content pages, open a pull request (PR). Make sure you follow all the general contributing guidelines in the Getting started section, as well as the documentation specific guidelines.

If your change is small, or you’re unfamiliar with git, read Changes using GitHub to learn how to edit a page.

If your changes are large, read Work from a local fork to learn how to make changes locally on your computer.

Contributing basics

  • Write United Manufacturing Hub documentation in Markdown and build the UMH docs site using Hugo.
  • The source is in GitHub.
  • Page content types describe the presentation of documentation content in Hugo.
  • You can use Docsy shortcodes or custom Hugo shortcodes to contribute to UMH documentation.
  • In addition to the standard Hugo shortcodes, we use a number of custom Hugo shortcodes in our documentation to control the presentation of content.
  • Documentation source is available in multiple languages in /content/. Each language has its own folder with a two-letter code determined by the ISO 639-1 standard. For example, English documentation source is stored in /content/en/docs/.
  • For more information about contributing to documentation in multiple languages or starting a new translation, see localization.

Changes using GitHub

If you’re less experienced with git workflows, here’s an easier method of opening a pull request. Figure 1 outlines the steps and the details follow.

[Flowchart: a new contributor starts from the umh/umh.docs.umh.app repository on GitHub, selects Edit this page, makes changes in the GitHub markdown editor, fills in and submits the Propose file change form, then fills in and creates the pull request.]

Figure 1. Steps for opening a PR using GitHub.

  1. On the page where you see the issue, select the Edit this page option in the right-hand side navigation panel.

  2. Make your changes in the GitHub markdown editor.

  3. Below the editor, fill in the Propose file change form. In the first field, give your commit message a title. In the second field, provide a description.

    Do not use any GitHub Keywords in your commit message. You can add those to the pull request description later.

  4. Select Propose file change.

  5. Select Create pull request.

  6. The Open a pull request screen appears. Fill in the form:

    • The Subject field of the pull request defaults to the commit summary. You can change it if needed.
    • The Body contains your extended commit message, if you have one, and some template text. Add the details the template text asks for, then delete the extra template text.
    • Leave the Allow edits from maintainers checkbox selected.

    PR descriptions are a great way to help reviewers understand your change. For more information, see Opening a PR.

  7. Select Create pull request.

Addressing feedback in GitHub

Before merging a pull request, UMH community members review and approve it. If you have someone specific in mind, leave a comment with their GitHub username in it.

If a reviewer asks you to make changes:

  1. Go to the Files changed tab.
  2. Select the pencil (edit) icon on any files changed by the pull request.
  3. Make the changes requested.
  4. Commit the changes.

When your review is complete, a reviewer merges your PR and your changes go live a few minutes later.

Work from a local fork

If you’re more experienced with git, or if your changes are larger than a few lines, work from a local fork.

Make sure you set up your local environment before you start.

Figure 2 shows the steps to follow when you work from a local fork. The details for each step follow.

[Flowchart: fork the umh/umh.docs.umh.app repository, create a local clone and set the upstream remote, create a branch (for example my_new_branch), make changes with a text editor, preview them locally with Hugo on localhost:1313, then commit and push to origin/my_new_branch.]

Figure 2. Working from a local fork to make your changes.

Fork the united-manufacturing-hub/umh.docs.umh.app repository

  1. Navigate to the united-manufacturing-hub/umh.docs.umh.app repository.
  2. Select Fork.

Fetch commits

Before proceeding, verify that your environment is set up correctly.

  1. Confirm your origin and upstream repositories:

    git remote -v
    

    Output is similar to:

    origin  https://github.com/<github_username>/umh.docs.umh.app.git (fetch)
    origin  https://github.com/<github_username>/umh.docs.umh.app.git (push)
    upstream        https://github.com/united-manufacturing-hub/umh.docs.umh.app.git (fetch)
    upstream        no_push (push)
    
  2. Fetch commits from your fork’s origin/main and united-manufacturing-hub/umh.docs.umh.app’s upstream/main:

    git fetch origin
    git fetch upstream
    

    This makes sure your local repository is up to date before you start making changes.

Create a branch

  1. Decide which branch to base your work on:

    • For improvements to existing content, use upstream/main.
    • For new content about existing features, use upstream/main.
    • For localized content, use the localization’s conventions. For more information, see localizing United Manufacturing Hub documentation.
    • It is helpful to name branches like [Purpose]/[ID]/[Title] where Purpose is docs, feat, or fix and ID is the issue identifier (or xxx if there is no related issue). See the example after this list.

    If you need help choosing a branch, reach out on the Discord channel.

  2. Create a new branch based on the branch identified in step 1. This example assumes the base branch is upstream/main:

    git checkout -b <my_new_branch> upstream/main
    
  3. Make your changes using a text editor.
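
For instance, combining the naming convention from step 1 with the checkout command above, a branch for a small documentation fix with no related issue might be created like this (the branch name is purely an example):

# Hypothetical branch name following the [Purpose]/[ID]/[Title] convention
git checkout -b docs/xxx/fix-contribution-typos upstream/main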

At any time, use the git status command to see what files you’ve changed.

Commit your changes

When you are ready to submit a pull request, commit your changes.

  1. In your local repository, check which files you need to commit:

    git status
    

    Output is similar to:

    On branch <my_new_branch>
    Your branch is up to date with 'origin/<my_new_branch>'.
    
    Changes not staged for commit:
    (use "git add <file>..." to update what will be committed)
    (use "git checkout -- <file>..." to discard changes in working directory)
    
    modified:   content/en/docs/development/contribute/new-content/add-documentation.md
    
    no changes added to commit (use "git add" and/or "git commit -a")
    
  2. Add the files listed under Changes not staged for commit to the commit:

    git add <your_file_name>
    

    Repeat this for each file.

  3. After adding all the files, create a commit:

    git commit -m "Your commit message"
    

    Do not use any GitHub Keywords in your commit message. You can add those to the pull request description later.

  4. Push your local branch and its new commit to your remote fork:

    git push origin <my_new_branch>
    

Preview your changes locally

It’s a good idea to preview your changes locally before pushing them or opening a pull request. A preview lets you catch build errors or markdown formatting problems.

Install and use the hugo command on your computer:

  1. Install Hugo.

  2. If you have not updated your website repository, the website/themes/docsy directory is empty. The site cannot build without a local copy of the theme. To update the website theme, run:

    git submodule update --init --recursive --depth 1
    
  3. In a terminal, go to your United Manufacturing Hub website repository and start the Hugo server:

    cd <path_to_your_repo>/umh.docs.umh.app
    hugo server --buildFuture
    

    Alternatively, if you have installed GNU make and GNU awk:

    cd <path_to_your_repo>
    make serve
    
  4. In a web browser, navigate to http://localhost:1313. Hugo watches the changes and rebuilds the site as needed.

  5. To stop the local Hugo instance, go back to the terminal and type Ctrl+C, or close the terminal window.

Open a pull request from your fork to united-manufacturing-hub/umh.docs.umh.app

Figure 3 shows the steps to open a PR from your fork to the umh/umh.docs.umh.app. The details follow.

[Flowchart: go to the umh/umh.docs.umh.app repository, select New Pull Request, select compare across forks, choose your fork from the head repository drop-down and your branch from the compare drop-down, select Create Pull Request, add a description, then select Create pull request.]

Figure 3. Steps to open a PR from your fork to the umh/umh.docs.umh.app.

  1. In a web browser, go to the united-manufacturing-hub/umh.docs.umh.app repository.

  2. Select New Pull Request.

  3. Select compare across forks.

  4. From the head repository drop-down menu, select your fork.

  5. From the compare drop-down menu, select your branch.

  6. Select Create Pull Request.

  7. Add a description for your pull request:

    • Title (50 characters or less): Summarize the intent of the change.

    • Description: Describe the change in more detail.

      • If there is a related GitHub issue, include Fixes #12345 or Closes #12345 in the description. If used, GitHub’s automation closes the mentioned issue after the PR is merged. If there are other related PRs, link those as well.
      • If you want advice on something specific, include any questions you’d like reviewers to think about in your description.
  8. Select the Create pull request button.

Congratulations! Your pull request is available in Pull requests.

After opening a PR, GitHub runs automated tests and tries to deploy a preview using Cloudflare Pages.

  • If the Cloudflare Page build fails, select Details for more information.
  • If the Cloudflare Page build succeeds, selecting Details opens a staged version of the United Manufacturing Hub website with your changes applied. This is how reviewers check your changes.

You should also add labels to your PR.

Addressing feedback locally

  1. After making your changes, amend your previous commit:

    git commit -a --amend
    
    • -a: commits all changes
    • --amend: amends the previous commit, rather than creating a new one
  2. Update your commit message if needed.

  3. Use git push origin <my_new_branch> to push your changes and re-run the Cloudflare tests.

    If you use git commit -m instead of amending, you must squash your commits before merging.

Changes from reviewers

Sometimes reviewers commit to your pull request. Before making any other changes, fetch those commits.

  1. Fetch commits from your remote fork and rebase your working branch:

    git fetch origin
    git rebase origin/<your-branch-name>
    
  2. After rebasing, force-push new changes to your fork:

    git push --force-with-lease origin <your-branch-name>
    

Merge conflicts and rebasing

For more information, see Git Branching - Basic Branching and Merging, Advanced Merging, or ask in the Discord channel for help.

If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR.

  1. Update your fork and rebase your local branch:

    git fetch origin
    git rebase origin/<your-branch-name>
    

    Then force-push the changes to your fork:

    git push --force-with-lease origin <your-branch-name>
    
  2. Fetch changes from united-manufacturing-hub/umh.docs.umh.app’s upstream/main and rebase your branch:

    git fetch upstream
    git rebase upstream/main
    
  3. Inspect the results of the rebase:

    git status
    

    This results in a number of files marked as conflicted.

  4. Open each conflicted file and look for the conflict markers: >>>, <<<, and ===. Resolve the conflict and delete the conflict marker.

    For more information, see How conflicts are presented.

  5. Add the files to the changeset:

    git add <filename>
    
  6. Continue the rebase:

    git rebase --continue
    
  7. Repeat steps 4 to 6 as needed.

    After applying all commits, the git status command shows that the rebase is complete.

  8. Force-push the branch to your fork:

    git push --force-with-lease origin <your-branch-name>
    

    The pull request no longer shows any conflicts.

Squashing commits

For more information, see Git Tools - Rewriting History, or ask in the Discord channel for help.

If your PR has multiple commits, you must squash them into a single commit before merging your PR. You can check the number of commits on your PR’s Commits tab or by running the git log command locally.

This topic assumes vim as the command line text editor.

  1. Start an interactive rebase:

    git rebase -i HEAD~<number_of_commits_in_branch>
    

    Squashing commits is a form of rebasing. The -i switch tells git you want to rebase interactively. HEAD~<number_of_commits_in_branch> indicates how many commits to look at for the rebase.

    Output is similar to:

    pick d875112ca Original commit
    pick 4fa167b80 Address feedback 1
    pick 7d54e15ee Address feedback 2
    
    # Rebase 3d183f680..7d54e15ee onto 3d183f680 (3 commands)
    
    ...
    
    # These lines can be re-ordered; they are executed from top to bottom.
    

    The first section of the output lists the commits in the rebase. The second section lists the options for each commit. Changing the word pick changes the status of the commit once the rebase is complete.

    For the purposes of rebasing, focus on squash and pick.

    For more information, see Interactive Mode.

  2. Start editing the file.

    Change the original text:

    pick d875112ca Original commit
    pick 4fa167b80 Address feedback 1
    pick 7d54e15ee Address feedback 2
    

    To:

    pick d875112ca Original commit
    squash 4fa167b80 Address feedback 1
    squash 7d54e15ee Address feedback 2
    

    This squashes commits 4fa167b80 Address feedback 1 and 7d54e15ee Address feedback 2 into d875112ca Original commit, leaving only d875112ca Original commit as a part of the timeline.

  3. Save and exit your file.

  4. Push your squashed commit:

    git push --force-with-lease origin <branch_name>
    

10.1.2.4 - Suggesting content improvements

This page describes how to suggest improvements to the United Manufacturing Hub project.

If you notice an issue with the United Manufacturing Hub or one of its components, like the documentation, or have an idea for new content, then open an issue. All you need is a GitHub account and a web browser.

In most cases, new work on the United Manufacturing Hub begins with an issue in GitHub. UMH maintainers then review, categorize and tag issues as needed. Next, you or another member of the United Manufacturing Hub community open a pull request with changes to resolve the issue.

Opening an issue

If you want to suggest improvements to existing content or notice an error, then open an issue.

  1. Go to the GitHub repository for the content you want to improve, like the main repository or the documentation repository.
  2. Click Issues, then click New issue.
  3. There are multiple issue templates to choose from. Choose the one that best describes your issue.
  4. Fill out the issue template with as many details as you can. If you have a specific suggestion for how to resolve the issue, include it in the issue description.
  5. Click Submit new issue.

After submitting, check in on your issue occasionally or turn on GitHub notifications. Reviewers and other community members might ask questions before they can take action on your issue.

How to file great issues

Keep the following in mind when filing an issue:

  • Provide a clear issue description. Describe what specifically is missing, out of date, wrong, or needs improvement.
  • Explain the specific impact the issue has on users.
  • Limit the scope of a given issue to a reasonable unit of work. For problems with a large scope, break them down into smaller issues. For example, “Fix the security docs” is too broad, but “Add details to the ‘Restricting network access’ topic” is specific enough to be actionable.
  • Search the existing issues to see if there’s anything related or similar to the new issue.
  • If the new issue relates to another issue or pull request, refer to it either by its full URL or by the issue or pull request number prefixed with a # character. For example, Introduced by #987654.
  • Follow the Code of Conduct. Respect your fellow contributors. For example, “The docs are terrible” is not helpful or polite feedback.

10.1.3 - United Manufacturing Hub

Learn how to contribute to the United Manufacturing Hub.

10.1.3.1 - Setup Local Environment

This document describes how to set up your local environment for contributing to the United Manufacturing Hub.

The following instructions describe how to set up your local environment for contributing to the United Manufacturing Hub.

You can use any text editor or IDE. However, we recommend using JetBrains GoLand.

Requirements

The following tools are required to contribute to the United Manufacturing Hub. Use the links to install the correct version for your operating system. We recommend using a package manager where possible (for Windows, we recommend using Chocolatey).

  • Git
  • Go version 1.19 or later
  • Docker version 20.10 or later
  • kubectl version 1.26 or later
  • Helm version 3.11 or later
  • k3d version 5.0 or later
  • GNU C Compiler version 12 or later. The gcc binaries must be in your PATH environment variable, and the go variable CGO_ENABLED must be set to 1. You can check this by running go env CGO_ENABLED in your terminal.
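
For example, you can check the current value and, if necessary, enable cgo from your terminal (go env -w persists the setting in your Go environment):

go env CGO_ENABLED        # prints the current value; it should be 1
go env -w CGO_ENABLED=1   # persistently enables cgo if it was 0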

Other tools that are not required, but are recommended:

Fork the repository

If you are not a member of the United Manufacturing Hub organization, you will need to fork the repository to your own GitHub account. This is done by clicking the Fork button in the top-right corner of the united-manufacturing-hub/united-manufacturing-hub repository page.

Clone the repository

Clone the repository to your local machine:

git clone https://github.com/<user>/united-manufacturing-hub.git
# or: git clone git@github.com:<user>/united-manufacturing-hub.git

Where <user> is your GitHub username, or united-manufacturing-hub if you are a member of the United Manufacturing Hub organization.

If you are not a member of the United Manufacturing Hub organization, you will need to add the upstream repository as a remote:

git remote add upstream https://github.com/united-manufacturing-hub/united-manufacturing-hub.git
# or: git remote add upstream git@github.com:united-manufacturing-hub/united-manufacturing-hub.git

# Never push to upstream master
git remote set-url --push upstream no_push

Install dependencies

Download the go dependencies:

make go-deps

Build the container images

These are the make targets to manage containers:

# Build the container images
make docker-build

# Push the container images
make docker-push

# Build and push the container images
make docker

You can pass the following variables to change the behavior of the make targets:

  • CTR_REPO: The container repository to push the images to. Defaults to ghcr.io/united-manufacturing-hub.
  • CTR_TAG: The tag to use for the container images. Defaults to latest.
  • CTR_IMG: Space-separated list of container images. Defaults to all the images in the deployment directory.
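
For example, to build and push all images to your own registry under a test tag, you might run something like the following (the repository and tag are placeholders):

# Hypothetical values: replace the repository and tag with your own
make docker CTR_REPO=ghcr.io/<your_user> CTR_TAG=test-my-feature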

Run a cluster locally

To run a local cluster, run:

# Create a cluster that runs the latest version of the United Manufacturing Hub
make cluster-install

# Create a cluster that runs the local version of the United Manufacturing Hub
make cluster-install CHART=./deployment/helm/united-manufacturing-hub

You can pass the following variables to change the behavior of the make targets:

  • CLUSTER_NAME: The name of the cluster. Defaults to umh.
  • CHART: The Helm chart to use. Defaults to united-manufacturing-hub/united-manufacturing-hub.
  • VERSION: The version of the Helm chart to use. Default is empty, which means the latest version.
  • VALUES_FILE: The Helm values file to use. Default is empty, which means the default values.
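
For example, to spin up a cluster with a specific chart version and a custom values file, something like the following could be used (both values are placeholders):

# Hypothetical values: pin a chart version and use a custom values file
make cluster-install VERSION=<chart-version> VALUES_FILE=./my-values.yaml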

Test

To run the unit tests, run:

make go-test-unit

To run e2e tests, run:

make helm-test-upgrade

# To run the upgrade test with data
make helm-test-upgrade-with-data

Other useful commands

# Display the help for the Makefile
make help

# Pass the PRINT_HELP=y flag to make to print the help for each target
make cluster-install PRINT_HELP=y

What’s next

10.1.3.2 - Coding Conventions

This document outlines a collection of guidelines, style suggestions, and tips for writing code in the different programming languages used throughout the United Manufacturing Hub project.

Code conventions

  • Bash

  • Go

    • Go Code Review Comments
    • Effective Go
    • Know and avoid Go landmines
    • Comment your code.
    • Command-line flags should use dashes, not underscores
    • Naming
      • Please consider package name when selecting an interface name, and avoid redundancy. For example, storage.Interface is better than storage.StorageInterface.
      • Do not use uppercase characters, underscores, or dashes in package names.
      • Please consider parent directory name when choosing a package name. For example, pkg/controllers/autoscaler/foo.go should say package autoscaler not package autoscalercontroller.
        • Unless there’s a good reason, the package foo line should match the name of the directory in which the .go file exists.
        • Importers can use a different name if they need to disambiguate.
      • Locks should be called lock and should never be embedded (always lock sync.Mutex). When multiple locks are present, give each lock a distinct name following Go conventions: stateLock, mapLock etc. See the sketch after this list.

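A minimal Go sketch of the lock-naming convention above; the types are hypothetical and not taken from the UMH codebase:

package autoscaler

import "sync"

// cache has a single lock: call it "lock" and never embed it.
type cache struct {
	lock  sync.Mutex
	items map[string]string
}

// scheduler holds multiple locks: give each a distinct, descriptive name.
type scheduler struct {
	stateLock sync.Mutex
	state     string

	queueLock sync.Mutex
	queue     []string
}
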
Testing conventions

  • All new packages and most new significant functionality must come with unit tests.
  • Significant features should come with integration and/or end-to-end tests.
  • Do not expect an asynchronous thing to happen immediately—do not wait for one second and expect a pod to be running. Wait and retry instead.

Directory and file conventions

  • Avoid package sprawl. Find an appropriate subdirectory for new packages.
    • Libraries with no appropriate home belong in new package subdirectories of pkg/util.
  • Avoid general utility packages. Packages called “util” are suspect. Instead, derive a name that describes your desired function. For example, the utility functions dealing with waiting for operations are in the wait package and include functionality like Poll. The full name is wait.Poll.
  • All filenames should be lowercase.
  • Go source files and directories use underscores, not dashes.
    • Package directories should generally avoid using separators as much as possible. When package names are multiple words, they usually should be in nested subdirectories.
  • Document directories and filenames should use dashes rather than underscores.
  • Go code for normal third-party dependencies is managed using go modules.

10.1.3.3 - Automation Tools

This section contains the description of the automation tools used in the United Manufacturing Hub project.

Automation tools are an essential part of the United Manufacturing Hub project. They automate the building and testing of the project’s code, ensuring that it remains of high quality and stays reliable.

We rely on GitHub Actions for running the pipelines, which are defined in the .github/workflows directory of the project’s repository.

Here’s a brief overview of each workflow:

Build Docker Images

This pipeline builds and pushes all the Docker images for the project, tagging them using the branch name or the git tag. This way there is always a tagged version for the latest release of the UMH, as well as a specific version for each branch to use for testing.

It runs on push events only when relevant files have been changed, such as the Dockerfiles or the source code.
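
A minimal sketch of such a path-filtered trigger in a workflow file; the file name and paths are illustrative and not copied from the actual UMH workflows:

# Hypothetical excerpt of a workflow trigger, not the real workflow file
on:
  push:
    paths:
      - '**/Dockerfile'
      - '**/*.go'
      - 'deployment/**'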

GitGuardian Scan

This pipeline scans the code for security vulnerabilities, such as exposed secrets.

It runs on both push and pull request events.

Test Deployment

Small deployment test

(Deactivated for now, as they were flaky; they will be replaced in the future with E2E tests.)

This pipeline group verifies that the current changes can be successfully installed and that data flows correctly. There are two pipelines: a “tiny” version with the minimum amount of services needed to run the stack, and a “full” version with as many services as possible.

Each pipeline has two jobs. The first job installs the stacks with the current changes, and the second job tries to upgrade from the latest stable version to the current changes.

A test is run in each workflow to verify that simulated data flows through MQTT, NodeRed, Kafka, and TimescaleDB. In the full version, an additional test for sensorconnect is run, using a mocked sensor to verify the data flow.

It runs on pull request events when the Helm configuration or the source code changes.

Full E2E test

On every push to main and staging, an E2E test is executed. More information about this can be found on GitHub.

10.1.3.4 - Release Process

This page describes how to release a new version of the United Manufacturing Hub.

Releases are coordinated by the United Manufacturing Hub team. All the features and bug fixes due for a release are tracked in the internal project board.

Once all the features and bug fixes for a release are ready and merged into the staging branch, the release process can start.

Companion

This section is for internal use at UMH.

Testing

If a new version of the Companion is ready to be released, it must be tested before it can be published. The testing process is done in the staging environment.

The developer can push to the staging branch all the changes that need to be tested, including the new version definition in the Updater and in the version.json file. They can then use the make docker_tag GIT_TAG=<semver-tag-to-be-released> command from the Companion directory to build and push the image. After that, from the staging environment, they can trigger the update process.

This process does not make the changes available to users, but keep in mind that the tagged version could still be accidentally used. Once the testing is done, all the changes are pushed to main, and the new release is published, the image will be overwritten with the correct one.

Preparing the Documentation

Begin by drafting new documentation within the /docs/whatsnew directory of the United Manufacturing Hub documentation repository. Your draft should comprehensively include:

  • The UMH version rolled out with this release.
  • The new Companion version.
  • Versions of any installed plugins, such as Benthos-UMH.

Initiate your document with an executive summary that encapsulates updates and changes across all platforms, including UMH and Companion.

Version Update Procedure

Navigate to the ManagementConsole repository and contribute a new .go file within the /updater/cmd/upgrades path. This file’s name must adhere to the semantic versioning convention of the update (e.g., 0.0.5.go).

This file should:

  • Implement the Version interface defined in upgrade_interface.go.
  • Include PreMigration and PostMigration functions. These functions should return another function that, when executed, returns nil unless specific migration tasks are necessary. This nested function structure allows for conditional execution of migration steps, as demonstrated in the PostMigration example below:
    func (v *v0x0x5) PostMigration() func(version *semver.Version, clientset kubernetes.Interface) error {
        return func(version *semver.Version, clientset kubernetes.Interface) error {
            zap.S().Infof("Post-Migration 0.0.5")
            return nil
        }
    }
    
  • Define GetImageVersion to return the Docker tag associated with the new version. For 0.0.5 this would look like:
    func (v *v0x0x5) GetImageVersion() *semver.Version {
        return semver.New(0, 0, 5, "", "")
    }
    
  • Specify any Kubernetes controllers (e.g., Statefulsets, Deployments) needing restart post-update in the GetPodControllers function. Usually you just need to restart the companion itself, so you can use:
    func (v *v0x0x5) GetPodControllers() []types.KubernetesController {
        return []types.KubernetesController{
            {
                Name: constants.StatefulsetName,
                Type: types.Statefulset,
            },
        }
    }
    

Validate that all Kubernetes objects referenced here are designed to restart after their Pod is terminated. This is especially important for Jobs.

Inside versions.go, make sure to add your version to the buildVersionLinkedList function.

func buildVersionLinkedList() error {
	var err error
	builderOnce.Do(func() {
		zap.S().Infof("Building version list")
		start := v0x0x1{}
		versionLinkedList = &start
		/*
		    Other previous versions
		 */
		
		// Our new version
		err = addVersion(&v0x0x5{})
		if err != nil {
			zap.S().Warnf("Failed to add 0.0.5 to version list: %s", err)
			return
		}
		zap.S().Infof("Build version list")
	})
	return err
}

Update the version.json in the frontend/static/version directory with the new image tag and incorporate the changelog derived from your initial documentation draft.

{
  "companion": {
    "versions": [
      {
        "semver": "0.0.1",
        "changelog": {
          "full": ["INTERNAL TESTING 0.0.1"],
          "short": "Bugfixes"
        },
        "requiresManualIntervention": false
      },
       
       // Other previous versions        

       // Our new version 
      {
        "semver": "0.0.5",
        "changelog": {
          "full": ["See 0.0.4"],
          "short": "This version is the same as 0.0.5 and is used for upgrade testing"
        },
        "requiresManualIntervention": false
      }
    ]
  }
}

Finalizing the Release

To finalize:

  1. Submit a PR to the documentation repository to transition the release notes from draft to final.
  2. Initiate a PR from the staging to the main branch within the ManagementConsole repository, ensuring to reference the documentation PR.
  3. Confirm the success of all test suites.
  4. Merge the code changes and formalize the release on GitHub, labeling it with the semantic version (e.g., 0.0.5, excluding any preceding v).
  5. Merge the documentation PR to publicize the new version within the official documentation.

Checklist

  • Draft documentation in /docs/whatsnew with version details and summary.
  • Add new .go file for version update in /updater/cmd/upgrades.
  • Implement Version interface and necessary migration functions.
  • Update version.json with new image tag and changelog.
  • Submit PR to finalize documentation.
  • Create and merge PR in ManagementConsole repository, referencing documentation PR.
  • Validate tests and merge code changes.
  • Release new GitHub version without the v prefix.
  • Merge documentation PR to publish new version details.

Helm Chart

Prerelease

The prerelease process is used to test the release before it is published. If bugs are found during the prerelease, they can be fixed and the release process can be restarted. Once the prerelease is finished, the release can be published.

  1. Create a prerelease branch from staging:

    git checkout staging
    git pull
    git checkout -b <next-version>-prerelease1
    
  2. Update the version and appVersion fields in the Chart.yaml file to the next version:

    version: <next-version>-prerelease1
    appVersion: <next-version>-prerelease1
    
  3. Validate that all external docker images are correctly overwritten. This is especially important if an external chart is updated. The easiest way to do this is to run helm template and check the output.

  4. Navigate to the deployment/helm-repo directory and run the following commands:

    helm package ../united-manufacturing-hub
    helm repo index --url https://staging.united-manufacturing-hub.pages.dev --merge index.yaml .
    

    Pay attention to use - instead of . as the separator in <next-version>.

  5. Commit and push the changes:

    git add .
    git commit -m "build: <next-version>-prerelease1"
    git push origin <next-version>-prerelease1
    
  6. Merge the prerelease branch into staging

Test

All the new releases must be thoroughly tested before they can be published. This includes specific tests for the new features and bug fixes, as well as general tests for the whole stack.

General tests include, but are not limited to:

  • Deploy the stack with flatcar
  • Upgrade the stack from the previous version
  • Deploy the stack on Karbon 300 and test with real sensors

If any bugs are found during the testing phase, they must be fixed and pushed to the prerelease branch. Multiple prerelease versions can be created if necessary.

Release

Once all the tests have passed, the release can be published. Merge the prerelease branch into staging and create a new release branch.

  1. Create a release branch from main:

    git checkout main
    git pull
    git checkout -b <next-version>
    
  2. Update the version and appVersion fields in the Chart.yaml file to the next version:

    version: <next-version>
    appVersion: <next-version>
    
  3. Navigate to the deployment/helm-repo directory and run the following commands:

    helm package ../united-manufacturing-hub
    helm repo index --url https://repo.umh.app --merge index.yaml .
    
  4. Commit and push the changes, tagging the release:

    git add .
    git commit -m "build: <next-version>"
    git tag <next-version>
    git push origin <next-version> --tags
    
  5. Merge the release branch into staging

  6. Merge staging into main and create a new release from the tag on GitHub.

10.1.4 - Documentation

Learn how to contribute to the United Manufacturing Hub documentation.

Welcome

Welcome to the United Manufacturing Hub documentation! We’re excited that you want to contribute to the project.

The first place to start is the Getting Started With Contributing page. It provides a high-level overview of the contribution process.

Once you’re familiar with the contribution process, you can prepare for your first contribution by reading the documents in this section.

United Manufacturing Hub documentation contributors:

  • Improve existing content
  • Create new content
  • Translate the documentation
  • Manage and publish the documentation parts of the United Manufacturing Hub release cycle

Your first contribution

You can prepare for your first contribution by reviewing several steps beforehand. The next figure outlines the steps and the details to follow.

[Flowchart. Suggested prep: read the contribution overview, read the UMH content and style guides, and learn about Hugo page content types and shortcodes. First contribution: review PRs from other UMH members, check the umh.docs.umh.app issues list for good first PRs, then open a PR.]

10.1.4.1 - Setup Local Environment

This document describes how to set up your local environment for contributing to United Manufacturing Hub documentation website.

The following instructions describe how to set up your local environment for contributing to United Manufacturing Hub documentation website.

You can use any text editor to contribute to the documentation. However, we recommend using Visual Studio Code with the Markdown All in One extension. Additional extensions that can be useful are:

Requirements

The following tools are required to contribute to the documentation website. Use your preferred package manager to install them (for Windows users, we recommend using Chocolatey).

Other tools that are not required, but are recommended:

Fork the documentation repository

If you are not a member of the United Manufacturing Hub organization, you will need to fork the repository to your own GitHub account. This is done by clicking the Fork button in the top-right corner of the united-manufacturing-hub/umh.docs.umh.app repository page.

Clone the repository

Clone the repository to your local machine:

git clone https://github.com/<user>/umh.docs.umh.app.git
# or: git clone git@github.com:<user>/umh.docs.umh.app.git

Where <user> is your GitHub username, or united-manufacturing-hub if you are a member of the United Manufacturing Hub organization.

If you are not a member of the United Manufacturing Hub organization, you will need to add the upstream repository as a remote:

git remote add upstream https://github.com/united-manufacturing-hub/umh.docs.umh.app.git

# Never push to upstream master
git remote set-url --push upstream no_push

Setup the environment

If you are running on a Windows system, manually install the above required tools.

If you are running on a Linux system, or can run a bash shell, you can use the following commands to install the required tools:

cd <path_to_your_repo>
make install

Run the development server

Now it’s time to run the server locally.

Navigate to the umh.docs.umh.app directory inside the repository you cloned earlier.

cd <path_to_your_repo>/umh.docs.umh.app

If you have not installed GNU make, run the following command:

hugo server --buildDrafts

Otherwise, run the following command:

make serve

Either method will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.

You can stop the server by pressing Ctrl+C in the terminal.

What’s next

10.1.4.2 - Write a new topic

This page shows how to create a new topic for the United Manufacturing Hub docs.

Choosing a page type

As you prepare to write a new topic, think about the page type that would fit your content the best. We have many archetypes to choose from, and you can create a new one if none of the existing ones fit your needs.

Generally, each archetype is specific to a particular type of content. For example, the upgrading archetype is used for pages that describe how to upgrade to a new version of United Manufacturing Hub, and most of the content in the Production Guide section of the docs uses the tasks archetype.

In the content guide you can find a description of the most used archetypes. If you need to create a new archetype, you can find more information in the Hugo documentation.

Choosing a directory

The directory in which you put your file is mostly determined by the page type you choose.

If you think that your topic doesn’t belong to any of the existing sections, you should first discuss with the United Manufacturing Hub team where your topic should go. They will coordinate the creation of a new section if needed.

Choosing a title and filename

Choose a title that has the keywords you want search engines to find. Create a filename that uses the words in your title separated by hyphens. For example, the topic with title Access Factoryinsight Outside the Cluster has filename access-factoryinsight-outside-cluster.md. You don’t need to put “united manufacturing hub” in the filename, because “umh” is already in the URL for the topic, for example:

https://umh.docs.umh.app/docs/production-guide/administration/access-factoryinsight-outside-cluster/

Adding the topic title to the front matter

In your topic, put a title field in the front matter. The front matter is the YAML block that is between the triple-dashed lines at the top of the page. Here’s an example:

---
title: Access Factoryinsight Outside the Cluster
---

Most of the archetypes automatically create the page title using the filename, but always check that the title makes sense.

Creating a new page

Once you have chosen the archetype, the location, and the file name, you can create a new page using the hugo new command. For example, to create a new page using the tasks archetype, run the following command:

hugo new docs/production-guide/my-first-task.md -k tasks

Placing your topic in the table of contents

The table of contents is built dynamically using the directory structure of the documentation source. The top-level directories under /content/en/docs/ create top-level navigation, and subdirectories each have entries in the table of contents.

Each subdirectory has a file _index.md, which represents the “home” page for a given subdirectory’s content. The _index.md does not need a template. It can contain overview content about the topics in the subdirectory.

Other files in a directory are sorted alphabetically by default. This is almost never the best order. To control the relative sorting of topics in a subdirectory, set the weight: front-matter key to an integer. Typically, we use multiples of 10, to account for adding topics later. For instance, a topic with weight 10 will come before one with weight 20.

You can hide a topic from the table of contents by setting toc_hide: true, and you can hide the list of child pages at the bottom of an _index.md file by setting no_list: true.
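
For example, the front matter of a section's _index.md that is hidden from the table of contents and does not list its child pages could look like this (the title and weight values are illustrative):

---
title: My Section
weight: 30
toc_hide: true
no_list: true
---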

Embedding code in your topic

If you want to include some code in your topic, you can embed the code in your file directly using the markdown code block syntax, as shown in the sketch after this list. This is recommended for the following cases (not an exhaustive list):

  • The code shows the output from a command such as kubectl get deploy mydeployment -o json | jq '.status'.
  • The code is not generic enough for users to try out.
  • The code is an incomplete example because its purpose is to highlight a portion of a larger file.
  • The code is not meant for users to try out due to other reasons.
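
As a minimal sketch, embedding code directly in your Markdown source looks like this (the command is the jq example from the list above):

```bash
kubectl get deploy mydeployment -o json | jq '.status'
```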

Including code from another file

Another way to include code in your topic is to create a new, complete sample file (or group of sample files) and then reference the sample from your topic. Use this method to include sample YAML files when the sample is generic and reusable, and you want the reader to try it out themselves.

When adding a new standalone sample file, such as a YAML file, place the code in one of the <LANG>/examples/ subdirectories where <LANG> is the language for the topic. In your topic file, use the codenew shortcode:

{{< codenew file="<RELPATH>/my-example-yaml" >}}

where <RELPATH> is the path to the file to include, relative to the examples directory. The following Hugo shortcode references a YAML file located at /content/en/examples/pods/storage/gce-volume.yaml.

{{< codenew file="pods/storage/gce-volume.yaml" >}}

Adding images to a topic

Put image files in the /static/images directory. The preferred image format is SVG. Organize images in subdirectories under /static/images as needed.

Add images to the page using markdown image syntax:

![Alt text](/images/my-image.svg)

What’s next

10.1.4.3 - Style Overview

This section provides guidance on writing style, content formatting and organization, and using Hugo customizations specific to UMH documentation.

The topics in this section provide guidance on writing style, content formatting and organization, and using Hugo customizations specific to UMH documentation.

10.1.4.3.1 - Content Guide

This page contains guidelines for the United Manufacturing Hub documentation.

In this guide, you’ll find guidelines for the content of the United Manufacturing Hub documentation, that is, what content is allowed and how to organize it.

For information about the styling, follow the style guide, and for a quick guide to writing a new page, follow the quick start guide.

What’s allowed

United Manufacturing Hub docs allow content for third-party projects only when:

  • Content documents software in the United Manufacturing Hub project
  • Content documents software that’s out of project but necessary for United Manufacturing Hub to function

Sections

The United Manufacturing Hub documentation is organized into sections. Each section contains a specific set of pages that are relevant to a user goal.

Get started

The Get started section contains information to help new users get started with the United Manufacturing Hub. It’s the first section a reader sees when visiting the website, and it guides users through the installation process.

Features

The Features section contains information about the capabilities of the United Manufacturing Hub. It’s a high-level overview of the project’s features, and it’s intended for users who want to learn more about them without diving into the technical details.

Data Model

The Data Model section contains information about the data model of the United Manufacturing Hub. It’s intended for users who want to learn more about the data model of the United Manufacturing Hub and how it’s used by the different components of the project.

Architecture

The Architecture section contains technical information about the United Manufacturing Hub. It’s intended for users who want to learn more about the project’s architecture and design decisions. Here you can find information about the different components of the United Manufacturing Hub and how they interact with each other.

Production Guide

The Production Guide section contains a series of guides that help users to set up and operate the United Manufacturing Hub.

What’s New

The What’s New section contains a high-level overview of all the releases of the United Manufacturing Hub. Usually, only the last 3 to 4 releases are displayed in the sidebar, but all the releases are available in the section page.

Reference

The Reference section contains technical information about the different components of the United Manufacturing Hub. It’s intended for users who want to learn more about the different components of the project and how they work.

Development

The Development section contains information about contributing to the United Manufacturing Hub project. It’s intended for users who want to contribute to the project, either by writing code or documentation.

Page Organization

This site uses Hugo. In Hugo, content organization is a core concept.

Page Lists

Page Order

The documentation side menu, the documentation page browser etc. are listed using Hugo’s default sort order, which sorts by weight (from 1), date (newest first), and finally by the link title.

Given that, if you want to move a page or a section up, set a weight in the page’s front matter:

title: My Page
weight: 10

For page weights, it can be smart not to use 1, 2, 3…, but some other interval, say 10, 20, 30…. This allows you to insert pages where you want later. Additionally, weights within the same directory (section) should not overlap with one another. This makes sure that content is always organized correctly, especially in localized content.

In some sections, like the What’s New section, it’s easier to manage the order using a negative weight. This is because the What’s New section is organized by release version, and the release version is a string, so it’s easier to use a negative weight to sort the releases in the correct order.
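
For example, the front matter of a release page in the What's New section could use a negative weight (the title and value here are illustrative):

---
title: What's New in Version 0.9.11
weight: -10
---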

Side Menu

The documentation side-bar menu is built from the current section tree starting below docs/.

It will show all sections and their pages.

If you don’t want to list a section or page, set the toc_hide flag to true in front matter:

toc_hide: true

When you navigate to a section that has content, the specific section or page (for example, _index.md) is shown. Otherwise, the first page inside that section is shown.

Page Bundles

In addition to standalone content pages (Markdown files), Hugo supports Page Bundles.

One example is Custom Hugo Shortcodes. It is considered a leaf bundle. Everything below the directory, including the index.md, will be part of the bundle. This also includes page-relative links, images that can be processed, and so on:

en/docs/home/contribute/includes
├── example1.md
├── example2.md
├── index.md
└── podtemplate.json

Another widely used example is the includes bundle. It sets headless: true in front matter, which means that it does not get its own URL. It is only used in other pages.

en/includes
├── default-storage-class-prereqs.md
├── index.md
├── partner-script.js
├── partner-style.css
├── task-tutorial-prereqs.md
├── user-guide-content-moved.md
└── user-guide-migration-notice.md

Some important notes about the files in the bundles:

  • For translated bundles, any missing non-content files will be inherited from languages above. This avoids duplication.
  • All the files in a bundle are what Hugo calls Resources and you can provide metadata per language, such as parameters and title, even if it does not support front matter (YAML files, etc.). See Page Resources Metadata.
  • The value you get from .RelPermalink of a Resource is page-relative. See Permalinks.

Page Content Types

Hugo uses archetypes to define page types. The archetypes are located in the archetypes directory.

Each archetype informally defines its expected page structure. There are two main archetypes, described below, but it’s possible to create new archetypes for specific page types that are frequently used.

To create a new page using an archetype, run the following command:

hugo new -k <archetype> docs/<section>/<page-name>.md

Content Types

Concept

A concept page explains some aspect of United Manufacturing Hub. For example, a concept page might describe a specific component of the United Manufacturing Hub and explain the role it plays as an application while it is deployed, scaled, and updated. Typically, concept pages don’t include sequences of steps, but instead provide links to tasks or tutorials.

To write a new concept page, create a Markdown file with the following characteristics:

Concept pages are divided into three sections:

  • overview
  • body
  • whatsnext

The overview and body sections appear as comments in the concept page. You can add the whatsnext section to your page with the heading shortcode.

Fill each section with content. Follow these guidelines (a minimal page skeleton is sketched after this list):

  • Organize content with H2 and H3 headings.
  • For overview, set the topic’s context with a single paragraph.
  • For body, explain the concept.
  • For whatsnext, provide a bulleted list of topics (5 maximum) to learn more about the concept.
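
As a minimal, illustrative sketch (the title and placeholder text are assumptions, not taken from an existing page), a concept page source could look like this:

---
title: My New Concept
---

<!-- overview -->

A single paragraph that sets the context for the topic.

<!-- body -->

## Explain the concept

Explain the concept here, organized with H2 and H3 headings, and link to related tasks or tutorials.

## {{% heading "whatsnext" %}}

- A bulleted list of up to 5 related topics.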

Task

A task page shows how to do a single thing. The idea is to give readers a sequence of steps that they can actually do as they read the page. A task page can be short or long, provided it stays focused on one area. In a task page, it is OK to blend brief explanations with the steps to be performed, but if you need to provide a lengthy explanation, you should do that in a concept topic. Related task and concept topics should link to each other.

To write a new task page, create a Markdown file with the following characteristics:

A task page is divided into the following sections:

  • overview
  • prerequisites
  • steps
  • discussion
  • whatsnext

The overview, steps, and discussion sections appear as comments in the task page. You can add the prerequisites and whatsnext sections to your page with the heading shortcode.

Within each section, write your content. Use the following guidelines:

  • Use a minimum of H2 headings (with two leading # characters). The sections themselves are titled automatically by the template.
  • For overview, use a paragraph to set context for the entire topic.
  • For prerequisites, use bullet lists when possible. Start adding additional prerequisites below the include. The default prerequisites include a running Kubernetes cluster.
  • For steps, use numbered lists.
  • For discussion, use normal content to expand upon the information covered in steps.
  • For whatsnext, give a bullet list of up to 5 topics the reader might be interested in reading next.

For an example of a short task page, see Expose Grafana to the internet. For an example of a longer task page, see Access the database.
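
As an illustrative sketch (the title, steps, and placeholder text are assumptions), a task page source could look like this:

---
title: Do a Single Thing
---

<!-- overview -->

A paragraph that sets the context for the entire topic.

## {{% heading "prerequisites" %}}

- Any additional prerequisites, listed below the default include.

<!-- steps -->

## Do the thing

1. First step.
1. Second step.

<!-- discussion -->

Normal content that expands upon the information covered in the steps.

## {{% heading "whatsnext" %}}

- Up to 5 topics the reader might be interested in reading next.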

Content Sections

Each page content type contains a number of sections defined by Markdown comments and HTML headings. You can add content headings to your page with the heading shortcode. The comments and headings help maintain the structure of the page content types.

Examples of Markdown comments defining page content sections:

<!-- overview -->

<!-- body -->

To create common headings in your content pages, use the heading shortcode with a heading string.

Examples of heading strings:

  • whatsnext
  • prerequisites
  • objectives
  • cleanup
  • synopsis
  • seealso
  • options

For example, to create a whatsnext heading, add the heading shortcode with the “whatsnext” string:

## {{% heading "whatsnext" %}}

You can declare a prerequisites heading as follows:

## {{% heading "prerequisites" %}}

The heading shortcode expects one string parameter. The heading string parameter matches the prefix of a variable in the i18n/<lang>.toml files. For example:

i18n/en.toml:

[heading_whatsnext]
other = "What's next"

What’s next

10.1.4.3.2 - Style Guide

This page gives writing style guidelines for the United Manufacturing Hub documentation.

This page gives writing style guidelines for the United Manufacturing Hub documentation. These are guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.

For additional information on creating new content for the United Manufacturing Hub documentation, read the Documentation Content Guide.

Language

The United Manufacturing Hub documentation has not been translated yet. But if you want to help with that, you can check out the localization page.

Documentation formatting standards

Use upper camel case for Kubernetes objects

When you refer specifically to interacting with a Kubernetes object, use UpperCamelCase, also known as Pascal case.

When you are generally discussing a Kubernetes object, use sentence-style capitalization.

The following examples focus on capitalization. For more information about formatting Kubernetes object names, review the related guidance on Code Style.

Do and Don't - Use Pascal case for Kubernetes objects

| Do | Don't |
| --- | --- |
| The ConfigMap of … | The Config map of … |
| The Volume object contains a hostPath field. | The volume object contains a hostPath field. |
| Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace. |
| For managing confidential data, consider using a Secret. | For managing confidential data, consider using a secret. |

Use angle brackets for placeholders

Use angle brackets for placeholders. Tell the reader what a placeholder represents, for example:

Display information about a pod:

kubectl describe pod <pod-name> -n <namespace>

Use bold for user interface elements

Do and Don't - Bold interface elements

| Do | Don't |
| --- | --- |
| Click **Fork**. | Click “Fork”. |
| Select **Other**. | Select “Other”. |

Use italics to define or introduce new terms

Do and Don't - Use italics for new terms

| Do | Don't |
| --- | --- |
| A *cluster* is a set of nodes … | A “cluster” is a set of nodes … |
| These components form the *control plane*. | These components form the **control plane**. |

Use code style for filenames, directories, and paths

Do and Don't - Use code style for filenames, directories, and paths

| Do | Don't |
| --- | --- |
| Open the `envars.yaml` file. | Open the envars.yaml file. |
| Go to the `/docs/tutorials` directory. | Go to the /docs/tutorials directory. |
| Open the `/_data/concepts.yaml` file. | Open the /_data/concepts.yaml file. |

Use the international standard for punctuation inside quotes

Do and Don't - Use the international standard for punctuation inside quotes

| Do | Don't |
| --- | --- |
| events are recorded with an associated “stage”. | events are recorded with an associated “stage.” |
| The copy is called a “fork”. | The copy is called a “fork.” |

Inline code formatting

Use code style for inline code, commands, and API objects

For inline code in an HTML document, use the <code> tag. In a Markdown document, use the backtick (`).

Do and Don't - Use code style for inline code, commands, and API objects

| Do | Don't |
| --- | --- |
| The `kubectl run` command creates a Pod. | The “kubectl run” command creates a pod. |
| The kubelet on each node acquires a `Lease`… | The kubelet on each node acquires a lease… |
| A `PersistentVolume` represents durable storage… | A Persistent Volume represents durable storage… |
| For declarative management, use `kubectl apply`. | For declarative management, use “kubectl apply”. |
| Enclose code samples with triple backticks. (```) | Enclose code samples with any other syntax. |
| Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**. |
| Use triple backticks before and after a multi-line block of code for fenced code blocks. | Use multi-line blocks of code to create diagrams, flowcharts, or other illustrations. |
| Use meaningful variable names that have a context. | Use variable names such as ‘foo’, ‘bar’, and ‘baz’ that are not meaningful and lack context. |
| Remove trailing spaces in the code. | Add trailing spaces in the code, where these are important, because the screen reader will read out the spaces as well. |

The website supports syntax highlighting for code samples, but specifying a language is optional. Syntax highlighting in the code block should conform to the contrast guidelines.

Use code style for object field names and namespaces

Do and Don't - Use code style for object field names

| Do | Don't |
| --- | --- |
| Set the value of the `replicas` field in the configuration file. | Set the value of the “replicas” field in the configuration file. |
| The value of the `exec` field is an ExecAction object. | The value of the “exec” field is an ExecAction object. |
| Run the process as a DaemonSet in the `kube-system` namespace. | Run the process as a DaemonSet in the kube-system namespace. |

Use code style for command tools and component names

Do and Don't - Use code style for command tools and component names

| Do | Don't |
| --- | --- |
| The `kubelet` preserves node stability. | The kubelet preserves node stability. |
| The `kubectl` handles locating and authenticating to the API server. | The kubectl handles locating and authenticating to the apiserver. |
| Run the process with the certificate, `kube-apiserver --client-ca-file=FILENAME`. | Run the process with the certificate, kube-apiserver –client-ca-file=FILENAME. |

Starting a sentence with a component tool or component name

Do and Don't - Starting a sentence with a component tool or component name

| Do | Don't |
| --- | --- |
| The kubeadm tool bootstraps and provisions machines in a cluster. | kubeadm tool bootstraps and provisions machines in a cluster. |
| The kube-scheduler is the default scheduler for United Manufacturing Hub. | kube-scheduler is the default scheduler for United Manufacturing Hub. |

Use a general descriptor over a component name

Do and Don't - Use a general descriptor over a component name

| Do | Don't |
| --- | --- |
| The United Manufacturing Hub MQTT broker handles… | The HiveMQ handles… |
| To visualize data in the database… | To visualize data in TimescaleDB… |

Use normal style for string and integer field values

For field values of type string or integer, use normal style without quotation marks.

Do and Don't - Use normal style for string and integer field values

| Do | Don't |
| --- | --- |
| Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to “Always”. |
| Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.16`. |
| Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`. |

Code snippet formatting

Don’t include the command prompt

Do and Don't - Don't include the command prompt

| Do | Don't |
| --- | --- |
| kubectl get pods | $ kubectl get pods |

Separate commands from output

Verify that the pod is running on your chosen node:

kubectl get pods --output=wide

The output is similar to this:

NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
nginx    1/1       Running   0          13s    10.200.0.4   worker0

Versioning United Manufacturing Hub examples

Code examples and configuration examples that include version information should be consistent with the accompanying text.

If the information is version specific, the United Manufacturing Hub version needs to be defined in the prerequisites section of the Task template or the Tutorial template. Once the page is saved, the prerequisites section is shown as Before you begin.

To specify the United Manufacturing Hub version for a task or tutorial page, include minimum-version in the front matter of the page.

If the example YAML is in a standalone file, find and review the topics that include it as a reference. Verify that any topics using the standalone YAML have the appropriate version information defined. If a stand-alone YAML file is not referenced from any topics, consider deleting it instead of updating it.

For example, if you are writing a tutorial that is relevant to United Manufacturing Hub version 0.9.11, the front-matter of your markdown file should look something like:

---
title: <your tutorial title here>
minimum-version: 0.9.11
---

In code and configuration examples, do not include comments about alternative versions. Be careful not to include incorrect statements in your examples as comments, such as:

apiVersion: v1 # earlier versions use...
kind: Pod
...

United Manufacturing Hub word list

A list of UMH-specific terms and words to be used consistently across the site.

United Manufacturing Hub word list

| Term | Usage |
| --- | --- |
| United Manufacturing Hub | United Manufacturing Hub should always be capitalized. |
| Management Console | Management Console should always be capitalized. |

Shortcodes

Hugo Shortcodes help create different rhetorical appeal levels.

There are multiple custom shortcodes that can be used in the United Manufacturing Hub documentation. Refer to the shortcode guide for more information.

Markdown elements

Line breaks

Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headings follow the first-level heading (or the title) without any preceding paragraphs or text. A two-line spacing helps to better visualize the overall structure of the content in a code editor.
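
A minimal sketch of this spacing (the content is placeholder text), with two blank lines before each second-level heading and a single blank line between other blocks:

## First section

A paragraph, separated from the next block by a single blank line.

- A list item
- Another list item


## Second section

Another paragraph.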

Headings and titles

People accessing this documentation may use a screen reader or other assistive technology (AT). Screen readers are linear output devices: they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest.

Do and Don't - Headings

| Do | Don't |
| --- | --- |
| Update the title in the front matter of the page or blog post. | Use first level heading, as Hugo automatically converts the title in the front matter of the page into a first-level heading. |
| Use ordered headings to provide a meaningful high-level outline of your content. | Use headings level 4 through 6, unless it is absolutely necessary. If your content is that detailed, it may need to be broken into separate articles. |
| Use pound or hash signs (#) for non-blog post content. | Use underlines (--- or ===) to designate first-level headings. |
| Use sentence case for headings in the page body. For example, Change the security context | Use title case for headings in the page body. For example, Change The Security Context |
| Use title case for the page title in the front matter. For example, title: Execute Kafka Shell Scripts | Use sentence case for page titles in the front matter. For example, don’t use title: Execute Kafka shell scripts |

Paragraphs

Do and Don't - Paragraphs

| Do | Don't |
| --- | --- |
| Try to keep paragraphs under 6 sentences. | Indent the first paragraph with space characters. For example, ⋅⋅⋅Three spaces before a paragraph will indent it. |
| Use three hyphens (---) to create a horizontal rule. Use horizontal rules for breaks in paragraph content. For example, a change of scene in a story, or a shift of topic within a section. | Use horizontal rules for decoration. |

Do and Don't - Links

| Do | Don't |
| --- | --- |
| Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See Check required ports for more details. | Use ambiguous terms such as “click here”. For example: Certain ports are open on your machines. See here for more details. |
| Write Markdown-style links: [link text](/URL). For example: [Hugo shortcodes](/docs/development/contribute/documentation/style/hugo-shortcodes/#table-captions) and the output is Hugo shortcodes. | Write HTML-style links: <a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>, or create links that open in new tabs or windows. For example: [example website](https://example.com){target="_blank"} |

Lists

Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list. Website navigation links can also be marked up as list items; after all they are nothing but a group of related links.

  • End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences.

    Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.
  • Use the number one (1.) for ordered lists.

  • Use (+), (*), or (-) for unordered lists.

  • Leave a blank line after each list.

  • Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).

  • List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab.
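
A short sketch that follows these conventions (the items are placeholder text):

1. Open the configuration file.
1. Change the value.

    A second paragraph of the same list item, indented by four spaces.

- A related link.
- Another related link.
    - A nested item, indented with four spaces.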

Tables

The semantic purpose of a data table is to present tabular data. Sighted users can quickly scan the table but a screen reader goes through line by line. A table caption is used to create a descriptive title for a data table. Assistive technologies (AT) use the HTML table caption element to identify the table contents to the user within the page structure.

Content best practices

This section contains suggested best practices for clear, concise, and consistent content.

Use present tense

Do and Don't - Use present tense

| Do | Don't |
| --- | --- |
| This command starts a proxy. | This command will start a proxy. |

Exception: Use future or past tense if it is required to convey the correct meaning.

Use active voice

Do and Don't - Use active voice

| Do | Don't |
| --- | --- |
| You can explore the API using a browser. | The API can be explored using a browser. |
| The YAML file specifies the replica count. | The replica count is specified in the YAML file. |

Exception: Use passive voice if active voice leads to an awkward construction.

Use simple and direct language

Use simple and direct language. Avoid using unnecessary phrases, such as saying “please.”

Do and Don't - Use simple and direct language

| Do | Don't |
| --- | --- |
| To create a ReplicaSet, … | In order to create a ReplicaSet, … |
| See the configuration file. | Please see the configuration file. |
| View the pods. | With this next command, we’ll view the pods. |

Address the reader as “you”

Do and Don't - Addressing the reader

| Do | Don't |
| --- | --- |
| You can create a Deployment by … | We’ll create a Deployment by … |
| In the preceding output, you can see… | In the preceding output, we can see … |

Avoid Latin phrases

Prefer English terms over Latin abbreviations.

Do and Don't - Avoid Latin phrases

| Do | Don't |
| --- | --- |
| For example, … | e.g., … |
| That is, … | i.e., … |

Exception: Use “etc.” for et cetera.

Patterns to avoid

Avoid using “we”

Using “we” in a sentence can be confusing, because the reader might not know whether they’re part of the “we” you’re describing.

Do and Don't - Patterns to avoid

| Do | Don't |
| --- | --- |
| Version 1.4 includes … | In version 1.4, we have added … |
| United Manufacturing Hub provides a new feature for … | We provide a new feature … |
| This page teaches you how to use pods. | In this page, we are going to learn about pods. |

Avoid jargon and idioms

Some readers speak English as a second language. Avoid jargon and idioms to help them understand better.

Do and Don't - Avoid jargon and idioms

| Do | Don't |
| --- | --- |
| Internally, … | Under the hood, … |
| Create a new cluster. | Turn up a new cluster. |

Avoid statements about the future

Avoid making promises or giving hints about the future. If you need to talk about an alpha feature, put the text under a heading that identifies it as alpha information.

An exception to this rule is documentation about announced deprecations targeting removal in future versions.

Avoid statements that will soon be out of date

Avoid words like “currently” and “new.” A feature that is new today might not be considered new in a few months.

Do and Don't - Avoid statements that will soon be out of date

| Do | Don't |
| --- | --- |
| In version 1.4, … | In the current version, … |
| The Federation feature provides … | The new Federation feature provides … |

Avoid words that assume a specific level of understanding

Avoid words such as “just”, “simply”, “easy”, “easily”, or “simple”. These words do not add value.

Do and Don't - Avoid insensitive words

| Do | Don't |
| --- | --- |
| Include one command in … | Include just one command in … |
| Run the container … | Simply run the container … |
| You can remove … | You can easily remove … |
| These steps … | These simple steps … |

What’s next

10.1.4.3.3 - Diagram Guide

This guide shows you how to create, edit and share diagrams using the Mermaid JavaScript library.
This guide is taken from the Kubernetes documentation, so there might be some references to Kubernetes that are not relevant to United Manufacturing Hub.

This guide shows you how to create, edit and share diagrams using the Mermaid JavaScript library. Mermaid.js allows you to generate diagrams using a simple markdown-like syntax inside Markdown files. You can also use Mermaid to generate .svg or .png image files that you can add to your documentation.

The target audience for this guide is anybody wishing to learn about Mermaid and/or how to create and add diagrams to United Manufacturing Hub documentation.

Figure 1 outlines the topics covered in this section.

[Figure 1: flowchart of the topics covered in this section. Why are diagrams useful? -> Mermaid.js (build diagrams with markdown, on-line live editor) -> 3 methods for creating diagrams -> Examples -> Styling and captions -> Tips.]

All you need to begin working with Mermaid is the following:

You can click on each diagram in this section to view the code and rendered diagram in the Mermaid live editor.

Why you should use diagrams in documentation

Diagrams improve documentation clarity and comprehension. There are advantages for both the user and the contributor.

The user benefits include:

  • Friendly landing spot. A detailed text-only greeting page could intimidate users, in particular, first-time United Manufacturing Hub users.
  • Faster grasp of concepts. A diagram can help users understand the key points of a complex topic. Your diagram can serve as a visual learning guide to dive into the topic details.
  • Better retention. For some, it is easier to recall pictures rather than text.

The contributor benefits include:

  • Assist in developing the structure and content of your contribution. For example, you can start with a simple diagram covering the high-level points and then dive into details.
  • Expand and grow the user community. Easily consumed documentation augmented with diagrams attracts new users who might previously have been reluctant to engage due to perceived complexities.

You should consider your target audience. In addition to experienced UMH users, you will have many who are new to United Manufacturing Hub. Even a simple diagram can assist new users in absorbing United Manufacturing Hub concepts. They become emboldened and more confident to further explore United Manufacturing Hub and the documentation.

Mermaid

Mermaid is an open source JavaScript library that allows you to create, edit and easily share diagrams using a simple, markdown-like syntax configured inline in Markdown files.

The following lists features of Mermaid:

  • Simple code syntax.
  • Includes a web-based tool allowing you to code and preview your diagrams.
  • Supports multiple formats including flowchart, state and sequence.
  • Easy collaboration with colleagues by sharing a per-diagram URL.
  • Broad selection of shapes, lines, themes and styling.

The following lists advantages of using Mermaid:

  • No need for separate, non-Mermaid diagram tools.
  • Adheres to existing PR workflow. You can think of Mermaid code as just Markdown text included in your PR.
  • Simple tool builds simple diagrams. You don’t want to get bogged down (re)crafting an overly complex and detailed picture. Keep it simple!

Mermaid provides a simple, open and transparent method for the community to add, edit and collaborate on diagrams for new or existing documentation.

You can still use Mermaid to create/edit diagrams even if it’s not supported in your environment. This method is called Mermaid+SVG and is explained below.

Live editor

The Mermaid live editor is a web-based tool that enables you to create, edit and review diagrams.

The following lists live editor functions:

  • Displays Mermaid code and rendered diagram.
  • Generates a URL for each saved diagram. The URL is displayed in the URL field of your browser. You can share the URL with colleagues who can access and modify the diagram.
  • Option to download .svg or .png files.

The live editor is the easiest and fastest way to create and edit Mermaid diagrams.

Methods for creating diagrams

Figure 2 outlines the three methods to generate and add diagrams.

[Figure 2: a contributor can choose between three methods - Inline (Mermaid code added to the .md file), Mermaid+SVG (add a Mermaid-generated svg file to the .md file), and External tool (add an external-tool-generated svg file to the .md file).]

Figure 2. Methods to create diagrams.

Inline

Figure 3 outlines the steps to follow for adding a diagram using the Inline method.

[Figure 3: steps for the Inline method - 1. use the live editor to create/edit the diagram, 2. store the diagram URL somewhere, 3. copy the Mermaid code to the page markdown file, 4. add a caption.]

The following lists the steps you should follow for adding a diagram using the Inline method:

  1. Create your diagram using the live editor.
  2. Store the diagram URL somewhere for later access.
  3. Copy the mermaid code to the location in your .md file where you want the diagram to appear.
  4. Add a caption below the diagram using Markdown text.

A Hugo build runs the Mermaid code and turns it into a diagram.

You may find keeping track of diagram URLs is cumbersome. If so, add a note in the .md file that the Mermaid code is self-documenting. Contributors can copy the Mermaid code to and from the live editor for diagram edits.
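For example, a short comment above the Mermaid block (the wording here is just an illustration) makes this explicit for other contributors:

<!-- The Mermaid code below is self-documenting. -->
<!-- To edit the diagram, paste the code between the mermaid shortcode tags into the Mermaid live editor, make your changes, and copy the result back into this file. -->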

Here is a sample code snippet contained in an .md file:

---
title: My PR
---
Figure 17 shows a simple A to B process.
some markdown text
...
{{< mermaid >}} 
    graph TB
    A --> B
{{< /mermaid >}}

Figure 17. A to B
more text
You must include the Hugo Mermaid shortcode tags at the start and end of the Mermaid code block. You should add a diagram caption below the diagram.

For more details on diagram captions, see How to use captions.

The following lists advantages of the Inline method:

  • Live editor tool.
  • Easy to copy Mermaid code to and from the live editor and your .md file.
  • No need for separate .svg image file handling.
  • Content text, diagram code and diagram caption contained in the same .md file.

You should use the local and Cloudflare previews to verify the diagram is properly rendered.

The Mermaid live editor feature set may not support the umh/umh.docs.umh.app Mermaid feature set. You might see a syntax error or a blank screen after the Hugo build. If that is the case, consider using the Mermaid+SVG method.

Mermaid+SVG

Figure 4 outlines the steps to follow for adding a diagram using the Mermaid+SVG method.

[Mermaid diagram: 1. Use live editor to create/edit diagram → 2. Store diagram URL somewhere → 3. Generate .svg file and download to images/ folder → 4. Use figure shortcode to reference .svg file in page .md file → 5. Add caption]

Figure 4. Mermaid+SVG method steps.

The following lists the steps you should follow for adding a diagram using the Mermaid+SVG method:

  1. Create your diagram using the live editor.
  2. Store the diagram URL somewhere for later access.
  3. Generate an .svg image file for the diagram and download it to the appropriate images/ folder.
  4. Use the {{< figure >}} shortcode to reference the diagram in the .md file.
  5. Add a caption using the {{< figure >}} shortcode’s caption parameter.

For example, use the live editor to create a diagram called boxnet. Store the diagram URL somewhere for later access. Generate and download a boxnet.svg file to the appropriate ../images/ folder.
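You typically export the .svg directly from the live editor. If you prefer the command line, the Mermaid CLI can generate the same file from a saved copy of the Mermaid code; this is optional and not part of the steps above, and the file names are illustrative:

# Optional: export the diagram locally with the Mermaid CLI.
# Assumes boxnet.mmd contains the same Mermaid code used in the live editor.
npm install -g @mermaid-js/mermaid-cli
mmdc -i boxnet.mmd -o boxnet.svg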

Use the {{< figure >}} shortcode in your PR’s .md file to reference the .svg image file and add a caption.

{{< figure src="/static/images/boxnet.svg" alt="Boxnet figure" class="diagram-large" caption="Figure 14. Boxnet caption" >}}

For more details on diagram captions, see How to use captions.

The {{< figure >}} shortcode is the preferred method for adding .svg image files to your documentation. You can also use the standard Markdown image syntax: ![my boxnet diagram](/static/images/boxnet.svg). If you do, you will need to add a caption below the diagram.
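If you go the plain Markdown route, the diagram reference and its caption would look like this in the .md file (reusing the boxnet example above):

![my boxnet diagram](/static/images/boxnet.svg)

Figure 14. Boxnet caption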

You should add the live editor URL as a comment block in the .svg image file using a text editor. For example, you would include the following at the beginning of the .svg image file:

<!-- To view or edit the mermaid code, use the following URL: -->
<!-- https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb ... <remainder of the URL> -->

The following lists advantages of the Mermaid+SVG method:

  • Live editor tool.
  • Live editor tool supports the most current Mermaid feature set.
  • Employ existing umh/website methods for handling .svg image files.
  • Environment doesn’t require Mermaid support.

Be sure to check that your diagram renders properly using the local and Netlify previews.

External tool

Figure 5 outlines the steps to follow for adding a diagram using the External Tool method.

First, use your external tool to create the diagram and save it as an .svg or .png image file. After that, use the same steps as the Mermaid+SVG method for adding .svg image files.

[Mermaid diagram: 1. Use external tool to create/edit diagram → 2. If possible, save diagram coordinates for contributor access → 3. Generate .svg or .png file and download to appropriate images/ folder → 4. Use figure shortcode to reference svg or png file in page .md file → 5. Add caption]

Figure 5. External tool method steps.

The following lists the steps you should follow for adding a diagram using the External Tool method:

  1. Use your external tool to create a diagram.
  2. Save the diagram coordinates for contributor access. For example, your tool may offer a link to the diagram image, or you could place the source code file, such as an .xml file, in a public repository for later contributor access.
  3. Generate and save the diagram as an .svg or .png image file. Download this file to the appropriate ../images/ folder.
  4. Use the {{< figure >}} shortcode to reference the diagram in the .md file.
  5. Add a caption using the {{< figure >}} shortcode’s caption parameter.

Here is the {{< figure >}} shortcode for the images/apple.svg diagram:

{{< figure src="/static/images/apple.svg" alt="red-apple-figure" class="diagram-large" caption="Figure 9. A Big Red Apple" >}} 

If your external drawing tool permits:

  • You can incorporate multiple .svg or .png logos, icons and images into your diagram. However, make sure you observe copyright and follow the United Manufacturing Hub documentation guidelines on the use of third party content.
  • You should save the diagram source coordinates for later contributor access. For example, your tool may offer a link to the diagram image, or you could place the source code file, such as an .xml file, somewhere for contributor access.
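As an example of that second point, a short comment in the page's .md file (or in the image file itself) can point contributors to the editable source; the tool and path below are purely hypothetical:

<!-- Diagram source: created with an external drawing tool. -->
<!-- Editable source file (hypothetical location): https://github.com/example-org/docs-diagrams/blob/main/apple.xml -->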

The following lists advantages of the External Tool method:

  • Contributor familiarity with external tool.
  • Useful when a diagram requires more detail than Mermaid can provide.

Don’t forget to check that your diagram renders correctly using the local and Netlify previews.

Examples

This section shows several examples of Mermaid diagrams.

The code block examples omit the Hugo Mermaid shortcode tags. This allows you to copy the code block into the live editor to experiment on your own. Note that the live editor doesn't recognize Hugo shortcodes.

Example 1 - Pod topology spread constraints

Figure 6 shows the diagram appearing in the Pod topology spread constraints page.

[Mermaid diagram: a cluster with zoneA (Node1, Node2) and zoneB (Node3, Node4)]

Figure 6. Pod Topology Spread Constraints.

Code block:

graph TB
   subgraph "zoneB"
       n3(Node3)
       n4(Node4)
   end
   subgraph "zoneA"
       n1(Node1)
       n2(Node2)
   end
 
   classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
   classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
   classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
   class n1,n2,n3,n4 k8s;
   class zoneA,zoneB cluster;

Example 2 - Ingress

Figure 7 shows the diagram appearing in the What is Ingress page.

[Mermaid diagram: client → Ingress-managed load balancer → Ingress → routing rule → Service → Pod, Pod (inside the cluster)]

Figure 7. Ingress.

Code block:

graph LR;
 client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress];
 ingress-->|routing rule|service[Service];
 subgraph cluster
 ingress;
 service-->pod1[Pod];
 service-->pod2[Pod];
 end
 classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
 classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
 classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
 class ingress,service,pod1,pod2 k8s;
 class client plain;
 class cluster cluster;

Example 3 - UMH system flow (WIP)

Figure 8 depicts a Mermaid sequence diagram showing the system flow between UMH components to start a container.

Code block:

%%{init:{"theme":"neutral"}}%%
sequenceDiagram
    actor me
    participant apiSrv as control plane<br><br>api-server
    participant etcd as control plane<br><br>etcd datastore
    participant cntrlMgr as control plane<br><br>controller<br>manager
    participant sched as control plane<br><br>scheduler
    participant kubelet as node<br><br>kubelet
    participant container as node<br><br>container<br>runtime
    me->>apiSrv: 1. kubectl create -f pod.yaml
    apiSrv-->>etcd: 2. save new state
    cntrlMgr->>apiSrv: 3. check for changes
    sched->>apiSrv: 4. watch for unassigned pod(s)
    apiSrv->>sched: 5. notify about pod w nodename=" "
    sched->>apiSrv: 6. assign pod to node
    apiSrv-->>etcd: 7. save new state
    kubelet->>apiSrv: 8. look for newly assigned pod(s)
    apiSrv->>kubelet: 9. bind pod to node
    kubelet->>container: 10. start container
    kubelet->>apiSrv: 11. update pod status
    apiSrv-->>etcd: 12. save new state

How to style diagrams

You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.

  • classDef defines a class of style attributes.
  • class defines one or more elements to apply the class to.

In the code for figure 7, you can see examples of both.

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; // defines style for the k8s class
class ingress,service,pod1,pod2 k8s; // k8s class is applied to elements ingress, service, pod1 and pod2.

You can include one or multiple classDef and class statements in your diagram.
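Here is a minimal, self-contained sketch that defines two classes and applies them to illustrative nodes (the Hugo shortcode tags are omitted, as in the other examples on this page):

graph LR
    A[MQTT Broker] --> B[Kafka Bridge]
    B --> C[TimescaleDB]

    classDef highlight fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
    classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
    class A,B highlight;
    class C plain;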

For more information on styling and classes, see Mermaid Styling and classes docs.

How to use captions

A caption is a brief description of a diagram. A title or a short description of the diagram are examples of captions. Captions aren’t meant to replace explanatory text you have in your documentation. Rather, they serve as a “context link” between that text and your diagram.

The combination of some text and a diagram tied together with a caption helps provide a concise representation of the information you wish to convey to the user.

Without captions, you are asking the user to scan the text above or below the diagram to figure out the meaning. This can be frustrating for the user.

Figure 9 lays out the three components for proper captioning: diagram, diagram caption and the diagram referral.

[Mermaid diagram: Diagram (inline Mermaid or SVG image files), Diagram Caption (add Figure Number. and caption text), Diagram Referral (reference Figure Number in text)]
Figure 9. Caption Components.

You should always add a caption to each diagram in your documentation.

Diagram

The Mermaid+SVG and External Tool methods generate .svg image files.

Here is the {{< figure >}} shortcode for the diagram defined in an .svg image file saved to /images/development/contribute/documentation/components-of-kubernetes.svg:

{{< figure src="/images/development/contribute/documentation/components-of-kubernetes.svg" alt="United Manufacturing Hub pod running inside a cluster" class="diagram-large" caption="Figure 4. United Manufacturing Hub Architecture Components >}}

You should pass the src, alt, class and caption values into the {{< figure >}} shortcode. You can adjust the size of the diagram using diagram-large, diagram-medium and diagram-small classes.

Diagrams created using the `Inline` method don't use the `{{< figure >}}` shortcode. The Mermaid code defines how the diagram will render on your page.

See Methods for creating diagrams for more information on the different methods for creating diagrams.

Diagram Caption

Next, add a diagram caption.

If you define your diagram in an .svg image file, then you should use the {{< figure >}} shortcode’s caption parameter.

{{< figure src="/images/development/contribute/documentation/components-of-kubernetes.svg" alt="United Manufacturing Hub pod running inside a cluster" class="diagram-large" caption="Figure 4. United Manufacturing Hub Architecture Components" >}}

If you define your diagram using inline Mermaid code, then you should use Markdown text.

Figure 4. United Manufacturing Hub Architecture Components

The following lists several items to consider when adding diagram captions:

  • Use the {{< figure >}} shortcode to add a diagram caption for Mermaid+SVG and External Tool diagrams.
  • Use simple Markdown text to add a diagram caption for the Inline method.
  • Prepend your diagram caption with Figure NUMBER.. You must use Figure and the number must be unique for each diagram in your documentation page. Add a period after the number.
  • Add your diagram caption text after the Figure NUMBER. on the same line. You must punctuate the caption with a period. Keep the caption text short.
  • Position your diagram caption BELOW your diagram.

Diagram Referral

Finally, you can add a diagram referral. This is used inside your text and should precede the diagram itself. It allows a user to connect your text with the associated diagram. The Figure NUMBER in your referral and caption must match.

You should avoid using spatial references such as "the image below" or "the following figure".

Here is an example of a diagram referral:

Figure 10 depicts the components of the United Manufacturing Hub architecture.
The control plane ...

Diagram referrals are optional and there are cases where they might not be suitable. If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a diagram referral.

Complete picture

Figure 10 shows the United Manufacturing Hub Architecture diagram that includes the diagram, diagram caption and diagram referral. The {{< figure >}} shortcode renders the diagram, adds the caption and includes the optional link parameter so you can hyperlink the diagram. The diagram referral is contained in this paragraph.

Here is the {{< figure >}} shortcode for this diagram:

{{< figure src="/images/development/contribute/documentation/components-of-kubernetes.svg" alt="United Manufacturing Hub pod running inside a cluster" class="diagram-large" caption="Figure 10. United Manufacturing Hub Architecture." link="https://kubernetes.io/docs/concepts/overview/components/" >}}
United Manufacturing Hub pod running inside a cluster

Figure 10. United Manufacturing Hub Architecture.

Tips

  • Always use the live editor to create/edit your diagram.

  • Always use Hugo local and Netlify previews to check out how the diagram appears in the documentation.

  • Include diagram source pointers such as a URL, source code location, or indicate the code is self-documenting.

  • Always use diagram captions.

  • It is very helpful to include the diagram .svg or .png image and/or the Mermaid source code in issues and PRs.

  • With the Mermaid+SVG and External Tool methods, use .svg image files because they stay sharp when you zoom in on the diagram.

  • A best practice for .svg files is to load them into an SVG editing tool and use the “Convert text to paths” function. This ensures that the diagram renders the same on all systems, regardless of font availability and font rendering support.

  • Mermaid does not support additional icons or artwork.

  • Hugo Mermaid shortcodes don’t work in the live editor.

  • Any time you modify a diagram in the live editor, you must save it to generate a new URL for the diagram.

  • Click on the diagrams in this section to view the code and diagram rendering in the live editor.

  • Look over the source code of this page, diagram-guide.md, for more examples.

  • Check out the Mermaid docs for explanations and examples.

Most importantly, keep diagrams simple. This will save time for you and fellow contributors, and allow for easier reading by new and experienced users.

10.1.4.3.4 - Custom Hugo Shortcodes

This page explains the custom Hugo shortcodes used in United Manufacturing Hub documentation.

One of the powerful features of Hugo is the ability to create custom shortcodes. Shortcodes are simple snippets of code that you can use to add complex content to your documentation.

Read more about shortcodes in the Hugo documentation.

Code example

You can use the codenew shortcode to display code examples in your documentation. This is especially useful for code snippets that you want to reuse in multiple places.

After you add a new file with a code snippet in the examples directory, you can reference it in your documentation using the codenew shortcode with the file parameter set to the path to the file, relative to the examples directory.

A Copy button is automatically added to the code snippet. When the user clicks the button, the code is copied to the clipboard.

Here’s an example:

{{< codenew file="helm/install-umh.sh" >}}

The rendered shortcode looks like this:

helm install united-manufacturing-hub united-manufacturing-hub -n united-manufacturing-hub

Heading

You can use the heading shortcode to use localized strings as headings in your documentation. The available headings are described in the content types page.

For example, to create a whatsnext heading, add the heading shortcode with the “whatsnext” string:

## {{% heading "whatsnext" %}}

Include

You can use the include shortcode to include a file in your documentation. This is especially useful for including markdown files that you want to reuse in multiple places.

After you add a new file in the includes directory, you can reference it in your documentation using the include shortcode with the first parameter set to the path to the file, relative to the includes directory.

Here’s an example:

{{< include "pod-logs.md" >}}

Mermaid

You can use the mermaid shortcode to display Mermaid diagrams in your documentation. You can find more information in the diagram guide.

Here’s an example:

{{< mermaid >}}
graph TD;
    A-->B;
    A-->C;
    B-->D;
    C-->D;
{{< /mermaid >}}

The rendered shortcode looks like this:

[Rendered Mermaid diagram of the graph defined above: A --> B, A --> C, B --> D, C --> D]

Notice

You can use the notice shortcode to display a notice in your documentation. There are four types of notices: note, warning, info, and tip.

Here’s an example:

{{< notice note >}}
This is a note.
{{< /notice >}}
{{< notice warning >}}
This is a warning.
{{< /notice >}}
{{< notice info >}}
This is an info.
{{< /notice >}}
{{< notice tip >}}
This is a tip.
{{< /notice >}}

The rendered shortcode looks like this:

This is a note.
This is a warning.
This is an info.
This is a tip.

Resource

You can use the resource shortcode to display a resource in your documentation. The resource shortcode takes these parameters:

  • name: The name of the resource.
  • type: The type of the resource.

This is useful for displaying resources whose name might change over time, like a pod name.

Here’s an example:

{{< resource type="pod" name="database" >}}

The rendered shortcode looks like this: united-manufacturing-hub-timescaledb-0

The resources are defined in the i18n/en.toml file. You can add a new resource by adding a new entry like [resource_<type>_<name>].
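
For illustration, the entry for the pod resource shown above might look like this in i18n/en.toml (a sketch based on the naming scheme described here; check the file for the exact format):

[resource_pod_database]
other = "united-manufacturing-hub-timescaledb-0"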

Table captions

You can make tables more accessible to screen readers by adding a table caption. To add a caption to a table, enclose the table with a table shortcode and specify the caption with the caption parameter.

Table captions are visible to screen readers but are visually hidden from users viewing the page in a standard browser.

Here’s an example:

{{< table caption="Configuration parameters" >}}
| Parameter  | Description                  | Default |
| :--------- | :--------------------------- | :------ |
| `timeout`  | The timeout for requests     | `30s`   |
| `logLevel` | The log level for log output | `INFO`  |
{{< /table >}}

The rendered table looks like this:

Configuration parameters

| Parameter  | Description                  | Default |
| :--------- | :--------------------------- | :------ |
| timeout    | The timeout for requests     | 30s     |
| logLevel   | The log level for log output | INFO    |

If you inspect the HTML for the table, you should see this element immediately after the opening <table> element:

<caption style="display: none;">Configuration parameters</caption>

Tabs

In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution.

The tabs shortcode takes these parameters:

  • name: The name as shown on the tab.
  • codelang: If you provide inner content to the tab shortcode, you can tell Hugo what code language to use for highlighting.
  • include: The file to include in the tab. If the tab lives in a Hugo leaf bundle, the file – which can be any MIME type supported by Hugo – is looked up in the bundle itself. If not, the content page that needs to be included is looked up relative to the current page. Note that with the include, you do not have any shortcode inner content and must use the self-closing syntax. For example, {{< tab name="Content File #1" include="example1" />}}. The language needs to be specified under codelang or the language is taken based on the file name. Non-content files are code-highlighted by default.
  • If your inner content is markdown, you must use the %-delimiter to surround the tab. For example, {{% tab name="Tab 1" %}}This is **markdown**{{% /tab %}}
  • You can combine the variations mentioned above inside a tab set.

Below is a demo of the tabs shortcode.

The tab **name** in a `tabs` definition must be unique within a content page.

Tabs demo: Code highlighting

{{< tabs name="tab_with_code" >}}
{{< tab name="Tab 1" codelang="bash" >}}
echo "This is tab 1."
{{< /tab >}}
{{< tab name="Tab 2" codelang="go" >}}
println "This is tab 2."
{{< /tab >}}
{{< /tabs >}}

Renders to:


echo "This is tab 1."


println "This is tab 2."

Tabs demo: Inline Markdown and HTML

{{< tabs name="tab_with_md" >}}
{{% tab name="Markdown" %}}
This is **some markdown.**
{{< note >}}
It can even contain shortcodes.
{{< /note >}}
{{% /tab %}}
{{< tab name="HTML" >}}
<div>
  <h3>Plain HTML</h3>
  <p>This is some <i>plain</i> HTML.</p>
</div>
{{< /tab >}}
{{< /tabs >}}

Renders to:

This is some markdown.

It can even contain shortcodes.

Plain HTML

This is some plain HTML.

Tabs demo: File include

{{< tabs name="tab_with_file_include" >}}
{{< tab name="Content File #1" include="example1" />}}
{{< tab name="Content File #2" include="example2" />}}
{{< tab name="JSON File" include="podtemplate" />}}
{{< /tabs >}}

Renders to:

This is an example content file inside the includes leaf bundle.

Included content files can also contain shortcodes.

This is another example content file inside the includes leaf bundle.

  {
    "apiVersion": "v1",
    "kind": "PodTemplate",
    "metadata": {
      "name": "nginx"
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "nginx"
        },
        "generateName": "nginx-"
      },
      "spec": {
         "containers": [{
           "name": "nginx",
           "image": "dockerfile/nginx",
           "ports": [{"containerPort": 80}]
         }]
      }
    }
  }

Version strings

To generate a version string for inclusion in the documentation, you can choose from several version shortcodes. Each version shortcode displays a version string derived from the value of a version parameter found in the site configuration file, config.toml. The two most commonly used version parameters are latest and version.

{{< param "version" >}}

The {{< param "version" >}} shortcode generates the value of the current version of the Kubernetes documentation from the version site parameter. The param shortcode accepts the name of one site parameter, in this case: version.

In previously released documentation, `latest` and `version` parameter values are not equivalent. After a new version is released, `latest` is incremented and the value of `version` for the documentation set remains unchanged. For example, a previously released version of the documentation displays `version` as `v1.19` and `latest` as `v1.20`.

Renders to:

v0.2
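
As a rough sketch, the corresponding parameters could be defined in config.toml along these lines (the [params] table name and the latest value are assumptions; check the actual site configuration):

[params]
# Version shown by {{< param "version" >}}
version = "v0.2"
# Version of the latest released documentation set
latest = "v0.2"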

{{< latest-umh-version >}}

The {{< latest-umh-version >}} shortcode returns the value of the latestUMH site parameter. The latestUMH site parameter must be updated when a new version of the UMH Helm chart is released.

Renders to:

{{< latest-umh-semver >}}

The {{< latest-umh-semver >}} shortcode generates the value of latestUMH without the “v” prefix.

Renders to:

{{< version-check >}}

The {{< version-check >}} shortcode checks if the min-kubernetes-server-version page parameter is present and then uses this value to compare to version.

Renders to:

To check the United Manufacturing Hub version, open UMHLens / OpenLens and go to Helm > Releases. The version is listed in the Version column.
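
For example, a page that requires a minimum Kubernetes server version might declare the parameter in its front matter and then call the shortcode (a sketch; the title and version value are illustrative):

---
title: Installation
min-kubernetes-server-version: "1.21"
---

{{< version-check >}}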

What’s next

10.1.4.4 - Localizing UMH documentation

This page shows you how to localize the docs for a different language.

Contribute to an existing localization

You can help add or improve the content of an existing localization.

For extra details on how to contribute to a specific localization, look for a localized version of this page.

Find your two-letter language code

First, consult the ISO 639-1 standard to find your localization’s two-letter language code. For example, the two-letter code for German is de.

Some languages use a lowercase version of the country code as defined by the ISO-3166 along with their language codes. For example, the Brazilian Portuguese language code is pt-br.

Fork and clone the repo

First, create your own fork of the united-manufacturing-hub/umh.docs.umh.app repository.

Then, clone your fork and cd into it:

git clone https://github.com/<username>/umh.docs.umh.app
cd umh.docs.umh.app

The website content directory includes subdirectories for each language. The localization you want to help out with is inside content/<two-letter-code>.

Suggest changes

Create or update your chosen localized page based on the English original. See translating content for more details.

If you notice a technical inaccuracy or other problem with the upstream (English) documentation, you should fix the upstream documentation first and then repeat the equivalent fix by updating the localization you’re working on.

Limit changes in a pull request to a single localization. Reviewing pull requests that change content in multiple localizations is problematic.

Follow Suggesting Content Improvements to propose changes to that localization. The process is similar to proposing changes to the upstream (English) content.

Start a new localization

If you want the United Manufacturing Hub documentation localized into a new language, here’s what you need to do.

All localization teams must be self-sufficient. The United Manufacturing Hub website is happy to host your work, but it’s up to you to translate it and keep existing localized content current.

You’ll need to know the two-letter language code for your language. Consult the ISO 639-1 standard to find your localization’s two-letter language code. For example, the two-letter code for Korean is ko.

If the language you are starting a localization for is spoken in various places with significant differences between the variants, it might make sense to combine the lowercased ISO-3166 country code with the language two-letter code. For example, Brazilian Portuguese is localized as pt-br.

When you start a new localization, you must localize all the minimum required content before the United Manufacturing Hub project can publish your changes to the live website.

Modify the site configuration

The United Manufacturing Hub website uses Hugo as its web framework. The website’s Hugo configuration resides in the config.toml file. You’ll need to modify config.toml to support a new localization.

Add a configuration block for the new language to config.toml under the existing [languages] block. The German block, for example, looks like:

[languages.de]
title = "United Manufacturing Hub"
description = "Dokumentation des United Manufacturing Hub"
languageName = "Deutsch (German)"
languageNameLatinScript = "Deutsch"
contentDir = "content/de"
weight = 8

The language selection bar lists the value for languageName. Assign “language name in native script and language (English language name in Latin script)” to languageName. For example, languageName = "한국어 (Korean)" or languageName = "Deutsch (German)".

languageNameLatinScript can be used to access the language name in Latin script and use it in the theme. Assign “language name in Latin script” to languageNameLatinScript. For example, languageNameLatinScript = "Korean" or languageNameLatinScript = "Deutsch".

When assigning a weight parameter for your block, find the language block with the highest weight and add 1 to that value.

For more information about Hugo’s multilingual support, see “Multilingual Mode”.

Add a new localization directory

Add a language-specific subdirectory to the content folder in the repository. For example, the two-letter code for German is de:

mkdir content/de

You also need to create a directory inside i18n/ for localized strings; look at existing localizations for an example.

For example, for German the strings live in i18n/de.toml.

Open a pull request

Next, open a pull request (PR) to add a localization to the united-manufacturing-hub/umh.docs.umh.app repository. The PR must include all the minimum required content before it can be approved.

Add a localized README file

To guide other localization contributors, add a new README-**.md to the top level of united-manufacturing-hub/umh.docs.umh.app, where ** is the two-letter language code. For example, a German README file would be README-de.md.

Guide localization contributors in the localized README-**.md file. Include the same information contained in README.md as well as:

  • A point of contact for the localization project
  • Any information specific to the localization

After you create the localized README, add a link to the file from the main English README.md, and include contact information in English. You can provide a GitHub ID, email address, Discord channel, or another method of contact.
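
A minimal sketch of what the top of such a file, for example README-de.md, might contain (the contact details below are placeholders):

# Dokumentation des United Manufacturing Hub (Deutsch)

<!-- Translate the contribution instructions from README.md here. -->

## Ansprechpartner (point of contact)

- GitHub: @beispiel-nutzer
- E-Mail: docs-de@example.com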

Launching your new localization

When a localization meets the requirements for workflow and minimum output, the UMH team does the following:

Translating content

Localizing all the United Manufacturing Hub documentation is an enormous task. It’s okay to start small and expand over time.

Minimum required content

At a minimum, all localizations must include:

| Description      | URLs                                          |
| :--------------- | :-------------------------------------------- |
| Administration   | All heading and subheading URLs               |
| Architecture     | All heading and subheading URLs               |
| Getting started  | All heading and subheading URLs               |
| Production guide | All heading and subheading URLs               |
| Site strings     | All site strings in a new localized TOML file |

Translated documents must reside in their own content/**/ subdirectory, but otherwise, follow the same URL path as the English source. For example, to prepare the Getting started tutorial for translation into German, create a subfolder under the content/de/ folder and copy the English source:

mkdir -p content/de/docs/getstarted
cp content/en/docs/getstarted/installation.md content/de/docs/getstarted/installation.md

Translation tools can speed up the translation process. For example, some editors offer plugins to quickly translate text.

Machine-generated translation is insufficient on its own. Localization requires extensive human review to meet minimum standards of quality.

To ensure accuracy in grammar and meaning, members of your localization team should carefully review all machine-generated translations before publishing.

Source files

Localizations must be based on the English files from a specific release targeted by the localization team. Each localization team can decide which release to target, referred to as the target version below.

To find source files for your target version:

  1. Navigate to the United Manufacturing Hub website repository at united-manufacturing-hub/umh.docs.umh.app.

  2. Select a branch for your target version from the following table:

| Target version | Branch |
| :------------- | :----- |
| Latest version | main   |

The main branch holds content for the current release.

Site strings in i18n

Localizations must include the contents of i18n/en.toml in a new language-specific file. Using German as an example: i18n/de.toml.

Add a new localization file to i18n/. For example, with German (de):

cp i18n/en.toml i18n/de.toml

Revise the comments at the top of the file to suit your localization, then translate the value of each string. For example, this is the German-language placeholder text for the search form:

[ui_search_placeholder]
other = "Suchen"

Localizing site strings lets you customize site-wide text and features: for example, the legal copyright text in the footer on each page.
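
For example, a footer copyright string could be localized with an entry such as the following (the key name is hypothetical; use the keys that actually exist in i18n/en.toml):

[footer_copyright]
other = "© United Manufacturing Hub. Alle Rechte vorbehalten."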

10.1.4.5 - Versioning Documentation

This page describes how to version the documentation website.

With the Beta release of the Management Console, we are introducing a new versioning system for the documentation website. This system will ensure that the documentation is versioned in sync with the Management Console’s minor versions. Each new minor release of the Management Console will correspond to a new version of the documentation.

Branches

Below is an outline of the branching strategy we will employ for versioning the documentation website:

Branching system

main branch

The main branch will serve as the living documentation for the latest released version of the Management Console. Following a new release, only patches and hotfixes will be committed to this branch.

Version branches

Upon the release of a new minor version of the Management Console, a snapshot of the main branch will be taken. This serves as an archive for the documentation corresponding to the previous version. For instance, with the release of Management Console version 1.1, we will create a branch from main named v1.0. The v1.0 branch will host the documentation for the Management Console version 1.0 and will no longer receive updates for subsequent versions.

Development branches

Simultaneously with the snapshot creation, we’ll establish a development branch for the upcoming version. For example, concurrent with the launch of Management Console version 1.1, we will initiate a dev-v1.2 branch. This branch will accumulate all the documentation updates for the forthcoming version of the Management Console. Upon the release of the next version, we will merge the dev-v1.2 branch into main, updating the documentation website to reflect the newest version.
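
On the command line, the branching steps described above might look roughly like this (a sketch, using the version numbers from the example):

# Archive the documentation of the previous version when 1.1 is released
git checkout main
git checkout -b v1.0
git push origin v1.0

# Create the development branch for the upcoming version
git checkout main
git checkout -b dev-v1.2
git push origin dev-v1.2

# When version 1.2 is released, merge the development branch into main
git checkout main
git merge dev-v1.2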

Hugo configuration

To maintain the versioning of our documentation website, specific adjustments need to be made to the Hugo configuration file (hugo.toml). Follow the steps below to ensure the versioning is correctly reflected.

  • Version branches

    • Update the latest parameter to match the branch version. For instance, for the v1.0 branch, set latest to 1.0.

    • The [[params.versions]] array should include entries for the current version and the upcoming version. For the v1.0 branch, the configuration would be:

      [[params.versions]]
      version = "1.1" # Upcoming version
      url = "https://umh.docs.umh.app"
      branch = "main"
      [[params.versions]]
      version = "1.0" # Current version
      url = "https://v1-0.umh-docs-umh-app.pages.dev/docs/"
      branch = "v1.0"
      
  • Development branches

    • Set the latest parameter to the version that the branch is preparing. If the branch is dev-v1.2, then latest should be 1.2.

    • The [[params.versions]] array should list the version being developed using the Cloudflare Pages URL. The entry for dev-v1.2 would be:

      [[params.versions]]
      version = "1.2" # Version in development
      url = "https://dev-v1-2--umh-docs-umh-app.pages.dev"
      branch = "dev-v1.2"
      [[params.versions]]
      version = "1.1" # latest version
      url = "https://umh.docs.umh.app"
      branch = "main"
      
    • Prior to merging a development branch into main, update the url for the version being released to point to the main site and adjust the entry for the previous version to its Cloudflare Pages URL. For instance, just before merging dev-v1.2:

      [[params.versions]]
      version = "1.2" # New stable version
      url = "https://umh.docs.umh.app"
      branch = "main"
      [[params.versions]]
      version = "1.1" # Previous version
      url = "https://v1-1--umh-docs-umh-app.pages.dev"
      branch = "v1.1"
      

Always ensure that the [[params.versions]] array reflects the correct order of the versions, with the newest version appearing first.

10.2 - Debugging using fgtrace

Tutorial on how to get started with fgtrace
  1. Enable fgtrace

    ActivateFgtrace

  2. Forward the new fgtrace port

    ForwardFgtrace

  3. Visit the /debug/fgtrace trace path using Insomnia or a similar tool. Please note that it will take about half a minute for a trace to complete.

    InsomniaExampleTrace

  4. Export the returned JSON

    InsomniaSaveJson

  5. Open the Perfetto UI

  6. Click on “Open trace file” and select the exported JSON

    PerfettoOpenSavedTrace

  7. Wait for it to load

  8. You are now viewing a Chrome-like waterfall graph that shows the wallclock time used by each goroutine.

    PerfettoTraceOverview

  9. Expanding a goroutine will allow you to view the function calls it made.

    PerfettoTraceDetails

  10. Please note that due to our sampling frequency, function calls that take less than 0.01 seconds will not be captured.

Changing the trace length and frequency

To control the trace length and frequency, you can use the query parameters “seconds” and “hz”.

InsomniaTraceOptions
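
For example, to request a 60-second trace sampled at 100 Hz and save the returned JSON, you could call the endpoint like this (the port is whichever one you forwarded in step 2; the values are illustrative):

curl -o trace.json "http://localhost:<forwarded-port>/debug/fgtrace?seconds=60&hz=100"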

11 - Learning Hub