Architecture

A detailed view of the architecture of the UMH stack.

The United Manufacturing Hub at its core is a Helm Chart for Kubernetes consisting of several microservices and open-source third-party applications, such as Node-RED and Grafana. This Helm Chart can be deployed in various environments, from edge devices and virtual machines to managed Kubernetes offerings. Large-scale deployments typically combine several of these deployment options.

In this chapter, we’ll explore the various microservices and applications that make up the United Manufacturing Hub, and how they work together to help you extract, contextualize, store, and visualize data from your shop floor.

```mermaid
flowchart
  subgraph UMH["United Manufacturing Hub"]
    style UMH fill:#47a0b5
    subgraph UNS["Unified Namespace"]
      style UNS fill:#f4f4f4
      kafka["Apache Kafka"]
      mqtt["HiveMQ"]
      console["Console"]
      kafka-bridge
      mqtt-kafka-bridge["mqtt-kafka-bridge"]
      click kafka "./microservices/core/kafka"
      click mqtt "./microservices/core/mqtt-broker"
      click console "./microservices/core/console"
      click kafka-bridge "./microservices/core/kafka-bridge"
      click mqtt-kafka-bridge "./microservices/core/mqtt-kafka-bridge"
      mqtt <-- MQTT --> mqtt-kafka-bridge <-- Kafka --> kafka
      kafka -- Kafka --> console
    end
    subgraph custom["Custom Microservices"]
      custom-microservice["A user provided custom microservice in the Helm Chart"]
      custom-application["A user provided custom application deployed as Kubernetes resources or as a Helm Chart"]
      click custom-microservice "./microservices/core/custom"
    end
    subgraph Historian
      style Historian fill:#f4f4f4
      kafka-to-postgresql
      timescaledb[("TimescaleDB")]
      factoryinsight
      umh-datasource
      grafana["Grafana"]
      redis
      click kafka-to-postgresql "./microservices/core/kafka-to-postgresql"
      click timescaledb "./microservices/core/database"
      click factoryinsight "./microservices/core/factoryinsight"
      click grafana "./microservices/core/grafana"
      click redis "./microservices/core/redis"
      kafka -- Kafka ---> kafka-to-postgresql
      kafka-to-postgresql -- SQL --> timescaledb
      timescaledb -- SQL --> factoryinsight
      factoryinsight -- HTTP --> umh-datasource
      umh-datasource -- Plugin --> grafana
      factoryinsight <-- RESP --> redis
      kafka-to-postgresql <-- RESP --> redis
    end
    subgraph Connectivity
      style Connectivity fill:#f4f4f4
      nodered["Node-RED"]
      barcodereader
      sensorconnect
      click nodered "./microservices/core/node-red"
      click barcodereader "./microservices/community/barcodereader"
      click sensorconnect "./microservices/core/sensorconnect"
      nodered <-- Kafka --> kafka
      barcodereader -- Kafka --> kafka
      sensorconnect -- Kafka --> kafka
    end
    subgraph Simulators
      style Simulators fill:#f4f4f4
      mqtt-simulator["IoT sensors simulator"]
      packml-simulator["PackML simulator"]
      opcua-simulator["OPC-UA simulator"]
      click mqtt-simulator "./microservices/community/mqtt-simulator"
      click packml-simulator "./microservices/community/packml-simulator"
      click opcua-simulator "./microservices/community/opcua-simulator"
      mqtt-simulator -- MQTT --> mqtt
      packml-simulator -- MQTT --> mqtt
      opcua-simulator -- OPC-UA --> nodered
    end
  end
  subgraph Datasources
    plc["PLCs"]
    other["Other systems on the shopfloor (MES, ERP, etc.)"]
    barcode["USB barcode reader"]
    ifm["IO-link sensor"]
    iot["IoT devices"]
    plc -- "Siemens S7, OPC-UA, Modbus, etc." --> nodered
    other -- " " ----> nodered
    ifm -- HTTP --> sensorconnect
    barcode -- USB --> barcodereader
    iot <-- MQTT --> mqtt
    %% at the end for styling purposes
    nodered <-- MQTT --> mqtt
  end
  subgraph Data sinks
    umh-other["Other UMH instances"]
    other-systems["Other systems (cloud analytics, cold storage, BI tools, etc.)"]
    kafka <-- Kafka --> kafka-bridge
    kafka-bridge <-- Kafka ----> umh-other
    factoryinsight -- HTTP ----> other-systems
  end
```

Simulators

The United Manufacturing Hub includes several simulators to generate data during development and testing.

Microservices

  • iotsensorsmqtt simulates data in three different MQTT topics, providing a simple way to test and visualize MQTT data streams.
  • packml-simulator simulates a PackML machine that sends and receives MQTT messages.
  • opcua-simulator simulates an OPC-UA server, which can be used to test the connectivity of OPC-UA clients and to generate sample data for them.

Data connectivity microservices

The United Manufacturing Hub includes microservices that extract data from the shop floor and push it into the Unified Namespace. Additionally, you can deploy your own microservices or third-party solutions directly into the Kubernetes cluster using the custom microservice feature. To learn more about third-party solutions, check out the extensive tutorials on our learning hub.

Microservices

  • sensorconnect automatically reads out IO-Link Masters and their connected sensors, and pushes the data to the message broker.
  • barcodereader connects to USB barcode reader devices and pushes the data to the message broker.
  • Node-RED is a versatile tool with many community plugins and allows access to machine PLCs or connections with other systems on the shopfloor. It plays an important role and is explained in the next section.

Node-RED: connectivity & contextualization

Node-RED is not just a tool for connectivity, but also for stream processing and data contextualization. It is often used to extract data from the message broker, reformat the event, and push it back to a different topic, for example one conforming to the UMH datamodel.

In addition to the built-in microservices, third-party contextualization solutions can be deployed similarly to data connectivity microservices. For more information on these solutions, check out the extensive tutorials on our learning hub.

Microservices

  • Node-RED is a programming tool that can wire together hardware devices, APIs, and online services.

Unified Namespace

At the core of the United Manufacturing Hub lies the Unified Namespace, which serves as the central source of truth for all events and messages occurring on your shop floor. The Unified Namespace is implemented using two message brokers: HiveMQ for MQTT and Apache Kafka. MQTT is used to receive data from IoT devices on the shop floor because it excels at handling a large number of unreliable connections. On the other hand, Kafka is used to enable communication between the microservices, leveraging its large-scale data processing capabilities.

The data between both brokers is bridged automatically by the mqtt-kafka-bridge microservice, allowing you to send data to MQTT and process it reliably in Kafka.

If you’re curious about the benefits of this dual approach to MQTT/Kafka, check out our blog article about Tools & Techniques for Scalable Dataprocessing in Industrial IoT.

For more information on the Unified Namespace feature and how to use it, check out the detailed description of the Unified Namespace feature.

Microservices

  • HiveMQ is an MQTT broker used for receiving data from IoT devices on the shop floor. It excels at handling large numbers of unreliable connections.
  • Apache Kafka is a distributed streaming platform used for communication between microservices. It offers large-scale data processing capabilities.
  • mqtt-kafka-bridge is a microservice that bridges messages between MQTT and Kafka, allowing you to send data to MQTT and process them reliably in Kafka.
  • kafka-bridge is a microservice that bridges messages between multiple Kafka instances.
  • console is a web-based user interface for Kafka, which provides a graphical view of topics and messages.

Historian / data storage and visualization

The United Manufacturing Hub stores events according to our datamodel. These events are automatically stored in TimescaleDB, an open-source time-series SQL database. From there, you can access the stored data using Grafana, a visualization and analytics software. With Grafana, you can perform on-the-fly data analysis by executing simple aggregations such as min, max, and avg on tags, or extended KPI calculations such as OEE. These calculations can be selected in the umh-datasource microservice.

For more information on the Historian or Analytics feature and how to use it, check out the detailed description of the Historian feature or the Analytics features.

Microservices

  • kafka-to-postgresql stores data from selected Kafka topics in a PostgreSQL-compatible database such as TimescaleDB.
  • TimescaleDB is an open-source time-series SQL database.
  • factoryinsight provides REST endpoints to fetch data and calculate KPIs.
  • Grafana is a visualization and analytics software.
  • umh-datasource is a Grafana plugin providing access to factoryinsight.
  • redis is an in-memory data structure store, used as a cache.

Custom Microservices

The Helm Chart allows you to add your own microservices or Docker containers to the United Manufacturing Hub. These can be used, for example, to connect with third-party systems or to analyze the data. Additionally, you can deploy any other third-party application as long as it is available as a Helm Chart, Kubernetes resource, or Docker Compose (which can be converted to Kubernetes resources).

1 - Helm Chart

This page describes the Helm Chart of the United Manufacturing Hub and the possible configuration options.

Helm is a package manager for Kubernetes that simplifies the installation, configuration, and deployment of applications and services. A Helm chart contains all the necessary Kubernetes manifests, configuration files, and dependencies required to run a particular application or service. One of the main advantages of Helm is that it allows you to define the configuration of the installed resources in a single YAML file, called values.yaml. Helm provides great documentation on how to achieve this at https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing
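As a sketch, a custom override file for the chart could set a single parameter and be passed to Helm with the -f flag. The key shown is the serialNumber parameter described later on this page; the value is illustrative:

```yaml
# custom-values.yaml -- pass with: helm install ... -f custom-values.yaml
_000_commonConfig:
  serialNumber: my-edge-device-01  # illustrative hostname for this device
```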

The Helm Chart of the United Manufacturing Hub is composed of both custom microservices and third-party applications. If you want a more in-depth view of the architecture of the United Manufacturing Hub, you can read the Architecture overview page.

Helm Chart structure

Custom microservices

The Helm Chart of the United Manufacturing Hub is composed of the following custom microservices:

  • barcodereader: reads the input from a barcode reader and sends it to the MQTT broker for further processing.
  • customMicroservice: a template for deploying any number of custom microservices.
  • factoryinput: provides REST endpoints for MQTT messages.
  • factoryinsight: provides REST endpoints to fetch data and calculate KPIs.
  • grafanaproxy: provides a proxy to the backend services.
  • MQTT Simulator: simulates sensors and sends the data to the MQTT broker for further processing.
  • kafka-bridge: connects Kafka brokers on different Kubernetes clusters.
  • kafkatopostgresql: stores the data from the Kafka broker in a PostgreSQL database.
  • mqtt-kafka-bridge: connects the MQTT broker and the Kafka broker.
  • mqttbridge: connects MQTT brokers on different Kubernetes clusters.
  • opcuasimulator: simulates OPC UA servers and sends the data to the MQTT broker for further processing.
  • packmlmqttsimulator: simulates a PackML state machine and sends the data to the MQTT broker for further processing.
  • sensorconnect: connects to a sensor and sends the data to the MQTT and Kafka brokers for further processing.
  • tulip-connector: exposes internal APIs to the internet, especially tailored for the Tulip platform.

Third-party applications

The Helm Chart of the United Manufacturing Hub is composed of the following third-party applications:

  • Grafana: a visualization and analytics software.
  • HiveMQ: an MQTT broker.
  • Node-RED: a programming tool for wiring together hardware devices, APIs and online services.
  • Redis: an in-memory data structure store, used for cache.
  • RedPanda: a Kafka-compatible distributed event streaming platform.
  • RedPanda Console: a web-based user interface for RedPanda.
  • TimescaleDB: an open-source time-series SQL database.

Configuration options

The Helm Chart of the United Manufacturing Hub can be configured by setting values in the values.yaml file. This file has three main sections that can be used to configure the applications:

  • customers: contains the definition of the customers that will be created during the installation of the Helm Chart. This section is optional, and it’s used only by factoryinsight and factoryinput.
  • _000_commonConfig: contains the basic configuration options to customize the United Manufacturing Hub, and it’s divided into sections that group applications with similar scope, like the ones that compose the infrastructure or the ones responsible for data processing. This is the section that should be mostly used to configure the microservices.
  • _001_customMicroservices: used to define the configuration of custom microservices that are not included in the Helm Chart.

After those three sections come the specific sections for each microservice, which contain their advanced configuration. This is the so-called Danger Zone, because the values in those sections should not be changed unless you absolutely know what you are doing.

When a parameter contains . (dot) characters, it means that it is a nested parameter. For example, in the tls.factoryinput.cert parameter the cert parameter is nested inside the tls.factoryinput section, and the factoryinput section is nested inside the tls section.
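For example, the tls.factoryinput.cert parameter from the MQTT infrastructure section translates to the following nesting in values.yaml (the empty value is illustrative):

```yaml
tls:
  factoryinput:
    cert: ""  # value of the tls.factoryinput.cert parameter
```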

Customers

The customers section contains the definition of the customers that will be created during the installation of the Helm Chart. It’s a simple dictionary where the key is the name of the customer, and the value is the password.

For example, the following snippet creates two customers:

customers:
  customer1: password1
  customer2: password2

Common configuration options

The _000_commonConfig contains the basic configuration options to customize the United Manufacturing Hub, and it’s divided into sections that group applications with similar scope.

The following table lists the configuration options that can be set in the _000_commonConfig section:

_000_commonConfig section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| datainput | The configuration of the microservices used to input data. | object | See below | See below |
| dataprocessing | The configuration of the microservices used to process data. | object | See below | See below |
| datasources | The configuration of the microservices used to acquire data. | object | See below | See below |
| datastorage | The configuration of the microservices used to store data. | object | See below | See below |
| debug | The configuration for the debug mode. | object | See below | See below |
| infrastructure | The configuration of the microservices used to provide infrastructure services. | object | See below | See below |
| kafkaBridge | The configuration for the Kafka bridge. | object | See below | See below |
| kafkaStateDetector | The configuration for the Kafka state detector. | object | See below | See below |
| metrics.enabled | Whether to enable the anonymous metrics service or not. | bool | true, false | true |
| mqttBridge | The configuration for the MQTT bridge. | object | See below | See below |
| serialNumber | The hostname of the device. Used by some microservices to identify the device. | string | Any | default |
| tulipconnector | The configuration for the Tulip connector. | object | See below | See below |
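Assembled into values.yaml, this section could be sketched as follows. The empty objects are placeholders for the subsections described below; the comments are illustrative:

```yaml
_000_commonConfig:
  datasources: {}        # data acquisition microservices
  dataprocessing: {}     # stream processing (Node-RED)
  infrastructure: {}     # MQTT and Kafka brokers
  datastorage: {}        # data storage microservices
  datainput: {}          # data input microservices
  kafkaBridge: {}
  mqttBridge: {}
  kafkaStateDetector: {}
  tulipconnector: {}
  debug: {}
  metrics:
    enabled: true        # anonymous metrics service
  serialNumber: default  # hostname of the device
```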

Data sources

The _000_commonConfig.datasources section contains the configuration of the microservices used to acquire data, like the ones that connect to a sensor or simulate data.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources section:

datasources section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| barcodereader | The configuration of the barcodereader microservice. | object | See below | See below |
| iotsensorsmqtt | The configuration of the IoTSensorsMQTT microservice. | object | See below | See below |
| opcuasimulator | The configuration of the opcuasimulator microservice. | object | See below | See below |
| packmlmqttsimulator | The configuration of the packmlsimulator microservice. | object | See below | See below |
| sensorconnect | The configuration of the sensorconnect microservice. | object | See below | See below |

Barcode reader

The _000_commonConfig.datasources.barcodereader section contains the configuration of the barcodereader microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.barcodereader section:

barcodereader section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the barcodereader microservice is enabled. | bool | true, false | false |
| USBDeviceName | The name of the USB device to use. | string | Any | Datalogic ADC, Inc. Handheld Barcode Scanner |
| USBDevicePath | The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty. | string | Valid Unix device path | "" |
| customerID | The customer ID to use in the topic structure. | string | Any | raw |
| location | The location to use in the topic structure. | string | Any | barcodereader |
| machineID | The asset ID to use in the topic structure. | string | Any | barcodereader |
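Assembled into values.yaml, the defaults from this section look roughly like this (sketch; the device name is the listed default):

```yaml
_000_commonConfig:
  datasources:
    barcodereader:
      enabled: false
      USBDeviceName: Datalogic ADC, Inc. Handheld Barcode Scanner
      USBDevicePath: ""  # empty, or a wildcard such as /dev/input/event*
      customerID: raw
      location: barcodereader
      machineID: barcodereader
```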
IoT Sensors MQTT

The _000_commonConfig.datasources.iotsensorsmqtt section contains the configuration of the IoTSensorsMQTT microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.iotsensorsmqtt section:

iotsensorsmqtt section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the IoTSensorsMQTT microservice is enabled. | bool | true, false | true |

OPC UA Simulator

The _000_commonConfig.datasources.opcuasimulator section contains the configuration of the opcuasimulator microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.opcuasimulator section:

opcuasimulator section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the opcuasimulator microservice is enabled. | bool | true, false | true |

PackML MQTT Simulator

The _000_commonConfig.datasources.packmlmqttsimulator section contains the configuration of the packmlsimulator microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.packmlmqttsimulator section:

packmlmqttsimulator section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the packmlsimulator microservice is enabled. | bool | true, false | true |

Sensor connect

The _000_commonConfig.datasources.sensorconnect section contains the configuration of the sensorconnect microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.datasources.sensorconnect section:

sensorconnect section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the sensorconnect microservice is enabled. | bool | true, false | false |
| iprange | The IP range of the sensors in CIDR notation. | string | Valid IP range | 192.168.10.1/24 |
| enableKafka | Whether the sensorconnect microservice should use Kafka. | bool | true, false | true |
| enableMQTT | Whether the sensorconnect microservice should use MQTT. | bool | true, false | false |
| testMode | Whether to enable test mode. Only useful for development. | bool | true, false | false |
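Assembled into values.yaml, this section with its defaults could be sketched as:

```yaml
_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: false
      iprange: 192.168.10.1/24  # CIDR range to scan for IO-Link Masters
      enableKafka: true
      enableMQTT: false
      testMode: false           # development only
```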

Data processing

The _000_commonConfig.dataprocessing section contains the configuration of the microservices used to process data, such as the nodered microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.dataprocessing section:

dataprocessing section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| nodered | The configuration of the nodered microservice. | object | See below | See below |

Node-RED

The _000_commonConfig.dataprocessing.nodered section contains the configuration of the nodered microservice.

The following table lists the configuration options that can be set in the _000_commonConfig.dataprocessing.nodered section:

nodered section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the nodered microservice is enabled. | bool | true, false | true |
| defaultFlows | Whether the default flows should be used. | bool | true, false | false |
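In values.yaml, this section with its defaults could be sketched as:

```yaml
_000_commonConfig:
  dataprocessing:
    nodered:
      enabled: true
      defaultFlows: false  # do not load the default flows
```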

Infrastructure

The _000_commonConfig.infrastructure section contains the configuration of the microservices responsible for connecting all the other microservices, such as the MQTT broker and the Kafka broker.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure section:

infrastructure section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| mqtt | The configuration of the MQTT broker. | object | See below | See below |
| kafka | The configuration of the Kafka broker. | object | See below | See below |

MQTT

The _000_commonConfig.infrastructure.mqtt section contains the configuration of the MQTT broker.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure.mqtt section:

mqtt section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the MQTT broker is enabled | bool | true, false | true |
| adminUser.enabled | Whether the admin user is enabled | bool | true, false | false |
| adminUser.name | The name of the admin user | string | Any UTF-8 string | admin-user |
| adminUser.encryptedPassword | The encrypted password of the admin user | string | Any | "" |
| tls.useTLS | Whether TLS should be used | bool | true, false | true |
| tls.insecureSkipVerify | Whether the SSL certificate validation should be skipped | bool | true, false | true |
| tls.keystoreBase64 | The base64 encoded keystore | string | Any | "" |
| tls.keystorePassword | The password of the keystore | string | Any | "" |
| tls.truststoreBase64 | The base64 encoded truststore | string | Any | "" |
| tls.truststorePassword | The password of the truststore | string | Any | "" |
| tls.caCert | The CA certificate | string | Any | "" |
| tls.factoryinput.cert | The certificate used for the factoryinput microservice | string | Any | "" |
| tls.factoryinput.key | The key used for the factoryinput microservice | string | Any | "" |
| tls.mqtt_kafka_bridge.cert | The certificate used for the mqttkafkabridge | string | Any | "" |
| tls.mqtt_kafka_bridge.key | The key used for the mqttkafkabridge | string | Any | "" |
| tls.mqtt_bridge.local_cert | The certificate used for the local mqttbridge broker | string | Any | "" |
| tls.mqtt_bridge.local_key | The key used for the local mqttbridge broker | string | Any | "" |
| tls.mqtt_bridge.remote_cert | The certificate used for the remote mqttbridge broker | string | Any | "" |
| tls.mqtt_bridge.remote_key | The key used for the remote mqttbridge broker | string | Any | "" |
| tls.sensorconnect.cert | The certificate used for the sensorconnect microservice | string | Any | "" |
| tls.sensorconnect.key | The key used for the sensorconnect microservice | string | Any | "" |
| tls.iotsensorsmqtt.cert | The certificate used for the iotsensorsmqtt microservice | string | Any | "" |
| tls.iotsensorsmqtt.key | The key used for the iotsensorsmqtt microservice | string | Any | "" |
| tls.packmlsimulator.cert | The certificate used for the packmlsimulator microservice | string | Any | "" |
| tls.packmlsimulator.key | The key used for the packmlsimulator microservice | string | Any | "" |
| tls.nodered.cert | The certificate used for the nodered microservice | string | Any | "" |
| tls.nodered.key | The key used for the nodered microservice | string | Any | "" |
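A subset of these parameters, assembled into values.yaml with their defaults, could be sketched as:

```yaml
_000_commonConfig:
  infrastructure:
    mqtt:
      enabled: true
      adminUser:
        enabled: false
        name: admin-user
        encryptedPassword: ""
      tls:
        useTLS: true
        insecureSkipVerify: true  # skip SSL certificate validation
        caCert: ""
```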
Kafka

The _000_commonConfig.infrastructure.kafka section contains the configuration of the Kafka broker and related services, like mqttkafkabridge, kafkatopostgresql and the Kafka console.

The following table lists the configuration options that can be set in the _000_commonConfig.infrastructure.kafka section:

kafka section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether the Kafka broker and related services are enabled | bool | true, false | true |
| useSSL | Whether SSL should be used | bool | true, false | true |
| defaultTopics | The default topics that should be created | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic |
| tls.CACert | The CA certificate | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafka.cert | The certificate used for the kafka broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafka.privkey | The private key of the certificate for the Kafka broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.barcodereader.sslKeyPassword | The encrypted password of the SSL key for the barcodereader microservice. If empty, no password is used | string | Any | "" |
| tls.barcodereader.sslKeyPem | The private key for the SSL certificate of the barcodereader microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.barcodereader.sslCertificatePem | The private SSL certificate for the barcodereader microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordLocal | The encrypted password of the SSL key for the local kafkabridge broker. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemLocal | The private key for the SSL certificate of the local kafkabridge broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkabridge.sslCertificatePemLocal | The private SSL certificate for the local kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslCACertRemote | The CA certificate for the remote kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslCertificatePemRemote | The private SSL certificate for the remote kafkabridge broker | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkabridge.sslKeyPasswordRemote | The encrypted password of the SSL key for the remote kafkabridge broker. If empty, no password is used | string | Any | "" |
| tls.kafkabridge.sslKeyPemRemote | The private key for the SSL certificate of the remote kafkabridge broker | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkadebug.sslKeyPassword | The encrypted password of the SSL key for the kafkadebug microservice. If empty, no password is used | string | Any | "" |
| tls.kafkadebug.sslKeyPem | The private key for the SSL certificate of the kafkadebug microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkadebug.sslCertificatePem | The private SSL certificate for the kafkadebug microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkainit.sslKeyPassword | The encrypted password of the SSL key for the kafkainit microservice. If empty, no password is used | string | Any | "" |
| tls.kafkainit.sslKeyPem | The private key for the SSL certificate of the kafkainit microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkainit.sslCertificatePem | The private SSL certificate for the kafkainit microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkastatedetector.sslKeyPassword | The encrypted password of the SSL key for the kafkastatedetector microservice. If empty, no password is used | string | Any | "" |
| tls.kafkastatedetector.sslKeyPem | The private key for the SSL certificate of the kafkastatedetector microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkastatedetector.sslCertificatePem | The private SSL certificate for the kafkastatedetector microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kafkatopostgresql.sslKeyPassword | The encrypted password of the SSL key for the kafkatopostgresql microservice. If empty, no password is used | string | Any | "" |
| tls.kafkatopostgresql.sslKeyPem | The private key for the SSL certificate of the kafkatopostgresql microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kafkatopostgresql.sslCertificatePem | The private SSL certificate for the kafkatopostgresql microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.kowl.sslKeyPassword | The encrypted password of the SSL key for the kowl microservice. If empty, no password is used | string | Any | "" |
| tls.kowl.sslKeyPem | The private key for the SSL certificate of the kowl microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.kowl.sslCertificatePem | The private SSL certificate for the kowl microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.mqttkafkabridge.sslKeyPassword | The encrypted password of the SSL key for the mqttkafkabridge microservice. If empty, no password is used | string | Any | "" |
| tls.mqttkafkabridge.sslKeyPem | The private key for the SSL certificate of the mqttkafkabridge microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.mqttkafkabridge.sslCertificatePem | The private SSL certificate for the mqttkafkabridge microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.nodered.sslKeyPassword | The encrypted password of the SSL key for the nodered microservice. If empty, no password is used | string | Any | "" |
| tls.nodered.sslKeyPem | The private key for the SSL certificate of the nodered microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.nodered.sslCertificatePem | The private SSL certificate for the nodered microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
| tls.sensorconnect.sslKeyPassword | The encrypted password of the SSL key for the sensorconnect microservice. If empty, no password is used | string | Any | "" |
| tls.sensorconnect.sslKeyPem | The private key for the SSL certificate of the sensorconnect microservice | string | Any | -----BEGIN PRIVATE KEY----- … -----END PRIVATE KEY----- |
| tls.sensorconnect.sslCertificatePem | The private SSL certificate for the sensorconnect microservice | string | Any | -----BEGIN CERTIFICATE----- … -----END CERTIFICATE----- |
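The non-TLS parameters of this section, assembled into values.yaml with their defaults, could be sketched as:

```yaml
_000_commonConfig:
  infrastructure:
    kafka:
      enabled: true
      useSSL: true
      # semicolon separated list of topics to create at startup
      defaultTopics: ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic
```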

Data storage

The _000_commonConfig.datastorage section contains the configuration of the microservices used to store data. Specifically, it controls the following microservices:

If you want to specifically configure one of these microservices, you can do so in their respective sections in the Danger Zone.

The following table lists the configurable parameters of the _000_commonConfig.datastorage section.

datastorage section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the data storage microservices | bool | true, false | true |
| db_password | The password for the database. Used by all the microservices that need to connect to the database | string | Any | changeme |
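In values.yaml, this section with its defaults could be sketched as:

```yaml
_000_commonConfig:
  datastorage:
    enabled: true
    db_password: changeme  # used by all microservices that connect to the database
```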

Data input

The _000_commonConfig.datainput section contains the configuration of the microservices used to input data. Specifically, it controls the following microservices:

If you want to specifically configure one of these microservices, you can do so in their respective sections in the danger zone.

The following table lists the configurable parameters of the _000_commonConfig.datainput section.

datainput section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the data input microservices | bool | true, false | false |

MQTT Bridge

The _000_commonConfig.mqttBridge section contains the configuration of the mqtt-bridge microservice, responsible for bridging MQTT brokers in different Kubernetes clusters.

The following table lists the configurable parameters of the _000_commonConfig.mqttBridge section.

mqttBridge section parameters
| Parameter | Description | Type | Allowed values | Default |
| --- | --- | --- | --- | --- |
| enabled | Whether to enable the mqtt-bridge microservice | bool | true, false | false |
| localSubTopic | The topic that the local MQTT broker subscribes to | string | Any valid MQTT topic | ia/factoryinsight |
| localPubTopic | The topic that the local MQTT broker publishes to | string | Any valid MQTT topic | ia/factoryinsight |
| oneWay | Whether to enable one-way communication, from local to remote | bool | true, false | true |
| remoteBrokerUrl | The URL of the remote MQTT broker | string | Any valid MQTT broker URL | ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883 |
| remoteBrokerSSLEnables | Whether to enable SSL for the remote MQTT broker | bool | true, false | true |
| remoteSubTopic | The topic that the remote MQTT broker subscribes to | string | Any valid MQTT topic | ia |
| remotePubTopic | The topic that the remote MQTT broker publishes to | string | Any valid MQTT topic | ia/factoryinsight |
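In values.yaml, this section with its defaults could be sketched as:

```yaml
_000_commonConfig:
  mqttBridge:
    enabled: false
    localSubTopic: ia/factoryinsight
    localPubTopic: ia/factoryinsight
    oneWay: true  # only forward messages from local to remote
    remoteBrokerUrl: ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883
    remoteBrokerSSLEnables: true
    remoteSubTopic: ia
    remotePubTopic: ia/factoryinsight
```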

Kafka Bridge

The _000_commonConfig.kafkaBridge section contains the configuration of the kafka-bridge microservice, responsible for bridging Kafka brokers in different Kubernetes clusters.

The following table lists the configurable parameters of the _000_commonConfig.kafkaBridge section.

kafkaBridge section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the kafka-bridge microservice | bool | true, false | false |
| remotebootstrapServer | The URL of the remote Kafka broker | string | Any | "" |
| topicCreationLocalList | The list of topics to create locally | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic |
| topicCreationRemoteList | The list of topics to create remotely | string | Semicolon separated list of valid Kafka topics | ia.test.test.test.processValue;ia.test.test.test.count;umh.v1.kafka.newTopic |
| topicmap | The list of topic maps of topics to forward | object | See below | empty |
Topic Map

The topicmap parameter is a list of topic maps, each of which contains the following parameters:

topicmap section parameters

| Parameter | Description | Type | Allowed values |
|---|---|---|---|
| bidirectional | Whether to enable bidirectional communication for that topic | bool | true, false |
| name | The name of the map | string | HighIntegrity, HighThroughput |
| send_direction | The direction of the communication for that topic | string | to_remote, to_local |
| topic | The topic to forward. A regex can be used to match multiple topics. | string | Any valid Kafka topic |

For more information about the topic maps, see the kafka-bridge documentation.
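A topic map entry can be sketched in values.yaml as follows. The remote bootstrap server address and the topic regex are illustrative placeholders, not chart defaults.

```yaml
# Sketch of a kafka-bridge configuration with one topic map entry.
_000_commonConfig:
  kafkaBridge:
    enabled: true
    remotebootstrapServer: "kafka.remote-site.example:9092"  # illustrative
    topicmap:
      - name: HighIntegrity
        bidirectional: false
        send_direction: to_remote
        topic: ^ia\..+\.count$  # regex matching multiple count topics
```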

Kafka State Detector

The _000_commonConfig.kafkaStateDetector section contains the configuration of the kafka-state-detector microservice, responsible for detecting the state of the Kafka broker.

The following table lists the configurable parameters of the _000_commonConfig.kafkaStateDetector section.

kafkastatedetector section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the kafka-state-detector microservice | bool | true, false | false |

Debug

The _000_commonConfig.debug section contains the debug configuration for all the microservices. These values should not be enabled in production.

The following table lists the configurable parameters of the _000_commonConfig.debug section.

debug section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enableFGTrace | Whether to enable the foreground trace | bool | true, false | false |

Tulip Connector

The _000_commonConfig.tulipconnector section contains the configuration of the tulip-connector microservice, responsible for connecting a Tulip instance with the United Manufacturing Hub.

The following table lists the configurable parameters of the _000_commonConfig.tulipconnector section.

tulipconnector section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the tulip-connector microservice | bool | true, false | false |
| domain | The domain name pointing to your cluster | string | Any valid domain name | tulip-connector.changme.com |
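A minimal sketch of enabling the connector, with an illustrative domain (replace it with a domain that actually resolves to your cluster):

```yaml
# Sketch: enable tulip-connector and point it at your cluster's domain.
_000_commonConfig:
  tulipconnector:
    enabled: true
    domain: tulip-connector.example.com  # illustrative, not the chart default
```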

Custom microservices configuration

The _001_customConfig section contains a list of custom microservices definitions. It can be used to deploy any application of your choice, which can be configured using the following parameters:

Custom microservices configuration parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| name | The name of the microservice | string | Any | example |
| image | The image and tag of the microservice | string | Any | hello-world:latest |
| enabled | Whether to enable the microservice | bool | true, false | false |
| imagePullPolicy | The image pull policy of the microservice | string | Always, IfNotPresent, Never | Always |
| env | The list of environment variables to set for the microservice | object | Any | [{name: LOGGING_LEVEL, value: PRODUCTION}] |
| port | The internal port of the microservice to target | int | Any | 80 |
| externalPort | The host port on which the internal port is exposed | int | Any | 8080 |
| probePort | The port to use for the liveness and startup probes | int | Any | 9091 |
| startupProbe | The interval in seconds for the startup probe | int | Any | 200 |
| livenessProbe | The interval in seconds for the liveness probe | int | Any | 500 |
| statefulEnabled | Create a PersistentVolumeClaim for the microservice and mount it in /data | bool | true, false | false |
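One entry of the custom microservices list can be sketched as below. The exact key nesting under _001_customConfig follows the chart's values.yaml and should be verified there; this shows only the shape of a single list item using the defaults above.

```yaml
# Sketch of one custom microservice definition (a list item).
- name: example
  image: hello-world:latest
  enabled: true
  imagePullPolicy: Always
  env:
    - name: LOGGING_LEVEL
      value: PRODUCTION
  port: 80          # internal container port
  externalPort: 8080  # host port exposing the internal port
  probePort: 9091
  startupProbe: 200   # seconds
  livenessProbe: 500  # seconds
  statefulEnabled: false  # true creates a PVC mounted at /data
```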

Danger zone

The next sections contain a more advanced configuration of the microservices. Usually, changing the values of the previous sections is enough to run the United Manufacturing Hub. However, you may need to adjust some of the values below if you want to change the default behavior of the microservices.

Everything below this point should not be changed, unless you know what you are doing.
Danger zone advanced configuration

| Section | Description |
|---|---|
| barcodereader | Configuration for barcodereader |
| factoryinput | Configuration for factoryinput |
| factoryinsight | Configuration for factoryinsight |
| grafana | Configuration for Grafana |
| grafanaproxy | Configuration for the Grafana proxy |
| iotsensorsmqtt | Configuration for the IoTSensorsMQTT simulator |
| kafkabridge | Configuration for kafka-bridge |
| kafkastatedetector | Configuration for kafka-state-detector |
| kafkatopostgresql | Configuration for kafka-to-postgresql |
| metrics | Configuration for the metrics |
| mqtt_broker | Configuration for the MQTT broker |
| mqttbridge | Configuration for mqtt-bridge |
| mqttkafkabridge | Configuration for mqtt-kafka-bridge |
| nodered | Configuration for Node-RED |
| opcuasimulator | Configuration for the OPC UA simulator |
| packmlmqttsimulator | Configuration for the PackML MQTT simulator |
| redis | Configuration for Redis |
| redpanda | Configuration for the Kafka broker |
| sensorconnect | Configuration for sensorconnect |
| serviceAccount | Configuration for the service account used by the microservices |
| timescaledb-single | Configuration for TimescaleDB |
| tulipconnector | Configuration for tulip-connector |

Sections

barcodereader

The barcodereader section contains the advanced configuration of the barcodereader microservice.

barcodereader advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| annotations | Annotations to add to the Kubernetes resources | object | Any | {} |
| enabled | Whether to enable the barcodereader microservice | bool | true, false | false |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the barcodereader microservice | string | Any | ghcr.io/united-manufacturing-hub/barcodereader |
| image.tag | The tag of the barcodereader microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| resources.limits.cpu | The CPU limit | string | Any | 10m |
| resources.limits.memory | The memory limit | string | Any | 60Mi |
| resources.requests.cpu | The CPU request | string | Any | 2m |
| resources.requests.memory | The memory request | string | Any | 30Mi |
| scanOnly | Whether to only scan without sending the data to the Kafka broker | bool | true, false | false |

factoryinput

The factoryinput section contains the advanced configuration of the factoryinput microservice.

factoryinput advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the factoryinput microservice | bool | true, false | false |
| env | The environment variables | object | Any | See env section |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the factoryinput microservice | string | Any | ghcr.io/united-manufacturing-hub/factoryinput |
| image.tag | The tag of the factoryinput microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| pdb.enabled | Whether to enable a PodDisruptionBudget | bool | true, false | true |
| pdb.minAvailable | The minimum number of available pods | int | Any | 1 |
| replicas | The number of Pod replicas | int | Any | 1 |
| service.annotations | Annotations to add to the factoryinput Service | object | Any | {} |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| user | The user of factoryinput | string | Any | factoryinsight |
env

The env section contains the configuration of the environment variables to add to the Pod.

factoryinput env parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| loggingLevel | The logging level of the factoryinput microservice | string | PRODUCTION, DEVELOPMENT | PRODUCTION |
| mqttQueueHandler | Number of queue workers to spawn | int | 0-65535 | 10 |
| version | The version of the API used. Each version also enables all the previous ones | int | Any | 2 |

factoryinsight

The factoryinsight section contains the advanced configuration of the factoryinsight microservice.

factoryinsight advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| db_database | The database name | string | Any | factoryinsight |
| db_host | The host of the database | string | Any | [i18n] resource_service_database |
| db_user | The database user | string | Any | factoryinsight |
| enabled | Whether to enable the factoryinsight microservice | bool | true, false | false |
| hpa.enabled | Whether to enable a HorizontalPodAutoscaler | bool | true, false | false |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the factoryinsight microservice | string | Any | ghcr.io/united-manufacturing-hub/factoryinsight |
| image.tag | The tag of the factoryinsight microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| ingress.enabled | Whether to enable an Ingress | bool | true, false | false |
| ingress.publicHostSecretName | The secret name of the public host of the Ingress | string | Any | "" |
| ingress.publicHost | The public host of the Ingress | string | Any | "" |
| insecure_no_auth | Whether to enable the insecure_no_auth mode | bool | true, false | false |
| pdb.enabled | Whether to enable a PodDisruptionBudget | bool | true, false | false |
| redis.URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379 |
| replicas | The number of Pod replicas | int | Any | 2 |
| resources.limits.cpu | The CPU limit | string | Any | 200m |
| resources.limits.memory | The memory limit | string | Any | 200Mi |
| resources.requests.cpu | The CPU request | string | Any | 50m |
| resources.requests.memory | The memory request | string | Any | 50Mi |
| service.annotations | Annotations to add to the factoryinsight Service | object | Any | {} |
| user | The user of factoryinsight | string | Any | factoryinsight |
| version | The version of the API used. Each version also enables all the previous ones | int | Any | 2 |
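For example, exposing factoryinsight through an Ingress could be sketched like this; the host and secret name are illustrative placeholders, and the nesting should be checked against the chart's values.yaml:

```yaml
# Sketch: run two factoryinsight replicas behind an Ingress.
factoryinsight:
  enabled: true
  replicas: 2
  ingress:
    enabled: true
    publicHost: factoryinsight.example.com   # illustrative host
    publicHostSecretName: factoryinsight-tls # illustrative TLS secret name
```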

grafana

The grafana section contains the advanced configuration of the grafana microservice. This is based on the official Grafana Helm chart. For more information about the parameters, please refer to the official documentation.

Only the values that differ from the chart's defaults are listed here.

grafana advanced section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| admin.existingSecret | The name of the secret containing the admin password | string | Any | grafana-secret |
| admin.passwordKey | The key of the admin password in the secret | string | Any | adminpassword |
| admin.userKey | The key of the admin user in the secret | string | Any | adminuser |
| datasources | The datasources configuration | object | Any | See datasources section |
| envValueFrom | Environment variables to add to the Pod, from a secret or a configmap | object | Any | See envValueFrom section |
| env | Environment variables to add to the Pod | object | Any | See env section |
| extraInitContainers | Extra init containers to add to the Pod | object | Any | See extraInitContainers section |
| grafana.ini | The grafana.ini configuration | object | Any | See grafana.ini section |
| initChownData.enabled | Whether to enable the initChownData job, to reset data ownership at startup | bool | true, false | true |
| persistence.enabled | Whether to enable persistence | bool | true, false | true |
| persistence.size | The size of the persistent volume | string | Any | 5Gi |
| podDisruptionBudget.minAvailable | The minimum number of available pods | int | Any | 1 |
| service.port | The port of the Service | int | Any | 8080 |
| service.type | The type of Service to expose | string | ClusterIP, LoadBalancer | LoadBalancer |
| serviceAccount.create | Whether to create a ServiceAccount | bool | true, false | false |
| testFramework.enabled | Whether to enable the test framework | bool | true, false | false |
datasources

The datasources section contains the configuration of the datasources provisioning. See the Grafana documentation for more information.

datasources.yaml:
  apiVersion: 1
  datasources:
    - name: umh-v2-datasource
      # <string, required> datasource type. Required
      type: umh-v2-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerID: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        baseURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: false
      editable: false
    # <string, required> name of the datasource. Required
    - name: umh-datasource
      # <string, required> datasource type. Required
      type: umh-datasource
      # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
      access: proxy
      # <int> org id. will default to orgId 1 if not specified
      orgId: 1
      url: "http://united-manufacturing-hub-factoryinsight-service/"
      jsonData:
        customerId: $FACTORYINSIGHT_CUSTOMERID
        apiKey: $FACTORYINSIGHT_PASSWORD
        serverURL: "http://united-manufacturing-hub-factoryinsight-service/"
        apiKeyConfigured: true
      version: 1
      # <bool> allow users to edit datasources from the UI.
      isDefault: true
      editable: false
envValueFrom

The envValueFrom section contains the configuration of the environment variables to add to the Pod, from a secret or a configmap.

grafana envValueFrom section parameters

| Parameter | Description | Value from | Name | Key |
|---|---|---|---|---|
| FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | apiKey |
| FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | secretKeyRef | factoryinsight-secret | baseURL |
| FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | customerID |
| FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | secretKeyRef | factoryinsight-secret | password |
env

The env section contains the configuration of the environment variables to add to the Pod.

grafana env section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource,umh-factoryinput-panel,umh-v2-datasource |
extraInitContainers

The extraInitContainers section contains the configuration of the extra init containers to add to the Pod.

The init-plugins container is used to install the default plugins shipped with the UMH version of Grafana without the need to have an internet connection. See the documentation for a list of the plugins.

- image: unitedmanufacturinghub/grafana-umh:1.2.0
  name: init-plugins
  imagePullPolicy: IfNotPresent
  command: ['sh', '-c', 'cp -r /plugins /var/lib/grafana/']
  volumeMounts:
    - name: storage
      mountPath: /var/lib/grafana
grafana.ini

The grafana.ini section contains the configuration of the grafana.ini file. See the Grafana documentation for more information.

paths:
  data: /var/lib/grafana/data
  logs: /var/log/grafana
  plugins: /var/lib/grafana/plugins
  provisioning: /etc/grafana/provisioning
database:
  host: united-manufacturing-hub
  user: "grafana"
  name: "grafana"
  password: "changeme"
  ssl_mode: require
  type: postgres

grafanaproxy

The grafanaproxy section contains the configuration of the Grafana proxy microservice.

grafanaproxy section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the Grafana proxy microservice | bool | true, false | true |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the grafana-proxy microservice | string | Any | ghcr.io/united-manufacturing-hub/barcodereader |
| image.tag | The tag of the grafana-proxy microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| replicas | The number of Pod replicas | int | Any | 1 |
| service.annotations | Annotations to add to the service | object | Any | {} |
| service.port | The port of the service | int | Any | 2096 |
| service.type | The type of the service | string | ClusterIP, LoadBalancer | LoadBalancer |
| service.targetPort | The target port of the service | int | Any | 80 |
| service.protocol | The protocol of the service | string | TCP, UDP | TCP |
| service.name | The name of the port of the service | string | Any | service |
| resources.limits.cpu | The CPU limit | string | Any | 300m |
| resources.requests.cpu | The CPU request | string | Any | 100m |

iotsensorsmqtt

The iotsensorsmqtt section contains the configuration of the IoT Sensors MQTT microservice.

iotsensorsmqtt section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| image | The image of the iotsensorsmqtt microservice | string | Any | amineamaach/sensors-mqtt |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| resources.limits.cpu | The CPU limit | string | Any | 30m |
| resources.limits.memory | The memory limit | string | Any | 50Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| tag | The tag of the iotsensorsmqtt microservice. Defaults to latest if not set | string | Any | v1.0.0 |

kafkabridge

The kafkabridge section contains the configuration of the Kafka bridge.

kafkabridge section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the kafka-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-bridge |
| image.tag | The tag of the kafka-bridge microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| initContainer.pullPolicy | The image pull policy of the init container | string | Always, IfNotPresent, Never | IfNotPresent |
| initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init |
| initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any | 0.9.14 |

kafkastatedetector

The kafkastatedetector section contains the configuration of the Kafka state detector.

kafkastatedetector section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| activityEnabled | Controls whether to check the activity of the Kafka broker | bool | true, false | true |
| anomalyEnabled | Controls whether to check for anomalies in the Kafka broker | bool | true, false | true |
| enabled | Whether to enable the Kafka state detector | bool | true, false | true |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the kafkastatedetector microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-state-detector |
| image.tag | The tag of the kafkastatedetector microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |

kafkatopostgresql

The kafkatopostgresql section contains the configuration of the Kafka to PostgreSQL microservice.

kafkatopostgresql section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the Kafka to PostgreSQL microservice | bool | true, false | true |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the kafkatopostgresql microservice | string | Any | ghcr.io/united-manufacturing-hub/kafka-to-postgresql |
| image.tag | The tag of the kafkatopostgresql microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| initContainer.pullPolicy | The image pull policy of the init container | string | Always, IfNotPresent, Never | IfNotPresent |
| initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init |
| initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any | 0.9.14 |
| replicas | The number of Pod replicas | int | Any | 1 |
| resources.limits.cpu | The CPU limit | string | Any | 200m |
| resources.limits.memory | The memory limit | string | Any | 300Mi |
| resources.requests.cpu | The CPU request | string | Any | 50m |
| resources.requests.memory | The memory request | string | Any | 150Mi |

metrics

The metrics section contains the configuration of the metrics CronJob that sends anonymous usage data.

metrics section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the metrics microservice | string | Any | ghcr.io/united-manufacturing-hub/metrics |
| cronJob.schedule | The schedule of the CronJob | string | Any | 0 */4 * * * (every 4 hours) |

mqtt_broker

The mqtt_broker section contains the configuration of the MQTT broker.

mqtt_broker section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| image.repository | The image of the mqtt_broker microservice | string | Any | hivemq/hivemq-ce |
| image.tag | The tag of the mqtt_broker microservice. Defaults to 2022.1 if not set | string | Any | 2022.1 |
| initContainer | The init container configuration | object | Any | See initContainer section |
| persistence.extension.size | The size of the persistence volume for the extensions | string | Any | 100Mi |
| persistence.storage.size | The size of the persistence volume for the storage | string | Any | 2Gi |
| rbacEnabled | Whether to enable RBAC | bool | true, false | false |
| resources.limits.cpu | The CPU limit | string | Any | 700m |
| resources.limits.memory | The memory limit | string | Any | 1700Mi |
| resources.requests.cpu | The CPU request | string | Any | 300m |
| resources.requests.memory | The memory request | string | Any | 1000Mi |
| service.mqtt.enabled | Whether to enable the MQTT service | bool | true, false | true |
| service.mqtt.port | The port of the MQTT service | int | Any | 1883 |
| service.mqtts.cipher_suites | The ciphersuites to enable | string array | Any | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA |
| service.mqtts.enabled | Whether to enable the MQTT over TLS service | bool | true, false | true |
| service.mqtts.port | The port of the MQTT over TLS service | int | Any | 8883 |
| service.mqtts.tls_versions | The TLS versions to enable | string array | Any | TLSv1.3, TLSv1.2 |
| service.ws.enabled | Whether to enable the WebSocket service | bool | true, false | false |
| service.ws.port | The port of the WebSocket service | int | Any | 8080 |
| service.wss.cipher_suites | The ciphersuites to enable | string array | Any | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA |
| service.wss.enabled | Whether to enable the WebSocket over TLS service | bool | true, false | false |
| service.wss.port | The port of the WebSocket over TLS service | int | Any | 8443 |
| service.wss.tls_versions | The TLS versions to enable | string array | Any | TLSv1.3, TLSv1.2 |
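As an example, enabling the WebSocket listeners (disabled by default) could be sketched as follows; verify the nesting against the chart's values.yaml before applying:

```yaml
# Sketch: enable plain and TLS WebSocket listeners for the MQTT broker.
mqtt_broker:
  service:
    ws:
      enabled: true   # WebSocket on port 8080
      port: 8080
    wss:
      enabled: true   # WebSocket over TLS on port 8443
      port: 8443
      tls_versions:
        - TLSv1.3
        - TLSv1.2
```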
initContainer

The initContainer section contains the configuration for the init containers. By default, the hivemqextensioninit container is used to initialize the HiveMQ extensions.

initContainer:
  hivemqextensioninit:
    image:
      repository: unitedmanufacturinghub/hivemq-init
      tag: 2.0.0
      pullPolicy: IfNotPresent

mqttbridge

The mqttbridge section contains the configuration of the MQTT bridge.

mqttbridge section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| image | The image of the mqtt-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/mqtt-bridge |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| resources.limits.cpu | The CPU limit | string | Any | 200m |
| resources.limits.memory | The memory limit | string | Any | 100Mi |
| resources.requests.cpu | The CPU request | string | Any | 100m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| tag | The tag of the mqtt-bridge microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |

mqttkafkabridge

The mqttkafkabridge section contains the configuration of the MQTT-Kafka bridge.

mqttkafkabridge section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| enabled | Whether to enable the MQTT-Kafka bridge | bool | true, false | false |
| image.pullPolicy | The pull policy of the mqtt-kafka-bridge microservice | string | Any | IfNotPresent |
| image.repository | The image of the mqtt-kafka-bridge microservice | string | Any | ghcr.io/united-manufacturing-hub/mqtt-kafka-bridge |
| image.tag | The tag of the mqtt-kafka-bridge microservice. Defaults to Chart version if not set | string | Any | 0.9.14 |
| initContainer.pullPolicy | The pull policy of the init container | string | Any | IfNotPresent |
| initContainer.repository | The image of the init container | string | Any | ghcr.io/united-manufacturing-hub/kafka-init |
| initContainer.tag | The tag of the init container. Defaults to Chart version if not set | string | Any | 0.9.14 |
| kafkaAcceptNoOrigin | Allow access to the Kafka broker without a valid x-trace | bool | true, false | false |
| kafkaSenderThreads | The number of threads for sending messages to the Kafka broker | int | Any | 1 |
| messageLRUSize | The size of the LRU cache for messages | int | Any | 100000 |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| mqttSenderThreads | The number of threads for sending messages to the MQTT broker | int | Any | 1 |
| pdb.enabled | Whether to enable the pod disruption budget | bool | true, false | true |
| pdb.minAvailable | The minimum number of pods that must be available | int | Any | 1 |
| rawMessageLRUSize | The size of the LRU cache for raw messages | int | Any | 100000 |
| resources.limits.cpu | The CPU limit | string | Any | 500m |
| resources.limits.memory | The memory limit | string | Any | 450Mi |
| resources.requests.cpu | The CPU request | string | Any | 400m |
| resources.requests.memory | The memory request | string | Any | 300Mi |
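A throughput-tuning sketch for the bridge, with illustrative (non-default) thread counts; the nesting is an assumption to verify against the chart's values.yaml:

```yaml
# Sketch: raise sender thread counts on the MQTT-Kafka bridge.
mqttkafkabridge:
  enabled: true
  kafkaSenderThreads: 2    # default 1; illustrative tuning
  mqttSenderThreads: 2     # default 1; illustrative tuning
  messageLRUSize: 100000
  rawMessageLRUSize: 100000
```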

nodered

The nodered section contains the configuration of the Node-RED microservice.

nodered section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| env | Environment variables to add to the Pod | object | Any | See env section |
| flows | A JSON string containing the flows to import into Node-RED | string | Any | See the documentation |
| ingress.enabled | Whether to enable the ingress | bool | true, false | false |
| ingress.publicHostSecretName | The secret name of the public host of the Ingress | string | Any | "" |
| ingress.publicHost | The public host of the Ingress | string | Any | "" |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
| port | The port of the Node-RED service | int | Any | 1880 |
| serviceType | The type of the service | string | ClusterIP, LoadBalancer | LoadBalancer |
| settings | A JSON string containing the settings of Node-RED | string | Any | See the documentation |
| storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi |
| tag | The Node-RED version | string | Any | 2.0.6 |
| timezone | The timezone | string | Any | Berlin/Europe |
env

The env section contains the environment variables to add to the Pod.

env section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| NODE_RED_ENABLE_SAVE_MODE | Whether to enable the save mode | bool | true, false | false |
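Putting the two tables together, a Node-RED override could be sketched as below. The Ingress host and secret name are illustrative, and the exact shape of env should be verified against the chart's values.yaml:

```yaml
# Sketch: expose Node-RED through an Ingress and set an env variable.
nodered:
  port: 1880
  serviceType: ClusterIP   # default LoadBalancer; ClusterIP when using Ingress
  env:
    NODE_RED_ENABLE_SAVE_MODE: false
  ingress:
    enabled: true
    publicHost: nodered.example.com   # illustrative host
    publicHostSecretName: nodered-tls # illustrative TLS secret name
```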

opcuasimulator

The opcuasimulator section contains the configuration of the OPC UA Simulator microservice.

opcuasimulator section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| certadds.hosts | Hosts to add to the certificate | string | Any | united-manufacturing-hub-opcuasimulator-service |
| certadds.ips | IPs to add to the certificate | string | Any | "" |
| image | The image of the OPC UA Simulator microservice | string | Any | ghcr.io/united-manufacturing-hub/opcuasimulator |
| resources.limits.cpu | The CPU limit | string | Any | 30m |
| resources.limits.memory | The memory limit | string | Any | 50Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| service.annotations | The annotations of the service | object | Any | {} |
| tag | The tag of the OPC UA Simulator microservice. Defaults to latest if not set | string | Any | 0.1.0 |

packmlmqttsimulator

The packmlmqttsimulator section contains the configuration of the PackML MQTT Simulator microservice.

packmlmqttsimulator section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| image.repository | The image of the PackML MQTT Simulator microservice | string | Any | spruiktec/packml-simulator |
| image.hash | The hash of the image of the PackML MQTT Simulator microservice | string | Any | 01e2f0da3542f1b4e0de830a8d24135de03fd9174dce184ed329bed3ee688e19 |
| image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent |
| replicas | The number of replicas | int | Any | 1 |
| resources.limits.cpu | The CPU limit | string | Any | 30m |
| resources.limits.memory | The memory limit | string | Any | 50Mi |
| resources.requests.cpu | The CPU request | string | Any | 10m |
| resources.requests.memory | The memory request | string | Any | 20Mi |
| env | Environment variables to add to the Pod | object | Any | See env section |
env

The env section contains the environment variables to add to the Pod.

env section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| area | ISA-95 area name of the line | string | Any | DefaultArea |
| productionLine | ISA-95 line name of the line | string | Any | DefaultProductionLine |
| site | ISA-95 site name of the line | string | Any | testLocation |
| mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE |
| mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password |
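For example, giving the simulated line its own ISA-95 location could be sketched as follows (the site, area, and line names are illustrative; verify the env nesting against the chart's values.yaml):

```yaml
# Sketch: place the simulated PackML line at a custom ISA-95 location.
packmlmqttsimulator:
  replicas: 1
  env:
    site: plant-aachen          # illustrative ISA-95 site
    area: assembly              # illustrative ISA-95 area
    productionLine: line-01     # illustrative ISA-95 line
```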

redis

The redis section contains the configuration of the Redis microservice. This is based on the official Redis Helm chart. For more information about the parameters, see the official documentation.

Only the values that differ from the chart's defaults are listed here.

redis section parameters

| Parameter | Description | Type | Allowed values | Default |
|---|---|---|---|---|
| architecture | Redis architecture | string | standalone, replication | standalone |
| auth.existingSecretPasswordKey | Password key to be retrieved from existing secret | string | Any | redispassword |
| auth.existingSecret | The name of the existing secret with Redis credentials | string | Any | redis-secret |
| commonConfiguration | Common configuration to be added into the ConfigMap | string | Any | See commonConfiguration section |
| master.extraFlags | Array with additional command line flags for Redis master | string array | Any | --maxmemory 200mb |
| master.livenessProbe.initialDelaySeconds | The initial delay before the liveness probe starts | int | Any | 5 |
| master.readinessProbe.initialDelaySeconds | The initial delay before the readiness probe starts | int | Any | 120 |
| master.resources.limits.cpu | The CPU limit | string | Any | 100m |
| master.resources.limits.memory | The memory limit | string | Any | 100Mi |
| master.resources.requests.cpu | The CPU request | string | Any | 50m |
| master.resources.requests.memory | The memory request | string | Any | 50Mi |
| metrics.enabled | Start a sidecar prometheus exporter to expose Redis metrics | bool | true, false | true |
| pdb.create | Whether to create a Pod Disruption Budget | bool | true, false | true |
| pdb.minAvailable | Min number of pods that must still be available after the eviction | int | Any | 2 |
| serviceAccount.create | Whether to create a service account | bool | true, false | false |
commonConfiguration

The commonConfiguration section contains the common configuration to be added into the ConfigMap. For more information, see the documentation.

# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
# Disable RDB persistence, AOF persistence already enabled.
save ""
# Backwards compatibility with Redis version 6.*
replica-ignore-disk-write-errors yes

redpanda

The redpanda section contains the configuration of the Kafka broker. This is based on the RedPanda chart. For more information about the parameters, see the official documentation.

Only the values that differ from the chart's defaults are listed here.

redpanda section parameters
Parameter | Description | Type | Allowed values | Default
config.cluster.auto_create_topics_enabled | Whether to enable auto creation of topics | bool | true, false | true
console | The configuration for RedPanda Console | object | Any | See console section
external.type | The type of Service for external access | string | NodePort, LoadBalancer | LoadBalancer
fullnameOverride | The full name override | string | Any | united-manufacturing-hub-kafka
listeners.kafka.port | The port of the Kafka listener | int | Any | 9092
rbac.enable | Whether to enable RBAC | bool | true, false | true
resources.cpu.cores | The number of CPU cores to allocate to the Kafka broker | int | Any | 1
resources.memory.container.max | Maximum memory for each broker | string | Any | 2Gi
resources.memory.enable_memory_locking | Whether to enable memory locking | bool | true, false | true
serviceAccount.create | Whether to create a service account | bool | true, false | false
statefulset.replicas | The number of brokers | int | Any | 1
storage.persistentVolume.size | The size of the persistent volume | string | Any | 10Gi
tls.enabled | Whether to enable TLS | bool | true, false | false
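Any of these values can be overridden from the Helm chart values file. A sketch that increases broker resources and storage, using only parameters from the table above:

```yaml
redpanda:
  resources:
    cpu:
      cores: 2
    memory:
      container:
        max: 4Gi
  storage:
    persistentVolume:
      size: 20Gi
```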
console

The console section contains the configuration of the RedPanda Console.

For more information about the parameters, see the official documentation.

console section parameters
Parameter | Description | Type | Allowed values | Default
console.config.kafka.brokers | The list of Kafka brokers | list | Any | united-manufacturing-hub-kafka:9092
service.port | The port of the Service to expose | int | Any | 8090
service.targetPort | The target port of the Service to expose | int | Any | 8080
service.type | The type of Service to expose | string | ClusterIP, NodePort, LoadBalancer | LoadBalancer
serviceAccount.create | Whether to create a service account | bool | true, false | false

sensorconnect

The sensorconnect section contains the configuration of the Sensorconnect microservice.

sensorconnect section parameters
Parameter | Description | Type | Allowed values | Default
additionalSleepTimePerActivePortMs | Additional sleep time between pollings for each active port in milliseconds | float | Any | 0.0
additionalSlowDownMap | JSON map of values, allows to slow down and speed up the polling time of specific sensors | JSON | Any | {}
allowSubTwentyMs | Whether to allow sub-20ms polling time. Set to 1 to enable. Not recommended | int | 0, 1 | 0
deviceFinderTimeSec | Time interval in seconds between new device discovery | int | Any | 20
deviceFinderTimeoutSec | Timeout in seconds for device discovery. Never set lower than deviceFinderTimeSec | int | Any | 1
image | The image of the sensorconnect microservice | string | Any | ghcr.io/united-manufacturing-hub/sensorconnect
ioddfilepath | The path to the IODD files | string | Any | /ioddfiles
lowerPollingTime | The lower polling time in milliseconds | int | Any | 20
maxSensorErrorCount | The maximum number of sensor errors before the sensor is marked as not responding | int | Any | 50
mqtt.encryptedPassword | The encrypted password of the MQTT broker | string | Any | Base 64 encrypted password
mqtt.password | The password of the MQTT broker | string | Any | INSECURE_INSECURE_INSECURE
pollingSpeedStepDownMs | The time to subtract from the polling time in milliseconds when a sensor is responding | int | Any | 1
pollingSpeedStepUpMs | The time to add to the polling time in milliseconds when a sensor is not responding | int | Any | 20
resources.limits.cpu | The CPU limit | string | Any | 100m
resources.limits.memory | The memory limit | string | Any | 200Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 75Mi
storageRequest | The amount of storage for the PersistentVolumeClaim | string | Any | 1Gi
tag | The tag of the sensorconnect microservice. Defaults to Chart version if not set | string | Any | 0.9.14
upperPollingTime | The upper polling time in milliseconds | int | Any | 1000
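For example, to relax the polling rate you could override a few of these parameters in the sensorconnect section of the values file. A sketch; the additionalSlowDownMap entry is illustrative only, so check its exact format against your chart version before using it:

```yaml
sensorconnect:
  lowerPollingTime: 100   # poll at most every 100 ms
  upperPollingTime: 2000  # back off to 2 s for sensors that stop responding
  # Illustrative only: slows down a single sensor, identified by serial number.
  additionalSlowDownMap: '[{"serialnumber":"000200610104","slowdown_ms":10}]'
```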

serviceAccount

The serviceAccount section contains the configuration of the service account. See the Kubernetes documentation for more information.

serviceAccount section parameters
Parameter | Description | Type | Allowed values | Default
create | Whether to create a service account | bool | true, false | true

timescaledb-single

The timescaledb-single section contains the configuration of the TimescaleDB microservice. This is based on the official TimescaleDB Helm chart. For more information about the parameters, see the official documentation.

Here are only the values different from the default ones.

timescaledb-single section parameters
Parameter | Description | Type | Allowed values | Default
replicaCount | The number of replicas | int | Any | 1
image.repository | The image of the TimescaleDB microservice | string | Any | ghcr.io/united-manufacturing-hub/timescaledb
image.tag | The Timescaledb-ha version | string | Any | pg13.8-ts2.8.0-p1
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
patroni.postgresql.create_replica_methods | The replica creation method | string array | Any | basebackup
postInit | A list of sources that contain post init scripts | object array | Any | See postInit
service.primary.type | The type of the primary service | string | ClusterIP, NodePort, LoadBalancer | LoadBalancer
serviceAccount.create | Whether to create a service account | bool | true, false | false
postInit

The postInit parameter is a list of references to sources that contain post init scripts. The scripts are executed after the database is initialized.

postInit:
  - configMap:
      name: timescale-post-init
      optional: false
  - secret:
      name: timescale-post-init-pw
      optional: false

tulipconnector

The tulipconnector section contains the configuration of the Tulip Connector microservice.

tulipconnector section parameters
Parameter | Description | Type | Allowed values | Default
image.repository | The image of the Tulip Connector microservice | string | Any | ghcr.io/united-manufacturing-hub/tulip-connector
image.tag | The tag of the Tulip Connector microservice. Defaults to latest if not set | string | Any | 0.1.0
image.pullPolicy | The image pull policy | string | Always, IfNotPresent, Never | IfNotPresent
replicas | The number of Pod replicas | int | Any | 1
env | The environment variables | object | Any | See env
resources.limits.cpu | The CPU limit | string | Any | 30m
resources.limits.memory | The memory limit | string | Any | 50Mi
resources.requests.cpu | The CPU request | string | Any | 10m
resources.requests.memory | The memory request | string | Any | 20Mi
env

The env section contains the configuration of the environment variables to add to the Pod.

env section parameters
Parameter | Description | Type | Allowed values | Default
mode | In which mode to run the Tulip Connector | string | dev, prod | prod
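A minimal override in the values file, using only the parameters listed above (a sketch):

```yaml
tulipconnector:
  replicas: 1
  env:
    mode: prod  # or "dev" for development
```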

2 - Microservices

This section gives an overview of the microservices that can be found in the United Manufacturing Hub.

There are several microservices that are part of the United Manufacturing Hub. Some of them compose the core of the platform, and are mainly developed by the UMH team, with the addition of some third-party software. Others are maintained by the community, and are used to extend the functionality of the platform.

2.1 - Core

This section contains the overview of the Core components of the United Manufacturing Hub.

The microservices in this section are part of the Core of the United Manufacturing Hub. They are mainly developed by the UMH team, with the addition of some third-party software. They are used to provide the core functionality of the platform.

2.1.1 - Cache

The technical documentation of the redis microservice, which is used as a cache for the other microservices.

The cache in the United Manufacturing Hub is Redis, a key-value store that is used as a cache for the other microservices.

How it works

Recently used data is stored in the cache to reduce the load on the database. Every microservice that needs database access first checks whether the data is available in the cache. If it is, the cached value is used; otherwise, the microservice queries the database and stores the result in the cache.
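This cache-aside pattern can be sketched as follows. A simplified illustration, not the actual microservice code: an in-memory map stands in for Redis, and a counter stands in for the database.

```go
package main

import (
	"fmt"
	"sync"
)

// cache is an in-memory stand-in for Redis; in the UMH the microservices
// talk to the united-manufacturing-hub-redis-master Service instead.
type cache struct {
	mu   sync.Mutex
	data map[string]string
}

func (c *cache) get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *cache) set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}

// lookup implements cache-aside: check the cache first, and only fall
// back to the (expensive) database query on a miss.
func lookup(c *cache, key string, queryDB func(string) string) string {
	if v, ok := c.get(key); ok {
		return v // cache hit: no database round-trip
	}
	v := queryDB(key) // cache miss: query the database ...
	c.set(key, v)     // ... and store the result for the next caller
	return v
}

func main() {
	c := &cache{data: map[string]string{}}
	dbCalls := 0
	queryDB := func(key string) string {
		dbCalls++
		return "value-for-" + key
	}
	lookup(c, "state", queryDB) // miss: hits the database
	lookup(c, "state", queryDB) // hit: served from the cache
	fmt.Println("database calls:", dbCalls) // prints "database calls: 1"
}
```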

By default, Redis is configured to run in standalone mode, which means that it will only have one master node.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-redis-master
  • Service:
    • Internal ClusterIP:
      • Redis: united-manufacturing-hub-redis-master at port 6379
      • Headless: united-manufacturing-hub-redis-headless at port 6379
      • Metrics: united-manufacturing-hub-redis-metrics at port 6379
  • ConfigMap:
    • Configuration: united-manufacturing-hub-redis-configuration
    • Health: united-manufacturing-hub-redis-health
    • Scripts: united-manufacturing-hub-redis-scripts
  • Secret: redis-secret
  • PersistentVolumeClaim: redis-data-united-manufacturing-hub-redis-master-0

Configuration

You shouldn’t need to configure the cache manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redis section of the Helm chart values file.

You can consult the Bitnami Redis chart for more information about the available configuration options.

Environment variables

Variable name | Description | Type | Allowed values | Default
ALLOW_EMPTY_PASSWORD | Allow empty password | bool | true, false | false
BITNAMI_DEBUG | Specify if debug values should be set | bool | true, false | false
REDIS_PASSWORD | Redis password | string | Any | Random UUID
REDIS_PORT | Redis port number | int | Any | 6379
REDIS_REPLICATION_MODE | Redis replication mode | string | master, slave | master
REDIS_TLS_ENABLED | Enable TLS | bool | true, false | false

2.1.2 - Database

The technical documentation of the database microservice, which stores the data of the application.

The database microservice is the central component of the United Manufacturing Hub and is based on TimescaleDB, an open-source relational database built for handling time-series data. TimescaleDB is designed to provide scalable and efficient storage, processing, and analysis of time-series data.

You can find more information on the datamodel of the database in the Data Model section, and read about the choice to use TimescaleDB in the blog article.

How it works

When deployed, the database microservice will create two databases, with the related usernames and passwords:

  • grafana: This database is used by Grafana to store the dashboards and other data.
  • factoryinsight: This database is the main database of the United Manufacturing Hub. It contains all the data that is collected by the microservices.

Then, it creates the tables based on the database schema.

If you want to learn more about how TimescaleDB works, you can read the TimescaleDB documentation.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-timescaledb
  • Service:
    • Internal ClusterIP for the replicas: united-manufacturing-hub-replica at port 5432
    • Internal ClusterIP for the config: united-manufacturing-hub-config at port 8008
    • External LoadBalancer: united-manufacturing-hub at port 5432
  • ConfigMap:
    • Patroni: united-manufacturing-hub-timescaledb-patroni
    • Post init: timescale-post-init
    • Postgres BackRest: united-manufacturing-hub-timescaledb-pgbackrest
    • Scripts: united-manufacturing-hub-timescaledb-scripts
  • Secret:
    • Certificate: united-manufacturing-hub-certificate
    • Patroni credentials: united-manufacturing-hub-credentials
    • Users passwords: timescale-post-init-pw
  • PersistentVolumeClaim:
    • Data: storage-volume-united-manufacturing-hub-timescaledb-0
    • WAL-E: wal-volume-united-manufacturing-hub-timescaledb-0

Configuration

There is only one parameter that usually needs to be changed: the password used to connect to the database. To do so, set the value of the db_password key in the _000_commonConfig.datastorage section of the Helm chart values file.
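For example, in the values file:

```yaml
_000_commonConfig:
  datastorage:
    db_password: changeme  # replace with a strong password
```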

Environment variables

Variable name | Description | Type | Allowed values | Default
BOOTSTRAP_FROM_BACKUP | Whether to bootstrap the database from a backup or not | int | 0, 1 | 0
PATRONI_KUBERNETES_LABELS | The labels to use to find the pods of the StatefulSet | string | Any | {app: united-manufacturing-hub-timescaledb, cluster-name: united-manufacturing-hub, release: united-manufacturing-hub}
PATRONI_KUBERNETES_NAMESPACE | The namespace in which the StatefulSet is deployed | string | Any | united-manufacturing-hub
PATRONI_KUBERNETES_POD_IP | The IP address of the pod | string | Any | Random IP
PATRONI_KUBERNETES_PORTS | The ports to use to connect to the pods | string | Any | [{"name": "postgresql", "port": 5432}]
PATRONI_NAME | The name of the pod | string | Any | united-manufacturing-hub-timescaledb-0
PATRONI_POSTGRESQL_CONNECT_ADDRESS | The address to use to connect to the database | string | Any | $(PATRONI_KUBERNETES_POD_IP):5432
PATRONI_POSTGRESQL_DATA_DIR | The directory where the database data is stored | string | Any | /var/lib/postgresql/data
PATRONI_REPLICATION_PASSWORD | The password to use to connect to the database as a replica | string | Any | Random 16 characters
PATRONI_REPLICATION_USERNAME | The username to use to connect to the database as a replica | string | Any | standby
PATRONI_RESTAPI_CONNECT_ADDRESS | The address to use to connect to the REST API | string | Any | $(PATRONI_KUBERNETES_POD_IP):8008
PATRONI_SCOPE | The name of the cluster | string | Any | united-manufacturing-hub
PATRONI_SUPERUSER_PASSWORD | The password to use to connect to the database as the superuser | string | Any | Random 16 characters
PATRONI_admin_OPTIONS | The options to use for the admin user | string | Comma separated list of options | createrole,createdb
PATRONI_admin_PASSWORD | The password to use to connect to the database as the admin user | string | Any | Random 16 characters
PGBACKREST_CONFIG | The path to the configuration file for Postgres BackRest | string | Any | /etc/pgbackrest/pgbackrest.conf
PGDATA | The directory where the database data is stored | string | Any | $(PATRONI_POSTGRESQL_DATA_DIR)
PGHOST | The directory of the running database | string | Any | /var/run/postgresql

2.1.3 - Factoryinsight

The technical documentation of the Factoryinsight microservice, which exposes a set of APIs to access the data from the database.

Factoryinsight is a microservice that provides a set of REST APIs to access the data from the database. It is particularly useful to calculate the Key Performance Indicators (KPIs) of the factories.

How it works

Factoryinsight exposes REST APIs to access the data from the database or calculate the KPIs. By default, it’s only accessible from the internal network of the cluster, but it can be configured to be accessible from the external network.

The APIs require authentication, either via Basic Auth or via a Bearer token. Both can be found in the Secret factoryinsight-secret.

API documentation

Kubernetes resources

  • Deployment: united-manufacturing-hub-factoryinsight-deployment
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-factoryinsight-service
  • Secret: factoryinsight-secret

Configuration

You shouldn’t need to configure Factoryinsight manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the factoryinsight section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
CUSTOMER_NAME_{NUMBER} | Specifies a user for the REST API. Multiple users can be set | string | Any | ""
CUSTOMER_PASSWORD_{NUMBER} | Specifies the password of the user for the REST API | string | Any | ""
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library. Not recommended for production | string | true, false | false
DRY_RUN | If enabled, data won't be stored in the database | bool | true, false | false
FACTORYINSIGHT_PASSWORD | Specifies the password for the admin user for the REST API | string | Any | Random UUID
FACTORYINSIGHT_USER | Specifies the admin user for the REST API | string | Any | factoryinsight
INSECURE_NO_AUTH | If enabled, no authentication is required for the REST API. Not recommended for production | bool | true, false | false
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice. Used for tracing | string | Any | united-manufacturing-hub-factoryinsight
POSTGRES_DATABASE | Specifies the database name to use | string | Any | factoryinsight
POSTGRES_HOST | Specifies the database DNS name or IP address | string | Any | united-manufacturing-hub
POSTGRES_PASSWORD | Specifies the database password to use | string | Any | changeme
POSTGRES_PORT | Specifies the database port | int | Valid port number | 5432
POSTGRES_USER | Specifies the database user to use | string | Any | factoryinsight
REDIS_PASSWORD | Password to access the redis sentinel | string | Any | Random UUID
REDIS_URI | The URI of the Redis instance | string | Any | united-manufacturing-hub-redis-headless:6379
SERIAL_NUMBER | Serial number of the cluster. Used for tracing | string | Any | default
VERSION | The version of the API used. Each version also enables all the previous ones | int | Any | 2

2.1.4 - Grafana

The technical documentation of the grafana microservice, which is a web application that provides visualization and analytics capabilities.

The grafana microservice is a web application that provides visualization and analytics capabilities. Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored.

It has a rich ecosystem of plugins that allow you to extend its functionality beyond the core features.

How it works

Grafana is a web application that can be accessed through a web browser. It lets you create dashboards that can be used to visualize data from the database.

Thanks to some custom datasource plugins, Grafana can use the various APIs of the United Manufacturing Hub to query the database and display useful information.

Kubernetes resources

  • Deployment: united-manufacturing-hub-grafana
  • Service:
    • External LoadBalancer: united-manufacturing-hub-grafana at port 8080
  • ConfigMap: united-manufacturing-hub-grafana
  • Secret: grafana-secret
  • PersistentVolumeClaim: united-manufacturing-hub-grafana

Configuration

Grafana is configured through its user interface. The default credentials are found in the grafana-secret Secret.

The Grafana installation that is provided by the United Manufacturing Hub is shipped with a set of preinstalled plugins:

  • ACE.SVG by Andrew Rodgers
  • Button Panel by CloudSpout LLC
  • Button Panel by UMH Systems GmbH
  • Discrete by Natel Energy
  • Dynamic Text by Marcus Olsson
  • FlowCharting by agent
  • Pareto Chart by isaozler
  • Pie Chart (old) by Grafana Labs
  • Timepicker Buttons Panel by williamvenner
  • UMH Datasource by UMH Systems GmbH
  • UMH Datasource v2 by UMH Systems GmbH
  • Untimely by factry
  • Worldmap Panel by Grafana Labs

Environment variables

Variable name | Description | Type | Allowed values | Default
FACTORYINSIGHT_APIKEY | The API key to use to authenticate to the Factoryinsight API | string | Any | Base64 encoded string
FACTORYINSIGHT_BASEURL | The base URL of the Factoryinsight API | string | Any | united-manufacturing-hub-factoryinsight-service
FACTORYINSIGHT_CUSTOMERID | The customer ID to use to authenticate to the Factoryinsight API | string | Any | factoryinsight
FACTORYINSIGHT_PASSWORD | The password to use to authenticate to the Factoryinsight API | string | Any | Random UUID
GF_PATHS_DATA | The path where Grafana will store its data | string | Any | /var/lib/grafana/data
GF_PATHS_LOGS | The path where Grafana will store its logs | string | Any | /var/log/grafana
GF_PATHS_PLUGINS | The path where Grafana will store its plugins | string | Any | /var/lib/grafana/plugins
GF_PATHS_PROVISIONING | The path where Grafana will store its provisioning configuration | string | Any | /etc/grafana/provisioning
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS | List of plugin identifiers to allow loading even if they lack a valid signature | string | Comma separated list | umh-datasource,umh-factoryinput-panel,umh-v2-datasource
GF_SECURITY_ADMIN_PASSWORD | The password of the admin user | string | Any | Random UUID
GF_SECURITY_ADMIN_USER | The username of the admin user | string | Any | admin

2.1.5 - Kafka Bridge

The technical documentation of the kafka-bridge microservice, which acts as a communication bridge between two Kafka brokers.

Kafka-bridge is a microservice that connects two Kafka brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.

How it works

This microservice has two ways of operation:

  • High Integrity: This mode is used for topics that are critical for the user. It is guaranteed that no messages are lost. This is achieved by committing the message only after it has been successfully inserted into the database. Usually all topics are forwarded in this mode, except for processValue, processValueString and raw messages.
  • High Throughput: This mode is used for topics that are not critical for the user. They are forwarded as fast as possible, but messages can be lost, for example if the database struggles to keep up. Usually only the processValue, processValueString and raw messages are forwarded in this mode.

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkabridge
  • Secret:
    • Local broker: united-manufacturing-hub-kafkabridge-secrets-local
    • Remote broker: united-manufacturing-hub-kafkabridge-secrets-remote

Configuration

You can configure the kafka-bridge microservice by setting the following values in the _000_commonConfig.kafkaBridge section of the Helm chart values file.

  kafkaBridge:
    enabled: true
    remotebootstrapServer: ""
    topicmap:
      - bidirectional: false
        name: HighIntegrity
        send_direction: to_remote
        topic: ^ia\..+\..+\..+\.((addMaintenanceActivity)|(addOrder)|(addParentToChild)|(addProduct)|(addShift)|(count)|(deleteShiftByAssetIdAndBeginTimestamp)|(deleteShiftById)|(endOrder)|(modifyProducedPieces)|(modifyState)|(productTag)|(productTagString)|(recommendation)|(scrapCount)|(startOrder)|(state)|(uniqueProduct)|(scrapUniqueProduct))$
      - bidirectional: false
        name: HighThroughput
        send_direction: to_remote
        topic: ^ia\..+\..+\..+\.(processValue).*$

Topic Map schema

The topic map is a list of objects, each object represents a topic (or a set of topics) that should be forwarded. The following JSON schema describes the structure of the topic map:

{
    "$schema": "http://json-schema.org/draft-07/schema",
    "type": "array",
    "title": "Kafka Topic Map",
    "description": "This schema validates valid Kafka topic maps.",
    "default": [],
    "additionalItems": true,
    "items": {
        "$id": "#/items",
        "anyOf": [
            {
                "$id": "#/items/anyOf/0",
                "type": "object",
                "title": "Unidirectional Kafka Topic Map with send direction",
                "description": "This schema validates entries, that are unidirectional and have a send direction.",
                "default": {},
                "examples": [
                    {
                        "name": "HighIntegrity",
                        "topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
                        "bidirectional": false,
                        "send_direction": "to_remote"
                    }
                ],
                "required": [
                    "name",
                    "topic",
                    "bidirectional",
                    "send_direction"
                ],
                "properties": {
                    "name": {
                        "$id": "#/items/anyOf/0/properties/name",
                        "type": "string",
                        "title": "Entry Name",
                        "description": "Name of the map entry, only used for logging & tracing.",
                        "default": "",
                        "examples": [
                            "HighIntegrity"
                        ]
                    },
                    "topic": {
                        "$id": "#/items/anyOf/0/properties/topic",
                        "type": "string",
                        "title": "The topic to listen on",
                        "description": "The topic to listen on, this can be a regular expression.",
                        "default": "",
                        "examples": [
                            "^ia\\..+\\..+\\..+\\.(?!processValue).+$"
                        ]
                    },
                    "bidirectional": {
                        "$id": "#/items/anyOf/0/properties/bidirectional",
                        "type": "boolean",
                        "title": "Is the transfer bidirectional?",
                        "description": "When set to true, the bridge will consume and produce from both brokers",
                        "default": false,
                        "examples": [
                            false
                        ]
                    },
                    "send_direction": {
                        "$id": "#/items/anyOf/0/properties/send_direction",
                        "type": "string",
                        "title": "Send direction",
                        "description": "Can be either 'to_remote' or 'to_local'",
                        "default": "",
                        "examples": [
                            "to_remote",
                            "to_local"
                        ]
                    }
                },
                "additionalProperties": true
            },
            {
                "$id": "#/items/anyOf/1",
                "type": "object",
                "title": "Bi-directional Kafka Topic Map with send direction",
                "description": "This schema validates entries, that are bi-directional.",
                "default": {},
                "examples": [
                    {
                        "name": "HighIntegrity",
                        "topic": "^ia\\..+\\..+\\..+\\.(?!processValue).+$",
                        "bidirectional": true
                    }
                ],
                "required": [
                    "name",
                    "topic",
                    "bidirectional"
                ],
                "properties": {
                    "name": {
                        "$id": "#/items/anyOf/1/properties/name",
                        "type": "string",
                        "title": "Entry Name",
                        "description": "Name of the map entry, only used for logging & tracing.",
                        "default": "",
                        "examples": [
                            "HighIntegrity"
                        ]
                    },
                    "topic": {
                        "$id": "#/items/anyOf/1/properties/topic",
                        "type": "string",
                        "title": "The topic to listen on",
                        "description": "The topic to listen on, this can be a regular expression.",
                        "default": "",
                        "examples": [
                            "^ia\\..+\\..+\\..+\\.(?!processValue).+$"
                        ]
                    },
                    "bidirectional": {
                        "$id": "#/items/anyOf/1/properties/bidirectional",
                        "type": "boolean",
                        "title": "Is the transfer bidirectional?",
                        "description": "When set to true, the bridge will consume and produce from both brokers",
                        "default": false,
                        "examples": [
                            true
                        ]
                    }
                },
                "additionalProperties": true
            }
        ]
    },
    "examples": [
   {
      "name":"HighIntegrity",
      "topic":"^ia\\..+\\..+\\..+\\.(?!processValue).+$",
      "bidirectional":true
   },
   {
      "name":"HighThroughput",
      "topic":"^ia\\..+\\..+\\..+\\.(processValue).*$",
      "bidirectional":false,
      "send_direction":"to_remote"
   }
]
}

Environment variables

Variable name | Description | Type | Allowed values | Default
DEBUG_ENABLE_FGTRACE | Enables the use of the fgtrace library, do not enable in production | string | true, false | false
KAFKA_GROUP_ID_SUFFIX | Identifier appended to the Kafka group ID, usually a serial number | string | Any | default
KAFKA_SSL_KEY_PASSWORD_LOCAL | Password for the SSL key of the local broker | string | Any | ""
KAFKA_SSL_KEY_PASSWORD_REMOTE | Password for the SSL key of the remote broker | string | Any | ""
KAFKA_TOPIC_MAP | A JSON map of the Kafka topics that should be forwarded | JSON | See Topic Map schema | {}
KAKFA_USE_SSL | Enables the use of SSL for the Kafka connection | string | true, false | false
LOCAL_KAFKA_BOOTSTRAP_SERVER | URL of the local Kafka broker, port is required | string | Any valid URL | united-manufacturing-hub-kafka:9092
LOGGING_LEVEL | Defines which logging level is used, mostly relevant for developers | string | PRODUCTION, DEVELOPMENT | PRODUCTION
MICROSERVICE_NAME | Name of the microservice (used for tracing) | string | Any | united-manufacturing-hub-kafka-bridge
REMOTE_KAFKA_BOOTSTRAP_SERVER | URL of the remote Kafka broker | string | Any valid URL | ""
SERIAL_NUMBER | Serial number of the cluster (used for tracing) | string | Any | default

2.1.6 - Kafka Broker

The technical documentation of the kafka-broker microservice, which handles the communication between the microservices.

The Kafka broker in the United Manufacturing Hub is RedPanda, a Kafka-compatible event streaming platform. It’s used to store and process messages, in order to stream real-time data between the microservices.

How it works

RedPanda is a distributed system that is made up of a cluster of brokers, designed for maximum performance and reliability. It does not depend on external systems like ZooKeeper, as it’s shipped as a single binary.

Read more about RedPanda in the official documentation.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-kafka
  • Service:
    • Internal ClusterIP (headless): united-manufacturing-hub-kafka
    • External NodePort: united-manufacturing-hub-kafka-external at port 9094 for the Kafka API listener, port 9644 for the Admin API listener, port 8083 for the HTTP Proxy listener, and port 8081 for the Schema Registry listener.
  • ConfigMap: united-manufacturing-hub-kafka
  • Secret: united-manufacturing-hub-kafka-sts-lifecycle
  • PersistentVolumeClaim: datadir-united-manufacturing-hub-kafka-0

Configuration

You shouldn’t need to configure the Kafka broker manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the redpanda section of the Helm chart values file.

Environment variables

Variable name | Description | Type | Allowed values | Default
HOST_IP | The IP address of the host machine | string | Any | Random IP
POD_IP | The IP address of the pod | string | Any | Random IP
SERVICE_NAME | The name of the service | string | Any | united-manufacturing-hub-kafka

2.1.7 - Kafka Console

The technical documentation of the kafka-console microservice, which provides a GUI to interact with the Kafka broker.

Kafka-console uses Redpanda Console to help you manage and debug your Kafka workloads effortlessly.

With it, you can explore your Kafka topics, view messages, list the active consumers, and more.

How it works

You can access the Kafka console via its Service.

It’s automatically connected to the Kafka broker, so you can start using it right away. You can view the Kafka broker configuration in the Broker tab, and explore the topics in the Topics tab.

Kubernetes resources

  • Deployment: united-manufacturing-hub-console
  • Service:
    • External LoadBalancer: united-manufacturing-hub-console at port 8090
  • ConfigMap: united-manufacturing-hub-console
  • Secret: united-manufacturing-hub-console

Configuration

Environment variables

Variable name | Description | Type | Allowed values | Default
LOGIN_JWTSECRET | The secret used to authenticate the communication to the backend | string | Any | Random string

2.1.8 - Kafka to Postgresql

The technical documentation of the kafka-to-postgresql microservice, which consumes messages from a Kafka broker and writes them in a PostgreSQL database.

Kafka-to-postgresql is a microservice responsible for consuming Kafka messages and inserting their payload into a PostgreSQL database. Take a look at the Datamodel to see how the data is structured.

This microservice requires that the Kafka topic umh.v1.kafka.newTopic exists. From version 0.9.12 onwards, it is created automatically.

How it works

By default, kafka-to-postgresql sets up two Kafka consumers, one for the High Integrity topics and one for the High Throughput topics.

The graphic below shows the program flow of the microservice.

Kafka-to-postgres-flow
Kafka-to-postgres-flow

High integrity

The High Integrity topics are forwarded to the database in a synchronous way: the microservice waits for the database to respond with a non-error message before committing the message to the Kafka broker. This way, the message is guaranteed to be inserted into the database, even though it might take a while.

Most of the topics are forwarded in this mode.

The picture below shows the program flow of the high integrity mode.

high-integrity-data-flow
high-integrity-data-flow

High throughput

The High Throughput topics are forwarded to the database in an asynchronous way: the microservice commits the message to the Kafka broker without waiting for the database to respond. The message is therefore not guaranteed to be inserted into the database, but the microservice will try to insert it as soon as possible. This mode is used for topics that are expected to have a high throughput.

The topics that are forwarded in this mode are processValue, processValueString and all the raw topics.
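The difference between the two modes can be sketched as follows. This is an illustrative Python sketch of the commit logic, not the actual Go implementation of kafka-to-postgresql:

```python
def process_message(msg, insert_into_db, commit_offset, high_integrity):
    """Sketch of the two forwarding modes of kafka-to-postgresql."""
    if high_integrity:
        # High integrity: commit the Kafka offset only after the
        # database confirms the insert, so no message is ever lost.
        if insert_into_db(msg):
            commit_offset(msg)
            return True
        return False  # offset not committed; the message will be redelivered
    # High throughput: commit immediately and insert without waiting,
    # trading the delivery guarantee for speed.
    commit_offset(msg)
    insert_into_db(msg)
    return True
```

A failed insert in high-integrity mode leaves the offset uncommitted, so Kafka redelivers the message; in high-throughput mode the same failure would simply drop the message.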

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkatopostgresql
  • Secret: united-manufacturing-hub-kafkatopostgresql-certificates

Configuration

You shouldn’t need to configure kafka-to-postgresql manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the kafkatopostgresql section of the Helm chart values file.

Environment variables

  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • DRY_RUN: If set to true, the microservice will not write to the database. Type: bool. Allowed values: true, false. Default: false.
  • KAFKA_BOOTSTRAP_SERVER: URL of the Kafka broker used; the port is required. Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafka:9092.
  • KAFKA_SSL_KEY_PASSWORD: Key password to decode the SSL private key. Type: string. Allowed values: Any. Default: "".
  • LOGGING_LEVEL: Defines which logging level is used; mostly relevant for developers. Type: string. Allowed values: PRODUCTION, DEVELOPMENT. Default: PRODUCTION.
  • MEMORY_REQUEST: Memory request for the message cache. Type: string. Allowed values: Any. Default: 50Mi.
  • MICROSERVICE_NAME: Name of the microservice (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafkatopostgresql.
  • POSTGRES_DATABASE: The name of the PostgreSQL database. Type: string. Allowed values: Any. Default: factoryinsight.
  • POSTGRES_HOST: Hostname of the PostgreSQL database. Type: string. Allowed values: Any. Default: united-manufacturing-hub.
  • POSTGRES_PASSWORD: The password to use for PostgreSQL connections. Type: string. Allowed values: Any. Default: changeme.
  • POSTGRES_SSLMODE: If set to true, the PostgreSQL connection will use SSL. Type: string. Allowed values: Any. Default: require.
  • POSTGRES_USER: The username to use for PostgreSQL connections. Type: string. Allowed values: Any. Default: factoryinsight.

2.1.9 - MQTT Bridge

The technical documentation of the mqtt-bridge microservice, which acts as a communication bridge between two MQTT brokers.

MQTT-bridge is a microservice that connects two MQTT brokers and forwards messages between them. It is used to connect the local broker of the edge computer with the remote broker on the server.

How it works

This microservice subscribes to topics on the local broker and publishes the messages to the remote broker, while also subscribing to topics on the remote broker and publishing the messages to the local broker.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-mqttbridge
  • Secret: united-manufacturing-hub-mqttbridge-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-mqttbridge-claim

Configuration

You can configure the URL of the remote MQTT broker that MQTT-bridge should connect to by setting the value of the remoteBrokerUrl parameter in the _000_commonConfig.mqttBridge section of the Helm chart values file.
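For example, the relevant fragment of the Helm chart values file might look like this (the broker URL is a placeholder):

```yaml
_000_commonConfig:
  mqttBridge:
    # URL of the remote MQTT broker to bridge to (placeholder value)
    remoteBrokerUrl: ssl://broker.example.com:8883
```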

Environment variables

  • BRIDGE_ONE_WAY: Whether to enable one-way communication, from local to remote. Type: bool. Allowed values: true, false. Default: true.
  • INSECURE_SKIP_VERIFY_LOCAL: Skip TLS certificate verification for the local broker. Type: bool. Allowed values: true, false. Default: true.
  • INSECURE_SKIP_VERIFY_REMOTE: Skip TLS certificate verification for the remote broker. Type: bool. Allowed values: true, false. Default: true.
  • LOCAL_BROKER_SSL_ENABLED: Whether to enable SSL for the local MQTT broker. Type: bool. Allowed values: true, false. Default: true.
  • LOCAL_BROKER_URL: URL for the local MQTT broker. Type: string. Allowed values: Any. Default: ssl://united-manufacturing-hub-mqtt:8883.
  • LOCAL_CERTIFICATE_NAME: Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption. Type: string. Allowed values: USE_TLS, NO_CERT. Default: USE_TLS.
  • LOCAL_PUB_TOPIC: Local MQTT topic to publish to. Type: string. Allowed values: Any. Default: ia.
  • LOCAL_SUB_TOPIC: Local MQTT topic to subscribe to. Type: string. Allowed values: Any. Default: ia/factoryinsight.
  • MQTT_PASSWORD: Password for the MQTT broker. Type: string. Allowed values: Any. Default: INSECURE_INSECURE_INSECURE.
  • REMOTE_BROKER_SSL_ENABLED: Whether to enable SSL for the remote MQTT broker. Type: bool. Allowed values: true, false. Default: true.
  • REMOTE_BROKER_URL: URL for the remote MQTT broker. Type: string. Allowed values: Any. Default: ssl://united-manufacturing-hub-mqtt.united-manufacturing-hub:8883.
  • REMOTE_CERTIFICATE_NAME: Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption. Type: string. Allowed values: USE_TLS, NO_CERT. Default: USE_TLS.
  • REMOTE_PUB_TOPIC: Remote MQTT topic to publish to. Type: string. Allowed values: Any. Default: ia/factoryinsight.
  • REMOTE_SUB_TOPIC: Remote MQTT topic to subscribe to. Type: string. Allowed values: Any. Default: ia.

2.1.10 - MQTT Broker

The technical documentation of the mqtt-broker microservice, which forwards MQTT messages between the other microservices.

The MQTT broker in the United Manufacturing Hub is HiveMQ and is customized to fit the needs of the stack. It’s a core component of the stack and is used to communicate between the different microservices.

How it works

The MQTT broker is responsible for receiving MQTT messages from the different microservices and forwarding them to the MQTT Kafka bridge.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-hivemqce
  • Service:
    • Internal ClusterIP:
      • HiveMQ local: united-manufacturing-hub-hivemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
      • VerneMQ (for backwards compatibility): united-manufacturing-hub-vernemq at port 1883 (MQTT) and 8883 (MQTT over TLS)
      • VerneMQ local (for backwards compatibility): united-manufacturing-hub-vernemq-local-service at port 1883 (MQTT) and 8883 (MQTT over TLS)
    • External LoadBalancer: united-manufacturing-hub-mqtt at port 1883 (MQTT) and 8883 (MQTT over TLS)
  • ConfigMap:
    • Configuration: united-manufacturing-hub-hivemqce-hive
    • Credentials: united-manufacturing-hub-hivemqce-extension
  • Secret: united-manufacturing-hub-hivemqce-secret-keystore
  • PersistentVolumeClaim:
    • Data: united-manufacturing-hub-hivemqce-claim-data
    • Extensions: united-manufacturing-hub-hivemqce-claim-extensions

Configuration

Most of the configuration is done through the XML files in the ConfigMaps. The default configuration should be sufficient for most use cases.

The HiveMQ installation of the United Manufacturing Hub comes with these extensions:

If you want to add more extensions, or to change the configuration, visit the HiveMQ documentation.

Environment variables

  • HIVEMQ_ALLOW_ALL_CLIENTS: Whether to allow all clients to connect to the broker. Type: bool. Allowed values: true, false. Default: true.

2.1.11 - MQTT Kafka Bridge

The technical documentation of the mqtt-kafka-bridge microservice, which transfers messages from MQTT brokers to Kafka Brokers and vice versa.

Mqtt-kafka-bridge is a microservice that acts as a bridge between MQTT brokers and Kafka brokers, transferring messages from one to the other and vice versa.

This microservice requires the Kafka topic umh.v1.kafka.newTopic to exist. From version 0.9.12, this topic is created automatically.

Since version 0.9.10, it allows all raw messages, even if their content is not in a valid JSON format.

How it works

Mqtt-kafka-bridge consumes topics from a message broker, translates them to the proper format and publishes them to the other message broker.
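The core of the translation is the topic mapping: MQTT topics use / as the level separator, while the corresponding Kafka topics use a dot. This is a sketch of the assumed mapping, not the actual implementation:

```python
def mqtt_to_kafka_topic(mqtt_topic: str) -> str:
    # MQTT levels are separated by '/', Kafka topic segments by '.'
    return mqtt_topic.replace("/", ".")

def kafka_to_mqtt_topic(kafka_topic: str) -> str:
    # The inverse mapping, applied when forwarding Kafka messages to MQTT
    return kafka_topic.replace(".", "/")
```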

Kubernetes resources

  • Deployment: united-manufacturing-hub-mqttkafkabridge
  • Secret:
    • Kafka: united-manufacturing-hub-mqttkafkabridge-kafka-secrets
    • MQTT: united-manufacturing-hub-mqttkafkabridge-mqtt-secrets

Configuration

You shouldn’t need to configure mqtt-kafka-bridge manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the mqttkafkabridge section of the Helm chart values file.

Environment variables

  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • INSECURE_SKIP_VERIFY: Skip TLS certificate verification. Type: bool. Allowed values: true, false. Default: true.
  • KAFKA_BASE_TOPIC: The Kafka base topic. Type: string. Allowed values: Any. Default: ia.
  • KAFKA_BOOTSTRAP_SERVER: URL of the Kafka broker used; the port is required. Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafka:9092.
  • KAFKA_LISTEN_TOPIC: Kafka topic to subscribe to. Accepts regex values. Type: string. Allowed values: Any. Default: ^ia.+
  • KAFKA_SENDER_THREADS: Number of threads used to send messages to Kafka. Type: int. Allowed values: Any. Default: 1.
  • LOGGING_LEVEL: Defines which logging level is used; mostly relevant for developers. Type: string. Allowed values: PRODUCTION, DEVELOPMENT. Default: PRODUCTION.
  • MESSAGE_LRU_SIZE: Size of the LRU cache used to store messages. This is used to prevent duplicate messages from being sent to Kafka. Type: int. Allowed values: Any. Default: 100000.
  • MICROSERVICE_NAME: Name of the microservice (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-mqttkafkabridge.
  • MQTT_BROKER_URL: The MQTT broker URL. Type: string. Allowed values: Any. Default: united-manufacturing-hub-mqtt:1883.
  • MQTT_CERTIFICATE_NAME: Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption. Type: string. Allowed values: USE_TLS, NO_CERT. Default: USE_TLS.
  • MQTT_PASSWORD: Password for the MQTT broker. Type: string. Allowed values: Any. Default: INSECURE_INSECURE_INSECURE.
  • MQTT_SENDER_THREADS: Number of threads used to send messages to MQTT. Type: int. Allowed values: Any. Default: 1.
  • MQTT_TOPIC: MQTT topic to subscribe to. Accepts regex values. Type: string. Allowed values: Any. Default: ia/#
  • POD_NAME: Name of the pod (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-mqttkafkabridge-Random-ID.
  • RAW_MESSSAGE_LRU_SIZE: Size of the LRU cache used to store raw messages. This is used to prevent duplicate messages from being sent to Kafka. Type: int. Allowed values: Any. Default: 100000.
  • SERIAL_NUMBER: Serial number of the cluster (used for tracing). Type: string. Allowed values: Any. Default: default.

2.1.12 - Node-RED

The technical documentation of the nodered microservice, which wires together hardware devices, APIs and online services.

Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the Node-RED library.

How it works

Node-RED is a JavaScript-based tool that can be used to create flows that interact with the other microservices in the United Manufacturing Hub or external services.

See our guides for Node-RED to learn more about how to use it.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-nodered
  • Service:
    • External LoadBalancer: united-manufacturing-hub-nodered-service at port 1880
  • ConfigMap:
    • Configuration: united-manufacturing-hub-nodered-config
    • Flows: united-manufacturing-hub-nodered-flows
  • Secret: united-manufacturing-hub-nodered-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-nodered-claim

Configuration

You can enable the nodered microservice and decide if you want to use the default flows in the _000_commonConfig.dataprocessing.nodered section of the Helm chart values.

All the other values are set by default and you can find them in the Danger Zone section of the Helm chart values.

Environment variables

  • NODE_RED_ENABLE_SAFE_MODE: Enable safe mode, useful in case of broken flows. Type: boolean. Allowed values: true, false. Default: false.
  • TZ: The timezone used by Node-RED. Type: string. Allowed values: Any. Default: Europe/Berlin.

2.1.13 - Sensorconnect

The technical documentation of the sensorconnect microservice, which reads data from sensors and sends them to the MQTT or Kafka broker.

Sensorconnect automatically detects ifm gateways connected to the network and reads data from the connected IO-Link sensors.

How it works

Sensorconnect continuously scans the given IP range for gateways, making it effectively a plug-and-play solution. Once a gateway is found, it automatically downloads the IODD files for the connected sensors and starts reading the data at the configured interval. It then processes the data and sends it to the MQTT or Kafka broker, to be consumed by other microservices.

If you want to learn more about how to use sensors in your assets, check out the retrofitting section of the UMH Learn website.
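The polling interval adapts at runtime: it is controlled by the POLLING_SPEED_STEP_DOWN_MS, POLLING_SPEED_STEP_UP_MS, LOWER_POLLING_TIME_MS and UPPER_POLLING_TIME_MS variables described below. A sketch of the assumed behavior (not the actual Go code):

```python
def next_polling_interval(current_ms, success, lower_ms=20, upper_ms=1000,
                          step_down_ms=1, step_up_ms=20):
    """Poll faster after a successful read, back off after a failure,
    always staying within the configured bounds."""
    if success:
        current_ms -= step_down_ms  # POLLING_SPEED_STEP_DOWN_MS
    else:
        current_ms += step_up_ms    # POLLING_SPEED_STEP_UP_MS
    # Clamp to [LOWER_POLLING_TIME_MS, UPPER_POLLING_TIME_MS]
    return max(lower_ms, min(upper_ms, current_ms))
```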

IODD files

The IODD files are used to describe the sensors connected to the gateway. They contain information about the data type, the unit of measurement, the minimum and maximum values, etc. The IODD files are downloaded automatically from IODDFinder once a sensor is found, and are stored in a Persistent Volume. If downloading from the internet is not possible, for example in a closed network, you can download the IODD files manually and store them in the folder specified by the IODD_FILE_PATH environment variable.

If no IODD file is found for a sensor, the data will not be processed, but sent to the broker as-is.

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-sensorconnect
  • Secret:
    • Kafka: united-manufacturing-hub-sensorconnect-kafka-secrets
    • MQTT: united-manufacturing-hub-sensorconnect-mqtt-secrets
  • PersistentVolumeClaim: united-manufacturing-hub-sensorconnect-claim

Configuration

You can configure the IP range to scan for gateways, and which message broker to use, by setting the values of the parameters in the _000_commonConfig.datasources.sensorconnect section of the Helm chart values file.
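For example, the values fragment might look like this (the key names below the section path are illustrative; check the values file for the exact keys):

```yaml
_000_commonConfig:
  datasources:
    sensorconnect:
      enabled: true             # illustrative key name
      iprange: 192.168.10.1/24  # IP range to scan for gateways
```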

The default values of the other parameters are usually good for most use cases, but you can change them in the Danger Zone section of the Helm chart values file.

Environment variables

  • ADDITIONAL_SLEEP_TIME_PER_ACTIVE_PORT_MS: Additional sleep time between pollings for each active port. Type: float. Allowed values: Any. Default: 0.0.
  • ADDITIONAL_SLOWDOWN_MAP: JSON map of values that allows slowing down and speeding up the polling time of specific sensors. Type: JSON. Allowed values: See below. Default: [].
  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • DEVICE_FINDER_TIMEOUT_SEC: HTTP timeout in seconds for finding new devices. Type: int. Allowed values: Any. Default: 1.
  • DEVICE_FINDER_TIME_SEC: Time interval in seconds for finding new devices. Type: int. Allowed values: Any. Default: 20.
  • IODD_FILE_PATH: Filesystem path where to store IODD files. Type: string. Allowed values: Any valid Unix path. Default: /ioddfiles.
  • IP_RANGE: The IP range to scan for new sensors. Type: string. Allowed values: Any valid IP in CIDR notation. Default: 192.168.10.1/24.
  • KAFKA_BOOTSTRAP_SERVER: URL of the Kafka broker; the port is required. Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafka:9092.
  • KAFKA_SSL_KEY_PASSWORD: The encrypted password of the SSL key. If empty, no password is used. Type: string. Allowed values: Any. Default: "".
  • KAFKA_USE_SSL: Set to true to use SSL encryption for the connection to the Kafka broker. Type: string. Allowed values: true, false. Default: false.
  • LOGGING_LEVEL: Defines which logging level is used; mostly relevant for developers. Type: string. Allowed values: PRODUCTION, DEVELOPMENT. Default: PRODUCTION.
  • LOWER_POLLING_TIME_MS: Time in milliseconds defining the lower bound of time between sensor pollings. Type: int. Allowed values: Any. Default: 20.
  • MAX_SENSOR_ERROR_COUNT: Number of errors before a sensor is temporarily disabled. Type: int. Allowed values: Any. Default: 50.
  • MICROSERVICE_NAME: Name of the microservice (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-sensorconnect.
  • MQTT_BROKER_URL: URL of the MQTT broker; the port is required. Type: string. Allowed values: Any. Default: united-manufacturing-hub-mqtt:1883.
  • MQTT_CERTIFICATE_NAME: Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption. Type: string. Allowed values: USE_TLS, NO_CERT. Default: USE_TLS.
  • MQTT_PASSWORD: Password for the MQTT broker. Type: string. Allowed values: Any. Default: INSECURE_INSECURE_INSECURE.
  • POD_NAME: Name of the pod (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-sensorconnect-0.
  • POLLING_SPEED_STEP_DOWN_MS: Time in milliseconds subtracted from the polling interval after a successful polling. Type: int. Allowed values: Any. Default: 1.
  • POLLING_SPEED_STEP_UP_MS: Time in milliseconds added to the polling interval after a failed polling. Type: int. Allowed values: Any. Default: 20.
  • SENSOR_INITIAL_POLLING_TIME_MS: Amount of time in milliseconds before starting to request sensor data. Must be higher than LOWER_POLLING_TIME_MS. Type: int. Allowed values: Any. Default: 100.
  • SUB_TWENTY_MS: Set to 1 to allow a LOWER_POLLING_TIME_MS of under 20 ms. This is not recommended, as it might lead to the gateway becoming unresponsive until a manual reboot. Type: int. Allowed values: 0, 1. Default: 0.
  • TEST: If enabled, the microservice will use a test IODD file from the filesystem with a mocked sensor. Only useful for development. Type: string. Allowed values: true, false. Default: false.
  • TRANSMITTERID: Serial number of the cluster (used for tracing). Type: string. Allowed values: Any. Default: default.
  • UPPER_POLLING_TIME_MS: Time in milliseconds defining the upper bound of time between sensor pollings. Type: int. Allowed values: Any. Default: 1000.
  • USE_KAFKA: If enabled, uses Kafka as a message broker. Type: string. Allowed values: true, false. Default: true.
  • USE_MQTT: If enabled, uses MQTT as a message broker. Type: string. Allowed values: true, false. Default: false.

Slowdown map

The ADDITIONAL_SLOWDOWN_MAP environment variable allows you to slow down and speed up the polling time of specific sensors. It is a JSON array of values, with the following structure:

[
  {
    "serialnumber": "000200610104",
    "slowdown_ms": -10
  },
  {
    "url": "http://192.168.0.13",
    "slowdown_ms": 20
  },
  {
    "productcode": "AL13500",
    "slowdown_ms": 20.01
  }
]

2.2 - Community

This section contains the overview of the community-supported components of the United Manufacturing Hub used to extend the functionality of the platform.

The microservices in this section are not part of the Core of the United Manufacturing Hub, either because they are still in development, deprecated, or only supported by the community. They can be used to extend the functionality of the platform.

It is not recommended to use these microservices in production as they might be unstable or not supported anymore.

2.2.1 - Barcodereader

The technical documentation of the barcodereader microservice, which reads barcodes and sends the data to the Kafka broker.

This microservice is still in development and is not considered stable for production use.

Barcodereader is a microservice that reads barcodes and sends the data to the Kafka broker.

How it works

Connect a barcode scanner to the system and the microservice will read the barcodes and send the data to the Kafka broker.

Kubernetes resources

  • Deployment: united-manufacturing-hub-barcodereader
  • Secret: united-manufacturing-hub-barcodereader-secrets

Configuration

Environment variables

  • ASSET_ID: The asset ID, which is used for the topic structure. Type: string. Allowed values: Any. Default: barcodereader.
  • CUSTOMER_ID: The customer ID, which is used for the topic structure. Type: string. Allowed values: Any. Default: raw.
  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • INPUT_DEVICE_NAME: The name of the USB device to use. Type: string. Allowed values: Any. Default: Datalogic ADC, Inc. Handheld Barcode Scanner.
  • INPUT_DEVICE_PATH: The path of the USB device to use. It is recommended to use a wildcard (for example, /dev/input/event*) or leave empty. Type: string. Allowed values: Valid Unix device path. Default: "".
  • KAFKA_BOOTSTRAP_SERVER: URL of the Kafka broker used; the port is required. Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafka:9092.
  • LOCATION: The location, which is used for the topic structure. Type: string. Allowed values: Any. Default: barcodereader.
  • LOGGING_LEVEL: Defines which logging level is used; mostly relevant for developers. Type: string. Allowed values: PRODUCTION, DEVELOPMENT. Default: PRODUCTION.
  • MICROSERVICE_NAME: Name of the microservice (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-barcodereader.
  • SCAN_ONLY: Prevent message broadcasting if enabled. Type: bool. Allowed values: true, false. Default: false.
  • SERIAL_NUMBER: Serial number of the cluster (used for tracing). Type: string. Allowed values: Any. Default: default.

2.2.2 - Factoryinput

The technical documentation of the factoryinput microservice, which provides REST endpoints for MQTT messages via HTTP requests.

This microservice is still in development and is not considered stable for production use

Factoryinput provides REST endpoints for MQTT messages via HTTP requests.

This microservice is typically accessed via grafana-proxy

How it works

The factoryinput microservice provides REST endpoints for MQTT messages via HTTP requests.

The main endpoint is /api/v1/{customer}/{location}/{asset}/{value}, with a POST request method. The customer, location, asset and value parameters are strings and are used to build the MQTT topic. The body of the HTTP request is used as the MQTT payload.
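For example, a request path maps to an MQTT topic roughly as follows. This is a hypothetical sketch; the ia prefix is an assumption based on the stack's default topic structure:

```python
def factoryinput_mapping(customer: str, location: str, asset: str, value: str):
    """Return the REST path of the POST request and the MQTT topic
    built from its parameters (assumed 'ia' prefix)."""
    path = f"/api/v1/{customer}/{location}/{asset}/{value}"
    topic = f"ia/{customer}/{location}/{asset}/{value}"
    return path, topic
```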

Kubernetes resources

  • StatefulSet: united-manufacturing-hub-factoryinput
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-factoryinput-service at port 80
  • Secret: factoryinput-secret

Configuration

Environment variables

  • BROKER_URL: URL to the broker. Type: string. Allowed values: Any. Default: ssl://united-manufacturing-hub-mqtt:8883.
  • CERTIFICATE_NAME: Set to NO_CERT to allow non-encrypted MQTT access, or to USE_TLS to use TLS encryption. Type: string. Allowed values: USE_TLS, NO_CERT. Default: USE_TLS.
  • CUSTOMER_NAME_{NUMBER}: Specifies a user for the REST API. Multiple users can be set. Type: string. Allowed values: Any. Default: "".
  • CUSTOMER_PASSWORD_{NUMBER}: Specifies the password of the user for the REST API. Type: string. Allowed values: Any. Default: "".
  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • FACTORYINPUT_PASSWORD: Specifies the password for the admin user for the REST API. Type: string. Allowed values: Any. Default: Random UUID.
  • FACTORYINPUT_USER: Specifies the admin user for the REST API. Type: string. Allowed values: Any. Default: factoryinsight.
  • LOGGING_LEVEL: Defines which logging level is used; mostly relevant for developers. Type: string. Allowed values: PRODUCTION, DEVELOPMENT. Default: PRODUCTION.
  • MQTT_QUEUE_HANDLER: Number of queue workers to spawn. Type: int. Allowed values: 0-65535. Default: 10.
  • MQTT_PASSWORD: Password for the MQTT broker. Type: string. Allowed values: Any. Default: INSECURE_INSECURE_INSECURE.
  • POD_NAME: Name of the pod (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-factoryinput-0.
  • SERIAL_NUMBER: Serial number of the cluster (used for tracing). Type: string. Allowed values: Any. Default: default.
  • VERSION: The version of the API used. Each version also enables all the previous ones. Type: int. Allowed values: Any. Default: 1.

2.2.3 - Grafana Proxy

The technical documentation of the grafana-proxy microservice, which proxies request from Grafana to the backend services.

This microservice is still in development and is not considered stable for production use

How it works

The grafana-proxy microservice serves an HTTP REST endpoint located at /api/v1/{service}/{data}. The service parameter specifies the backend service to which the request should be proxied, like factoryinput or factoryinsight. The data parameter specifies the API endpoint to forward to the backend service. The body of the HTTP request is used as the payload for the proxied request.
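A minimal sketch of the routing logic, using the default base URLs from the table below; the actual implementation also handles authentication and error forwarding:

```python
# Default backend base URLs (FACTORYINPUT_BASE_URL / FACTORYINSIGHT_BASE_URL)
SERVICE_BASE_URLS = {
    "factoryinput": "http://united-manufacturing-hub-factoryinput-service",
    "factoryinsight": "http://united-manufacturing-hub-factoryinsight-service",
}

def resolve_proxy_target(service: str, data: str):
    """Return the backend URL for /api/v1/{service}/{data},
    or None if the service is unknown."""
    base = SERVICE_BASE_URLS.get(service)
    if base is None:
        return None  # unknown services are rejected
    return f"{base}/{data}"
```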

Kubernetes resources

  • Deployment: united-manufacturing-hub-grafanaproxy
  • Service:
    • External LoadBalancer: united-manufacturing-hub-grafanaproxy-service at port 2096

Configuration

Environment variables

  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • FACTORYINPUT_BASE_URL: URL of factoryinput. Type: string. Allowed values: Any. Default: http://united-manufacturing-hub-factoryinput-service.
  • FACTORYINPUT_KEY: Specifies the password for the admin user for factoryinput. Type: string. Allowed values: Any. Default: Random UUID.
  • FACTORYINPUT_USER: Specifies the admin user for factoryinput. Type: string. Allowed values: Any. Default: factoryinput.
  • FACTORYINSIGHT_BASE_URL: URL of factoryinsight. Type: string. Allowed values: Any. Default: http://united-manufacturing-hub-factoryinsight-service.
  • MICROSERVICE_NAME: Name of the microservice (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-factoryinput.
  • SERIAL_NUMBER: Serial number of the cluster (used for tracing). Type: string. Allowed values: Any. Default: default.
  • VERSION: The version of the API used. Each version also enables all the previous ones. Type: int. Allowed values: Any. Default: 1.

2.2.4 - Kafka State Detector

The technical documentation of the kafka-state-detector microservice, which detects the state of the Kafka broker.

This microservice is still in development and is not considered stable for production use

How it works

Kubernetes resources

  • Deployment: united-manufacturing-hub-kafkastatedetector
  • Secret: united-manufacturing-hub-kafkastatedetector-secrets

Configuration

Environment variables

  • ACTIVITY_ENABLED: Controls whether to check the activity of the Kafka broker. Type: string. Allowed values: true, false. Default: true.
  • ANOMALY_ENABLED: Controls whether to check for anomalies in the Kafka broker. Type: string. Allowed values: true, false. Default: true.
  • DEBUG_ENABLE_FGTRACE: Enables the use of the fgtrace library; not recommended for production. Type: string. Allowed values: true, false. Default: false.
  • KAFKA_BOOTSTRAP_SERVER: URL of the Kafka broker used; the port is required. Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafka:9092.
  • KAFKA_SSL_KEY_PASSWORD: Key password to decode the SSL private key. Type: string. Allowed values: Any. Default: "".
  • KAKFA_USE_SSL: Enables the use of SSL for the Kafka connection. Type: string. Allowed values: true, false. Default: false.
  • MICROSERVICE_NAME: Name of the microservice (used for tracing). Type: string. Allowed values: Any. Default: united-manufacturing-hub-kafkastatedetector.
  • SERIAL_NUMBER: Serial number of the cluster (used for tracing). Type: string. Allowed values: Any. Default: default.

2.2.5 - MQTT Simulator

The technical documentation of the iotsensorsmqtt microservice, which simulates sensors sending data to the MQTT broker.

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.

The IoTSensors MQTT Simulator is a microservice that simulates sensors sending data to the MQTT broker. You can read the full documentation on the GitHub repository.

How it works

The microservice publishes messages on the topic ia/raw/development/ioTSensors/, creating a subtopic for each simulation. The subtopics are the names of the simulations, which are Temperature, Humidity, and Pressure. The values are calculated using a normal distribution with a mean and standard deviation that can be configured.
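Generating such values can be sketched as follows (illustrative; the function and parameter names are assumptions, not the simulator's API):

```python
import random

def simulate_sensor(mean: float, std_dev: float, n: int):
    # Draw n readings from a normal distribution with the configured
    # mean and standard deviation, as the simulator does
    return [random.gauss(mean, std_dev) for _ in range(n)]
```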

Kubernetes resources

  • Deployment: united-manufacturing-hub-iotsensorsmqtt
  • ConfigMap: united-manufacturing-hub-iotsensors-mqtt

Configuration

You can change the configuration of the microservice by updating the config.json file in the ConfigMap.

2.2.6 - MQTT to Postgresql

The technical documentation of the mqtt-to-postgresql microservice, which consumes messages from an MQTT broker and writes them in a PostgreSQL database.

If you landed here from Google, you might want to check out either the architecture of the United Manufacturing Hub or our knowledge website for more information on the general topics of IT, OT and IIoT.

This microservice is deprecated and should not be used anymore in production. Please use kafka-to-postgresql instead.

How it works

The mqtt-to-postgresql microservice subscribes to the MQTT broker and saves the values of the messages on the topic ia/# in the database.

2.2.7 - OPCUA Simulator

The technical documentation of the opcua-simulator microservice, which simulates OPCUA devices.

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but is enabled by default.

How it works

The OPCUA Simulator is a microservice that simulates OPCUA devices. You can read the full documentation on the GitHub repository.

You can then connect to the simulated OPCUA server via Node-RED and read the values of the simulated devices. Learn more about how to connect the OPCUA simulator to Node-RED in our guide.

Kubernetes resources

  • Deployment: united-manufacturing-hub-opcuasimulator-deployment
  • Service:
    • External LoadBalancer: united-manufacturing-hub-opcuasimulator-service at port 46010
  • ConfigMap: united-manufacturing-hub-opcuasimulator-config

Configuration

You can change the configuration of the microservice by updating the config.json file in the ConfigMap.

2.2.8 - PackML Simulator

The technical documentation of the packml-simulator microservice, which simulates a manufacturing line using PackML over MQTT.

This microservice is a community contribution and is not part of the main stack of the United Manufacturing Hub, but it is enabled by default.

PackML MQTT Simulator is a virtual line that interfaces using PackML implemented over MQTT. It implements the PackML State Model below and communicates over MQTT topics as defined by environment variables. The simulator can run with either a basic MQTT topic structure or SparkPlugB.

PackML StateModel

How it works

You can read the full documentation on the GitHub repository.

Kubernetes resources

  • Deployment: united-manufacturing-hub-packmlmqttsimulator

Configuration

You shouldn’t need to configure PackML Simulator manually, as it’s configured automatically when the cluster is deployed. However, if you need to change the configuration, you can do it by editing the packmlmqttsimulator section of the Helm chart values file.

Environment variables

  • AREA: ISA-95 area name of the line. Type: string. Allowed values: Any. Default: DefaultArea.
  • LINE: ISA-95 line name of the line. Type: string. Allowed values: Any. Default: DefaultProductionLine.
  • MQTT_PASSWORD: Password for the MQTT broker. Leave empty if the server does not manage permissions. Type: string. Allowed values: Any. Default: INSECURE_INSECURE_INSECURE.
  • MQTT_URL: Server URL of the MQTT server. Type: string. Allowed values: Any. Default: mqtt://united-manufacturing-hub-mqtt:1883.
  • MQTT_USERNAME: Username for the MQTT broker. Leave empty if the server does not manage permissions. Type: string. Allowed values: Any. Default: PACKMLSIMULATOR.
  • SITE: ISA-95 site name of the line. Type: string. Allowed values: Any. Default: testLocation.

2.2.9 - Tulip Connector

The technical documentation of the tulip-connector microservice, which exposes internal APIs, such as factoryinsight, to the internet. Specifically designed to communicate with Tulip.

This microservice is still in development and is not considered stable for production use.

The tulip-connector microservice enables communication with the United Manufacturing Hub by exposing internal APIs, like factoryinsight, to the internet. With this REST endpoint, users can access data stored in the UMH and seamlessly integrate Tulip with a Unified Namespace and on-premise Historian. Furthermore, the tulip-connector can be customized to meet specific customer requirements, including integration with an on-premise MES system.

How it works

The tulip-connector acts as a proxy between the internet and the UMH. It exposes an endpoint to forward requests to the UMH and returns the response.

API documentation

Kubernetes resources

  • Deployment: united-manufacturing-hub-tulip-connector-deployment
  • Service:
    • Internal ClusterIP: united-manufacturing-hub-tulip-connector-service at port 80
  • Ingress: united-manufacturing-hub-tulip-connector-ingress

Configuration

You can enable the tulip-connector and set the domain for the ingress by editing the values in the _000_commonConfig.tulipconnector section of the Helm chart values file.

Environment variables

  • FACTORYINSIGHT_PASSWORD: Specifies the password for the admin user for the REST API. Type: string. Allowed values: Any. Default: Random UUID.
  • FACTORYINSIGHT_URL: Specifies the URL of the factoryinsight microservice. Type: string. Allowed values: Any. Default: http://united-manufacturing-hub-factoryinsight-service.
  • FACTORYINSIGHT_USER: Specifies the admin user for the REST API. Type: string. Allowed values: Any. Default: factoryinsight.
  • MODE: Specifies the mode that the service will run in. Change only during development. Type: string. Allowed values: dev, prod. Default: prod.

2.3 - Grafana Plugins

This section contains the overview of the custom Grafana plugins that can be used to access the United Manufacturing Hub.

2.3.1 - Umh Datasource V2

This page contains the technical documentation of the umh-datasource-v2 plugin, which allows for easy data extraction from factoryinsight.

The plugin, umh-datasource-v2, is a Grafana data source plugin that allows you to fetch resources from a database and build queries for your dashboard.

How it works

  1. When creating a new panel, select umh-datasource-v2 from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.

    selectingDatasource

  2. Select the resources in the cascade menu to build your query. DefaultArea and DefaultProductionLine are placeholders for the future implementation of the new data model.

    selectingDatasource

  3. Only the available values for the specified work cell will be fetched from the database. You can then select which data value you want to query.

    selectingDatasource

  4. Next, you can specify how to transform the data, depending on the selected value. All the custom tags have aggregation options available. For example, if you query a processValue:

    • Time bucket: lets you group data in a time bucket
    • Aggregates: common statistical aggregations (maximum, minimum, sum or count)
    • Handling missing values: lets you choose how missing data should be handled

    selectingDatasource

Configuration

  1. In Grafana, navigate to the Data sources configuration panel.

    selectingConfiguration

  2. Select umh-v2-datasource to configure it.

    selectingConfiguration

  3. Configurations:

    • Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
    • Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
    • API Key: authenticates the API calls to factoryinsight. Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.

    selectingConfiguration


2.3.2 - Umh Datasource

This page contains the technical documentation of the plugin umh-datasource, which allows for easy data extraction from factoryinsight.

We are no longer maintaining this plugin. Use our new plugin umh-datasource-v2 instead for data extraction from factoryinsight.

The umh datasource is a Grafana 8.x compatible plugin that allows you to fetch resources from a database and build queries for your dashboard.

How it works

  1. When creating a new panel, select umh-datasource from the Data source drop-down menu. It will then fetch the resources from the database. The loading time may depend on your internet speed.

    selectingDatasource

  2. Select your query parameters Location, Asset and Value to build your query.

    selectingDatasource

Configuration

  1. In Grafana, navigate to the Data sources configuration panel.

    selectingConfiguration

  2. Select umh-datasource to configure it.

    selectingConfiguration

  3. Configurations:

    • Base URL: the URL for the factoryinsight backend. Defaults to http://united-manufacturing-hub-factoryinsight-service/.
    • Enterprise name: previously customerID for the old datasource plugin. Defaults to factoryinsight.
    • API Key: authenticates the API calls to factoryinsight. Can be found with UMHLens by going to Secrets → factoryinsight-secret → apiKey. It should follow the format Basic xxxxxxxx.

    selectingConfiguration

2.3.3 - Factoryinput Panel

This page contains the technical documentation of the plugin factoryinput-panel, which allows for easy execution of MQTT messages inside the UMH stack from a Grafana panel.

This plugin is still in development and is not considered stable for production use.

Requirements

  • A United Manufacturing Hub stack
  • External IP or URL to the grafana-proxy
    • In most cases it is the same IP address as your Grafana dashboard.

Getting started

For development, the steps to build the plugin from source are described here.

  1. Go to united-manufacturing-hub/grafana-plugins/umh-factoryinput-panel.
  2. Install dependencies.
yarn install
  3. Build the plugin in development mode, or run it in watch mode.
yarn dev
  4. Build the plugin in production mode (not recommended due to Issue 32336).
yarn build
  5. Move the resulting dist folder into your Grafana plugins directory.
  • Windows: C:\Program Files\GrafanaLabs\grafana\data\plugins
  • Linux: /var/lib/grafana/plugins
  6. Rename the folder to umh-factoryinput-panel.

  7. Enable development mode to load unsigned plugins.

  8. Restart your Grafana service.

Technical Information

Below you will find a schematic of this flow through our stack.

3 - Datamodel

This page describes the data model of the UMH stack - from the message payloads up to database tables.

Raw Data

If you have events that you just want to send to the message broker / Unified Namespace without the need for it to be stored, simply send it to the raw topic. This data will not be processed by the UMH stack, but you can use it to build your own data processing pipeline.

ProcessValue Data

If you have data that does not fit in the other topics (such as your PLC tags or sensor data), you can use the processValue topic. It will be saved in the database in the processValue or processValueString table and can be queried using factoryinsight or the umh-datasource Grafana plugin.

Production Data

In a production environment, you should first declare products using addProduct. This allows you to create an order using addOrder. Once you have created an order, send a state message to tell the database that the machine is working (or not working) on the order.

When the machine is ordered to produce a product, send a startOrder message. When the machine has finished producing the product, send an endOrder message.

Send count messages if the machine has produced a product, but it does not make sense to give each product its own ID. This is especially useful for bottling or any other use case with a large number of products, where not every product is traced.

You can also add shifts using addShift.

All messages end up in different tables in the database and are accessible from factoryinsight or the umh-datasource Grafana plugin.

Recommendation: Start with addShift and state and continue from there.
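
The recommended sequence above can be sketched as data. The helper below is hypothetical (not part of the UMH stack); the topic prefix, cycle time, and timestamps are purely illustrative:

```python
import json

def production_run(product_id, order_id, target_units, t0_ms):
    """Return (topic suffix, payload) pairs for one illustrative order."""
    return [
        ("addProduct", {"product_id": product_id, "time_per_unit_in_seconds": 0.2}),
        ("addOrder", {"product_id": product_id, "order_id": order_id,
                      "target_units": target_units}),
        ("startOrder", {"order_id": order_id, "timestamp_ms": t0_ms}),
        ("state", {"timestamp_ms": t0_ms, "state": 10000}),  # 10000 = actively producing
        ("count", {"timestamp_ms": t0_ms + 60_000, "count": 1}),
        ("endOrder", {"order_id": order_id, "timestamp_ms": t0_ms + 120_000}),
    ]

for suffix, payload in production_run("test", "test_order", 100, 1589788888888):
    print(f"ia/factoryinsight/plant1/line1/{suffix}", json.dumps(payload))
```

Each pair maps to one message on the corresponding topic; the individual payload formats are documented in the message sections below.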

Modifying Data

If you have accidentally sent the wrong state or if you want to modify a value, you can use the modifyState message.

Unique Product Tracking

You can use uniqueProduct to tell the database that a new instance of a product has been created. If the produced product is scrapped, you can use scrapUniqueProduct to change its state to scrapped.

3.1 - Messages

For each message topic, you will find a short description of what the message is used for and which structure it has, as well as what structure the payload is expected to have.

Introduction

The United Manufacturing Hub provides a specific structure for messages/topics, each with its own unique purpose. By adhering to this structure, the UMH will automatically calculate KPIs for you, while also making it easier to maintain consistency in your topic structure.

3.1.1 - activity

activity messages are sent each time an asset starts or stops running.

This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it first needs to be converted into a state. In the future, there will be a microservice that converts these automatically.

Topic


ia/<customerID>/<location>/<AssetID>/activity


ia.<customerID>.<location>.<AssetID>.activity
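
The two topic forms above carry the same information: slash-separated for MQTT and dot-separated for Kafka. A small helper (hypothetical, not part of the UMH stack) can build and convert them:

```python
def build_topic(customer_id, location, asset_id, message_type):
    """Build the MQTT form of a UMH topic, e.g. ia/<customerID>/<location>/<AssetID>/activity."""
    return f"ia/{customer_id}/{location}/{asset_id}/{message_type}"

def mqtt_to_kafka_topic(mqtt_topic):
    """Kafka uses dots where MQTT uses slashes."""
    return mqtt_topic.replace("/", ".")

topic = build_topic("factoryinsight", "aachen", "warping", "activity")
print(topic)                       # ia/factoryinsight/aachen/warping/activity
print(mqtt_to_kafka_topic(topic))  # ia.factoryinsight.aachen.warping.activity
```

The same pattern applies to every message topic in this chapter.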

Usage

A message is sent here each time the machine runs or stops.

Content

key           data type  description
timestamp_ms  int        unix timestamp of message creation
activity      bool       true if asset is currently active, false if asset is currently inactive

JSON

Examples

The asset was active during the timestamp of the message:

{
  "timestamp_ms":1588879689394,
  "activity": true
}
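
A payload like the one above can be built in a few lines. This is a sketch (assuming Python; the producer is up to you), defaulting timestamp_ms to the current unix time in milliseconds:

```python
import json
import time

def build_activity_message(active, timestamp_ms=None):
    """Serialize an activity payload; timestamp_ms defaults to 'now' in unix ms."""
    if timestamp_ms is None:
        timestamp_ms = int(time.time() * 1000)
    return json.dumps({"timestamp_ms": timestamp_ms, "activity": active})

print(build_activity_message(True, 1588879689394))
# {"timestamp_ms": 1588879689394, "activity": true}
```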

Schema

Producers

  • Typically Node-RED

Consumers

  • Typically Node-RED

3.1.2 - addOrder

AddOrder messages are sent when a new order is added.

Topic


ia/<customerID>/<location>/<AssetID>/addOrder


ia.<customerID>.<location>.<AssetID>.addOrder

Usage

A message is sent here each time a new order is added.

Content

key           data type  description
product_id    string     current product name
order_id      string     current order name
target_units  int64      amount of units to be produced
  1. The product needs to be added before adding the order. Otherwise, this message will be discarded
  2. One order is always specific to that asset and can, by definition, not be used across machines. For this case one would need to create one order and product for each asset (reason: one product might go through multiple machines, but might have different target durations or even target units, e.g. one big 100m batch get split up into multiple pieces)

JSON

Examples

One order was started for 100 units of product “test”:

{
  "product_id":"test",
  "order_id":"test_order",
  "target_units":100
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/addOrder.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "product_id",
        "order_id",
        "target_units"
    ],
    "properties": {
        "product_id": {
            "type": "string",
            "default": "",
            "title": "The product id to be produced",
            "examples": [
                "test",
                "Beierlinger 30x15"
            ]
        },
        "order_id": {
            "type": "string",
            "default": "",
            "title": "The order id of the order",
            "examples": [
                "test_order",
                "HA16/4889"
            ]
        },
        "target_units": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The amount of units to be produced",
            "examples": [
                1,
                100
            ]
        }
    },
    "examples": [{
      "product_id": "Beierlinger 30x15",
      "order_id": "HA16/4889",
      "target_units": 1
    },{
      "product_id":"test",
      "order_id":"test_order",
      "target_units":100
    }]
}
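
A minimal stdlib check mirroring the schema above can catch malformed payloads before publishing. This is a sketch, not a full JSON Schema validator:

```python
def validate_add_order(msg):
    """Check required keys, types, and the minimum from the addOrder schema."""
    errors = []
    for key, expected in (("product_id", str), ("order_id", str), ("target_units", int)):
        if key not in msg:
            errors.append(f"missing required key: {key}")
        elif not isinstance(msg[key], expected):
            errors.append(f"{key} must be {expected.__name__}")
    if isinstance(msg.get("target_units"), int) and msg["target_units"] < 0:
        errors.append("target_units must be >= 0")
    return errors

print(validate_add_order({"product_id": "test", "order_id": "test_order", "target_units": 100}))
# []
```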

Producers

  • Typically Node-RED

Consumers

3.1.3 - addParentToChild

AddParentToChild messages are sent when child products are added to a parent product.

Topic


ia/<customerID>/<location>/<AssetID>/addParentToChild


ia.<customerID>.<location>.<AssetID>.addParentToChild

Usage

This message can be emitted to add a child product to a parent product. It can be sent multiple times if a parent product is split up into multiple children, or if multiple parents are combined into one child. One example of this is when multiple parts are assembled into a single product.

Content

key           data type  description
timestamp_ms  int64      unix timestamp you want to go back from
childAID      string     the AID of the child product
parentAID     string     the AID of the parent product

JSON

Examples

A parent is added to a child:

{
  "timestamp_ms":1589788888888,
  "childAID":"23948723489",
  "parentAID":"4329875"
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "childAID",
        "parentAID"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
              1589788888888
            ]
        },
        "childAID": {
            "type": "string",
            "default": "",
            "title": "The AID of the child product",
            "examples": [
              "23948723489"
            ]
        },
        "parentAID": {
            "type": "string",
            "default": "",
            "title": "The AID of the parent product",
            "examples": [
              "4329875"
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms":1589788888888,
            "childAID":"23948723489",
            "parentAID":"4329875"
        },
        {
            "timestamp_ms":1589788888888,
            "childAID":"TestChild",
            "parentAID":"TestParent"
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

3.1.4 - addProduct

AddProduct messages are sent when a new product is produced.

Topic


ia/<customerID>/<location>/<AssetID>/addProduct


ia.<customerID>.<location>.<AssetID>.addProduct

Usage

A message is sent each time a new product is produced.

Content

key                       data type  description
product_id                string     current product name
time_per_unit_in_seconds  float64    the time it takes to produce one unit of the product

See also notes regarding adding products and orders in /addOrder

JSON

Examples

A new product “Beilinger 30x15” with a cycle time of 200ms is added to the asset.

{
  "product_id": "Beilinger 30x15",
  "time_per_unit_in_seconds": 0.2
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "product_id",
        "time_per_unit_in_seconds"
    ],
    "properties": {
        "product_id": {
          "type": "string",
          "default": "",
          "title": "The product id to be produced"
        },
        "time_per_unit_in_seconds": {
          "type": "number",
          "default": 0.0,
          "minimum": 0,
          "title": "The time it takes to produce one unit of the product"
        }
    },
    "examples": [
        {
            "product_id": "Beierlinger 30x15",
            "time_per_unit_in_seconds": "0.2"
        },
        {
            "product_id": "Test product",
            "time_per_unit_in_seconds": "10"
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

3.1.5 - addShift

AddShift messages are sent to add a shift with start and end timestamp.

Topic


ia/<customerID>/<location>/<AssetID>/addShift


ia.<customerID>.<location>.<AssetID>.addShift

Usage

This message is sent to indicate the start and end of a shift.

Content

key               data type  description
timestamp_ms      int64      unix timestamp of the shift start
timestamp_ms_end  int64      optional unix timestamp of the shift end

JSON

Examples

A shift with start and end:

{
  "timestamp_ms":1589788888888,
  "timestamp_ms_end":1589788888888
}

And a shift without an end:

{
  "timestamp_ms":1589788888888
}
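
Since timestamp_ms_end is optional, an open-ended shift simply omits the key. A sketch of a builder (hypothetical helper, not part of the stack):

```python
import json

def build_add_shift(start_ms, end_ms=None):
    """Serialize an addShift payload; open-ended shifts omit timestamp_ms_end."""
    payload = {"timestamp_ms": start_ms}
    if end_ms is not None:
        payload["timestamp_ms_end"] = end_ms
    return json.dumps(payload)

print(build_add_shift(1589788888888, 1589788888888))
print(build_add_shift(1589788888888))
```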

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "description": "The unix timestamp, of shift start"
        },
        "timestamp_ms_end": {
            "type": "integer",
            "description": "The *optional* unix timestamp, of shift end"
        }
    },
    "examples": [
        {
            "timestamp_ms":1589788888888,
            "timestamp_ms_end":1589788888888
        },
        {
            "timestamp_ms":1589788888888
        }
    ]
}

Producers

Consumers

3.1.6 - count

Count messages are sent every time an asset has counted a new item.

Topic


ia/<customerID>/<location>/<AssetID>/count


ia.<customerID>.<location>.<AssetID>.count

Usage

A count message is sent every time an asset has counted a new item.

Content

key           data type  description
timestamp_ms  int64      unix timestamp of message creation
count         int64      amount of items counted
scrap         int64      optional amount of defective items. If unset, 0 is assumed

JSON

Examples

One item was counted and there was no scrap:

{
  "timestamp_ms":1589788888888,
  "count":1,
  "scrap":0
}

Ten items were counted and five of them were scrap:

{
  "timestamp_ms":1589788888888,
  "count":10,
  "scrap":5
}
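
Because scrap defaults to 0 when unset, consumers should read it with a fallback. For example, totalling the good items from a list of count payloads (a sketch, assuming Python):

```python
def good_count(messages):
    """Total good (non-scrap) items; a missing scrap key counts as 0."""
    return sum(m["count"] - m.get("scrap", 0) for m in messages)

msgs = [
    {"timestamp_ms": 1589788888888, "count": 1, "scrap": 0},
    {"timestamp_ms": 1589788888888, "count": 10, "scrap": 5},
]
print(good_count(msgs))  # 6
```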

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/count.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "count"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp of message creation",
            "examples": [
                1589788888888
            ]
        },
        "count": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The amount of items counted",
            "examples": [
                1
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The optional amount of defective items",
            "examples": [
                0
            ]
        }
    },
    "examples": [{
      "timestamp_ms": 1589788888888,
      "count": 1,
      "scrap": 0
    },{
      "timestamp_ms": 1589788888888,
      "count": 1
    }]
}

Producers

  • Typically Node-RED

Consumers

3.1.7 - deleteShift

DeleteShift messages are sent to delete a shift that starts at the designated timestamp.

Topic


ia/<customerID>/<location>/<AssetID>/deleteShift


ia.<customerID>.<location>.<AssetID>.deleteShift

Usage

deleteShift is generated to delete a shift that started at the designated timestamp.

Content

key           data type  description
timestamp_ms  int32      unix timestamp of the shift start

JSON

Example

The shift that started at the designated timestamp is deleted from the database.

{
    "begin_time_stamp": 1588879689394
}

Producers

  • Typically Node-RED

Consumers

3.1.8 - detectedAnomaly

detectedAnomaly messages are sent when an asset has stopped and the reason is identified.

This is part of our recommended workflow to create machine states. The data sent here will not be stored in the database automatically, as it first needs to be converted into a state. In the future, there will be a microservice that converts these automatically.

Topic


ia/<customerID>/<location>/<AssetID>/detectedAnomaly


ia.<customerID>.<location>.<AssetID>.detectedAnomaly

Usage

A message is sent here each time a stop reason has been identified automatically or by input from the machine operator.

Content

key              data type  description
timestamp_ms     int        unix timestamp of message creation
detectedAnomaly  string     reason for the production stop of the asset

JSON

Examples

The anomaly of the asset has been identified as maintenance:

{
  "timestamp_ms":1588879689394,
  "detectedAnomaly":"maintenance"
}

Producers

  • Typically Node-RED

Consumers

  • Typically Node-RED

3.1.9 - endOrder

EndOrder messages are sent whenever an order is finished.

Topic


ia/<customerID>/<location>/<AssetID>/endOrder


ia.<customerID>.<location>.<AssetID>.endOrder

Usage

A message is sent each time an order is finished.

Content

key           data type  description
timestamp_ms  int64      unix timestamp of message creation
order_id      string     current order name

See also notes regarding adding products and orders in /addOrder

JSON

Examples

The order “test_order” was finished at the shown timestamp.

{
  "order_id":"test_order",
  "timestamp_ms":1589788888888
}

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/endOrder.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "order_id",
        "timestamp_ms"
    ],
    "properties": {
        "timestamp_ms": {
          "type": "integer",
          "description": "The unix timestamp, of shift start"
        },
        "order_id": {
            "type": "string",
            "default": "",
            "title": "The order id of the order",
            "examples": [
                "test_order",
                "HA16/4889"
            ]
        }
    },
    "examples": [{
      "order_id": "HA16/4889",
      "timestamp_ms":1589788888888
    },{
      "product_id":"test",
      "timestamp_ms":1589788888888
    }]
}

Producers

  • Typically Node-RED

Consumers

3.1.10 - modifyProducedPieces

ModifyProducedPieces messages are sent whenever the count of produced and scrapped items needs to be modified.

Topic


ia/<customerID>/<location>/<AssetID>/modifyProducedPieces


ia.<customerID>.<location>.<AssetID>.modifyProducedPieces

Usage

modifyProducedPieces is generated to change the count of produced items and scrapped items at the named timestamp.

Content

key           data type  description
timestamp_ms  int64      unix timestamp of the time point whose count is to be modified
count         int32      number of produced items
scrap         int32      number of scrapped items

JSON

Example

The count and scrap are overwritten to be 10 each at the timestamp.

{
    "timestamp_ms": 1588879689394,
    "count": 10,
    "scrap": 10
}

Producers

  • Typically Node-RED

Consumers

3.1.11 - modifyState

ModifyState messages are generated when a state of an asset during a certain timeframe needs to be modified.

Topic


ia/<customerID>/<location>/<AssetID>/modifyState


ia.<customerID>.<location>.<AssetID>.modifyState

Usage

modifyState is generated to modify the state from the starting timestamp to the end timestamp. You can find a list of all supported states here.

Content

key               data type  description
timestamp_ms      int32      unix timestamp of the starting point of the timeframe to be modified
timestamp_ms_end  int32      unix timestamp of the end point of the timeframe to be modified
new_state         int32      new state code

JSON

Example

The state of the timeframe between the two timestamps is modified to be 150000: OperatorBreakState.

{
    "timestamp_ms": 1588879689394,
    "timestamp_ms_end": 1588891381023,
    "new_state": 150000
}

Producers

  • Typically Node-RED

Consumers

3.1.12 - processValue

ProcessValue messages are sent whenever a custom process value with unique name has been prepared. The value is numerical.

Topic


ia/<customerID>/<location>/<AssetID>/processValue 
or: ia/<customerID>/<location>/<AssetID>/processValue/<tagName>


ia.<customerID>.<location>.<AssetID>.processValue
or: ia.<customerID>.<location>.<AssetID>.processValue.<tagName>

If you have a lot of processValues, we recommend not using /processValue as the topic, but appending the tag name as well, e.g., /processValue/energyConsumption. This will structure it better for usage in MQTT Explorer or for processing only certain processValues.

For automatic data storage in kafka-to-postgresql, both will work fine as long as the payload is correct.

Please be aware that the values may only be int or float; other characters are not valid, so make sure no quotation marks or anything else sneaks in. Also be cautious with the JavaScript toFixed() function, as it converts a float into a string.

Usage

A message is sent each time a process value has been prepared. The key has a unique name.

Content

key           data type         description
timestamp_ms  int64             unix timestamp of message creation
<valuename>   int64 or float64  represents a process value, e.g. temperature

Pre 0.10.0: As <valuename> is either of type int64 or float64, you cannot use booleans. Convert them to integers as needed, e.g., true = 1, false = 0.

Post 0.10.0: <valuename> will be converted, even if it is a boolean value. Check integer literals and floating-point literals for other valid values.

JSON

Example

At the shown timestamp the custom process value “energyConsumption” had a readout of 123456.

{
    "timestamp_ms": 1588879689394, 
    "energyConsumption": 123456
}
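
The numeric-only rule above is easy to violate accidentally (e.g. via toFixed()). A guard like this sketch (hypothetical helper, pre-0.10.0 semantics) rejects strings and booleans before serialization:

```python
def build_process_value(name, value, timestamp_ms):
    """Build a processValue payload, rejecting non-numeric values."""
    # bool is a subclass of int in Python, so exclude it explicitly
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"{name} must be int or float, got {type(value).__name__}")
    return {"timestamp_ms": timestamp_ms, name: value}

print(build_process_value("energyConsumption", 123456, 1588879689394))
```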

Producers

  • Typically Node-RED

Consumers

3.1.13 - processValueString

ProcessValueString messages are sent whenever a custom process value is prepared. The value is a string.

This message type is not functional as of 0.9.5!

Topic


ia/<customerID>/<location>/<AssetID>/processValueString


ia.<customerID>.<location>.<AssetID>.processValueString

Usage

A message is sent each time a process value has been prepared. The key has a unique name. This message is used when the datatype of the process value is a string instead of a number.

Content

key           data type  description
timestamp_ms  int64      unix timestamp of message creation
<valuename>   string     represents a process value, e.g. temperature

JSON

Example

At the shown timestamp the custom process value “customer” had a readout of “miller”.

{
    "timestamp_ms": 1588879689394, 
    "customer": "miller"
}

Producers

  • Typically Node-RED

Consumers

3.1.14 - productTag

ProductTag messages are sent to contextualize processValue messages.

Topic


ia/<customerID>/<location>/<AssetID>/productTag


ia.<customerID>.<location>.<AssetID>.productTag

Usage

productTag is usually generated by contextualizing a processValue.

Content

key           data type  description
AID           string     AID of the product
name          string     name of the tag, i.e. the key of the processValue
value         float64    value of the processValue
timestamp_ms  int64      unix timestamp of message creation

JSON

Example

At the shown timestamp the product with the shown AID had 5 blemishes recorded.

{
    "AID": "43298756", 
    "name": "blemishes",
    "value": 5, 
    "timestamp_ms": 1588879689394
}
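
Contextualizing a processValue into a productTag can be sketched as: take the single tag key from the processValue payload and attach the product's AID (hypothetical helper, not part of the stack):

```python
def contextualize(process_value_msg, aid):
    """Turn a single-tag processValue payload into a productTag payload."""
    # the one non-timestamp key becomes the tag name
    (name, value), = [(k, v) for k, v in process_value_msg.items()
                      if k != "timestamp_ms"]
    return {"AID": aid, "name": name, "value": value,
            "timestamp_ms": process_value_msg["timestamp_ms"]}

print(contextualize({"timestamp_ms": 1588879689394, "blemishes": 5}, "43298756"))
```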

Producers

  • Typically Node-RED

Consumers

3.1.15 - productTagString

ProductTagString messages are sent to contextualize processValueString messages.

Topic


ia/<customerID>/<location>/<AssetID>/productTagString


ia.<customerID>.<location>.<AssetID>.productTagString

Usage

ProductTagString is usually generated by contextualizing a processValueString.

Content

key           data type  description
AID           string     AID of the product
name          string     key of the processValueString
value         string     value of the processValueString
timestamp_ms  int64      unix timestamp of message creation

JSON

Example

At the shown timestamp the product with the shown AID had a shirt_size of "XL".

{
    "AID": "43298756", 
    "name": "shirt_size",
    "value": "XL", 
    "timestamp_ms": 1588879689394
}

Producers

Consumers

3.1.16 - recommendation

Recommendation messages are sent whenever rapid actions would quickly improve efficiency on the shop floor.

Topic


ia/<customerID>/<location>/<AssetID>/recommendation


ia.<customerID>.<location>.<AssetID>.recommendation

Usage

Recommendations are action recommendations that require concrete and rapid action in order to quickly eliminate efficiency losses on the shop floor.

Content

key                   data type  description
uid                   string     UniqueID of the product
timestamp_ms          int64      unix timestamp of message creation
customer              string     the customer ID in the data structure
location              string     the location in the data structure
asset                 string     the asset ID in the data structure
recommendationType    int32      type of the recommendation
enabled               bool       whether the recommendation is enabled
recommendationValues  map        map of values based on which this recommendation is created
diagnoseTextDE        string     diagnosis of the recommendation in German
diagnoseTextEN        string     diagnosis of the recommendation in English
recommendationTextDE  string     recommendation in German
recommendationTextEN  string     recommendation in English

JSON

Example

The demonstrator at the shown location has not been running for a while, so a recommendation is sent to either start the machine or specify a reason why it is not running.

{
    "UID": "43298756", 
    "timestamp_ms": 15888796894,
    "customer": "united-manufacturing-hub",
    "location": "dccaachen", 
    "asset": "DCCAachen-Demonstrator",
    "recommendationType": "1", 
    "enabled": true,
    "recommendationValues": { "Treshold": 30, "StoppedForTime": 612685 }, 
    "diagnoseTextDE": "Maschine DCCAachen-Demonstrator steht seit 612685 Sekunden still (Status: 8, Schwellwert: 30)" ,
    "diagnoseTextEN": "Machine DCCAachen-Demonstrator is not running since 612685 seconds (status: 8, threshold: 30)", 
    "recommendationTextDE":"Maschine DCCAachen-Demonstrator einschalten oder Stoppgrund auswählen.",
    "recommendationTextEN": "Start machine DCCAachen-Demonstrator or specify stop reason.", 
}

Producers

  • Typically Node-RED

Consumers

3.1.17 - scrapCount

ScrapCount messages are sent whenever a product is to be marked as scrap.

Topic


ia/<customerID>/<location>/<AssetID>/scrapCount


ia.<customerID>.<location>.<AssetID>.scrapCount

Usage

Here a message is sent every time products should be marked as scrap. It works as follows: a message with scrap and timestamp_ms is sent. Starting from the count that lies directly before timestamp_ms, the existing counts are iterated over backwards in time and set to scrap, step by step, until a total of scrap products have been scrapped.

Content

  • timestamp_ms is the unix timestamp you want to go back from
  • scrap is the number of items to be considered as scrap.
  1. You can specify a maximum of 24h to be scrapped, to avoid accidents
  2. (NOT IMPLEMENTED YET) If counts does not equal scrap, e.g. the count is 5 but only 2 more need to be scrapped, it will scrap exactly 2. Currently, it would ignore these 2. See also #125
  3. (NOT IMPLEMENTED YET) If no counts are available for this asset, but uniqueProducts are available, they can also be marked as scrap.

JSON

Examples

Ten items were scrapped:

{
  "timestamp_ms":1589788888888,
  "scrap":10
}
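
The back-in-time allocation described above can be sketched as follows. This is an assumed Python helper operating on count rows as dicts; counts larger than the remainder are skipped, mirroring the current behavior described in note 2:

```python
def apply_scrap(counts, timestamp_ms, scrap):
    """Walk count rows backwards from timestamp_ms, marking them as scrap
    until the requested total is reached."""
    remaining = scrap
    for row in sorted(counts, key=lambda r: r["timestamp_ms"], reverse=True):
        if remaining <= 0:
            break
        if row["timestamp_ms"] >= timestamp_ms:
            continue  # only counts before the given timestamp are affected
        if row["count"] <= remaining:
            row["scrap"] = row["count"]
            remaining -= row["count"]
        # rows larger than the remainder are currently ignored (see note 2)
    return counts

rows = [{"timestamp_ms": t, "count": 4} for t in (1, 2, 3)]
print(apply_scrap(rows, timestamp_ms=10, scrap=10))
```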

Schema

{
    "$schema": "http://json-schema.org/draft/2019-09/schema",
    "$id": "https://learn.umh.app/content/docs/architecture/datamodel/messages/scrapCount.json",
    "type": "object",
    "default": {},
    "title": "Root Schema",
    "required": [
        "timestamp_ms",
        "scrap"
    ],
    "properties": {
        "timestamp_ms": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "The unix timestamp you want to go back from",
            "examples": [
              1589788888888
            ]
        },
        "scrap": {
            "type": "integer",
            "default": 0,
            "minimum": 0,
            "title": "Number of items to be considered as scrap",
            "examples": [
                10
            ]
        }
    },
    "examples": [
        {
            "timestamp_ms": 1589788888888,
            "scrap": 10
        },
        {
            "timestamp_ms": 1589788888888,
            "scrap": 5
        }
    ]
}

Producers

  • Typically Node-RED

Consumers

3.1.18 - scrapUniqueProduct

ScrapUniqueProduct messages are sent whenever a unique product should be scrapped.

Topic


ia/<customerID>/<location>/<AssetID>/scrapUniqueProduct


ia.<customerID>.<location>.<AssetID>.scrapUniqueProduct

Usage

A message is sent here every time a unique product is scrapped.

Content

key  data type  description
UID  string     unique ID of the current product

JSON

Example

The product with the unique ID 22 is scrapped.

{
    "UID": "22"
}

Producers

  • Typically Node-RED

Consumers

3.1.19 - startOrder

StartOrder messages are sent whenever a new order is started.

Topic


ia/<customerID>/<location>/<AssetID>/startOrder


ia.<customerID>.<location>.<AssetID>.startOrder

Usage

A message is sent here every time a new order is started.

Content

key           data type  description
order_id      string     name of the order
timestamp_ms  int64      unix timestamp of message creation
  1. See also notes regarding adding products and orders in /addOrder
  2. When startOrder is executed multiple times for an order, the last used timestamp is used.

JSON

Example

The order “test_order” is started at the shown timestamp.

{
  "order_id":"test_order",
  "timestamp_ms":1589788888888
}

Producers

  • Typically Node-RED

Consumers

3.1.20 - state

State messages are sent every time an asset changes status.

Topic


ia/<customerID>/<location>/<AssetID>/state


ia.<customerID>.<location>.<AssetID>.state

Usage

A message is sent here each time the asset changes status. Subsequent changes are not possible. Different statuses can also be process steps, such as “setup”, “post-processing”, etc. You can find a list of all supported states here.

Content

key           data type  description
state         uint32     value of the state according to the link above
timestamp_ms  uint64     unix timestamp of message creation

JSON

Example

The asset has a state of 10000, which means it is actively producing.

{
  "timestamp_ms":1589788888888,
  "state":10000
}

Producers

  • Typically Node-RED

Consumers

3.1.21 - uniqueProduct

UniqueProduct messages are sent whenever a unique product was produced or modified.

Topic


ia/<customerID>/<location>/<AssetID>/uniqueProduct


ia.<customerID>.<location>.<AssetID>.uniqueProduct

Usage

A message is sent here each time a product has been produced or modified. A modification can take place, for example, due to a downstream quality control.

There are two cases of when to send a message under the uniqueProduct topic:

  • The exact product doesn’t already have a UID (-> This is the case, if it has not been produced at an asset incorporated in the digital shadow). Specify a space holder asset = “storage” in the MQTT message for the uniqueProduct topic.
  • The product was produced at the current asset (it is now different from before, e.g. after machining or after something was screwed in). The newly produced product is always the “child” of the process. Products it was made out of are called the “parents”.

Content

key | data type | description
begin_timestamp_ms | int64 | unix timestamp of the start time
end_timestamp_ms | int64 | unix timestamp of the completion time
product_id | string | product ID of the currently produced product
isScrap | bool | optional; whether the current product is of poor quality and will be sorted out. Considered false if not specified.
uniqueProductAlternativeID | string | alternative ID of the product

JSON

Example

The processing of product “Beilinger 30x15” with the AID 216381 started and ended at the designated timestamps. It is of low quality and due to be scrapped.

{
  "begin_timestamp_ms":1589788888888,
  "end_timestamp_ms":1589788893729,
  "product_id":"Beilinger 30x15",
  "isScrap":true,
  "uniqueProductAlternativeID":"216381"
}
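Since isScrap is optional and defaults to false, a producer only needs to include it when it is explicitly known. A sketch (the helper name is illustrative) of assembling the payload accordingly; for the first case above, the placeholder asset “storage” would go into the topic, not the payload:

```python
import json

def build_unique_product_payload(begin_ms, end_ms, product_id,
                                 alternative_id, is_scrap=None):
    """Assemble a uniqueProduct payload. isScrap is optional per the spec
    above and treated as false when omitted, so it is only included
    when explicitly set."""
    payload = {
        "begin_timestamp_ms": begin_ms,
        "end_timestamp_ms": end_ms,
        "product_id": product_id,
        "uniqueProductAlternativeID": alternative_id,
    }
    if is_scrap is not None:
        payload["isScrap"] = is_scrap
    return json.dumps(payload)

print(build_unique_product_payload(1589788888888, 1589788893729,
                                   "Beilinger 30x15", "216381", is_scrap=True))
```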

Producers

  • Typically Node-RED

Consumers

3.2 - Database

The database stores the messages in different tables.

Introduction

We use TimescaleDB, which is based on PostgreSQL and therefore supports standard relational SQL while also supporting time-series workloads. This allows regular SQL queries alongside efficient processing and storage of time-series data. PostgreSQL has proven itself reliable over the last 25 years, so we are happy to build on it.

If you want to learn more about database paradigms, please refer to the knowledge article about that topic. It also includes a concise video summarizing what you need to know about different paradigms.

Our database model is designed to represent a physical manufacturing process. It keeps track of the following data:

  • The state of the machine
  • The products that are produced
  • The orders for the products
  • The workers’ shifts
  • Arbitrary process values (sensor data)
  • The producible products
  • Recommendations for the production

Please note that our database does not use a retention policy. This means that your database can grow quite fast if you save a lot of process values. Take a look at our guide on enabling data compression and retention in TimescaleDB to customize the database to your needs.

A good way to check your database size is to run the following command inside a psql shell:

SELECT pg_size_pretty(pg_database_size('factoryinsight'));

3.2.1 - assetTable

assetTable contains all assets and their locations.

Usage

Primary table for our data structure, it contains all the assets and their location.

Structure

key | data type | description | example
id | int | Auto incrementing id of the asset | 0
assetID | text | Asset name | Printer-03
location | text | Physical location of the asset | DCCAachen
customer | text | Customer name, in most cases “factoryinsight” | factoryinsight

Relations

assetTable

DDL

 CREATE TABLE IF NOT EXISTS assetTable
 (
     id         SERIAL  PRIMARY KEY,
     assetID    TEXT    NOT NULL,
     location   TEXT    NOT NULL,
     customer   TEXT    NOT NULL,
     unique (assetID, location, customer)
 );

3.2.2 - configurationTable

configurationTable stores the configuration of the UMH system.

Usage

This table stores the configuration of the system.

Structure

key | data type | description | example
customer | text | Customer name | factoryinsight
MicrostopDurationInSeconds | integer | A stop counts as a microstop if shorter than this value | 120
IgnoreMicrostopUnderThisDurationInSeconds | integer | Ignore stops shorter than this value | -1
MinimumRunningTimeInSeconds | integer | Minimum runtime of the asset before tracking micro-stops | 0
ThresholdForNoShiftsConsideredBreakInSeconds | integer | A no-shift period shorter than this value is considered a break | 2100
LowSpeedThresholdInPcsPerHour | integer | Below this output rate, the machine is considered to be in the low speed state | -1
AutomaticallyIdentifyChangeovers | boolean | Automatically identify changeovers in production | true
LanguageCode | integer | 0 is German, 1 is English | 1
AvailabilityLossStates | integer[] | States to count as availability loss | {40000, 180000, 190000, 200000, 210000, 220000}
PerformanceLossStates | integer[] | States to count as performance loss | {20000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000}

Relations

configurationTable

DDL

CREATE TABLE IF NOT EXISTS configurationTable
(
    customer TEXT PRIMARY KEY,
    MicrostopDurationInSeconds INTEGER DEFAULT 60*2,
    IgnoreMicrostopUnderThisDurationInSeconds INTEGER DEFAULT -1, --do not apply
    MinimumRunningTimeInSeconds INTEGER DEFAULT 0, --do not apply
    ThresholdForNoShiftsConsideredBreakInSeconds INTEGER DEFAULT 60*35,
    LowSpeedThresholdInPcsPerHour INTEGER DEFAULT -1, --do not apply
    AutomaticallyIdentifyChangeovers BOOLEAN DEFAULT true,
    LanguageCode INTEGER DEFAULT 1, -- english
    AvailabilityLossStates INTEGER[] DEFAULT '{40000, 180000, 190000, 200000, 210000, 220000}',
    PerformanceLossStates INTEGER[] DEFAULT '{20000, 50000, 60000, 70000, 80000, 90000, 100000, 110000, 120000, 130000, 140000, 150000}'
);
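The microstop-related settings can be read as a small decision rule; a sketch under assumptions (function and label names are illustrative): a stop shorter than IgnoreMicrostopUnderThisDurationInSeconds is ignored entirely (-1 disables that filter), a stop shorter than MicrostopDurationInSeconds counts as a microstop, and anything longer is a regular stop.

```python
def classify_stop(duration_s: float,
                  microstop_duration_s: int = 120,
                  ignore_under_s: int = -1) -> str:
    """Classify a stop using the configuration semantics described above.

    Defaults mirror the table defaults: microstop threshold of 120 s,
    ignore filter disabled (-1).
    """
    if ignore_under_s > 0 and duration_s < ignore_under_s:
        return "ignored"
    if duration_s < microstop_duration_s:
        return "microstop"
    return "stop"

print(classify_stop(45))    # microstop (under the 120 s default threshold)
print(classify_stop(300))   # stop
```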

3.2.3 - countTable

countTable contains all reported counts of all assets.

Usage

This table contains all reported counts of the assets.

Structure

key | data type | description | example
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset id (see assetTable) | 1
count | integer | A count greater than 0 | 1

Relations

countTable

DDL

CREATE TABLE IF NOT EXISTS countTable
(
    timestamp                TIMESTAMPTZ                         NOT NULL,
    asset_id            SERIAL REFERENCES assetTable (id),
    count INTEGER CHECK (count > 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('countTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON countTable (asset_id, timestamp DESC);

3.2.4 - orderTable

orderTable contains orders for production.

Usage

This table stores production orders.

Structure

key | data type | description | example
order_id | serial | Auto incrementing id | 0
order_name | text | Name of the order | Scarjit-500-DaVinci-1-24062022
product_id | serial | Product id to produce | 1
begin_timestamp | timestamptz | Begin timestamp of the order | 0
end_timestamp | timestamptz | End timestamp of the order | 10000
target_units | integer | How many units to produce | 500
asset_id | serial | Which asset to produce on (see assetTable) | 1

Relations

orderTable

DDL

CREATE TABLE IF NOT EXISTS orderTable
(
    order_id        SERIAL          PRIMARY KEY,
    order_name      TEXT            NOT NULL,
    product_id      SERIAL          REFERENCES productTable (product_id),
    begin_timestamp TIMESTAMPTZ,
    end_timestamp   TIMESTAMPTZ,
    target_units    INTEGER,
    asset_id        SERIAL          REFERENCES assetTable (id),
    unique (asset_id, order_name),
    CHECK (begin_timestamp < end_timestamp),
    CHECK (target_units > 0),
    EXCLUDE USING gist (asset_id WITH =, tstzrange(begin_timestamp, end_timestamp) WITH &&) WHERE (begin_timestamp IS NOT NULL AND end_timestamp IS NOT NULL)
);
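The EXCLUDE constraint rejects two orders on the same asset whose time ranges overlap. PostgreSQL's tstzrange is half-open by default ([begin, end)), so its && (overlap) operator can be mirrored in plain code; a sketch for intuition:

```python
def ranges_overlap(begin_a, end_a, begin_b, end_b) -> bool:
    """Mirror the && (overlap) test of PostgreSQL tstzrange values.

    tstzrange(a, b) defaults to the half-open interval [a, b), so two
    ranges overlap exactly when each one starts before the other ends.
    """
    return begin_a < end_b and begin_b < end_a

# Back-to-back orders on the same asset do not overlap,
print(ranges_overlap(0, 10000, 10000, 20000))  # False
# but any shared interval is rejected by the EXCLUDE constraint.
print(ranges_overlap(0, 10000, 5000, 20000))   # True
```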

3.2.5 - processValueStringTable

processValueStringTable contains process values.

Usage

This table stores process values, for example the toner level of a printer or the flow rate of a pump. It has a closely related table for storing number values, processValueTable.

Structure

key | data type | description | example
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset id (see assetTable) | 1
valueName | text | Name of the process value | toner-level
value | string | Value of the process value | 100

Relations

processValueTable

DDL

CREATE TABLE IF NOT EXISTS processValueStringTable
(
    timestamp               TIMESTAMPTZ                         NOT NULL,
    asset_id                SERIAL                              REFERENCES assetTable (id),
    valueName               TEXT                                NOT NULL,
    value                   TEXT                                NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueStringTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (asset_id, timestamp DESC);

-- creating an index to increase performance
CREATE INDEX ON processValueStringTable (valuename);

3.2.6 - processValueTable

processValueTable contains process values.

Usage

This table stores process values, for example the toner level of a printer or the flow rate of a pump. It has a closely related table for storing string values, processValueStringTable.

Structure

key | data type | description | example
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset id (see assetTable) | 1
valueName | text | Name of the process value | toner-level
value | double | Value of the process value | 100

Relations

processValueTable

DDL

CREATE TABLE IF NOT EXISTS processValueTable
(
    timestamp               TIMESTAMPTZ                         NOT NULL,
    asset_id                SERIAL                              REFERENCES assetTable (id),
    valueName               TEXT                                NOT NULL,
    value                   DOUBLE PRECISION                    NULL,
    UNIQUE(timestamp, asset_id, valueName)
);
-- creating hypertable
SELECT create_hypertable('processValueTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON processValueTable (asset_id, timestamp DESC);

-- creating an index to increase performance
CREATE INDEX ON processValueTable (valuename);

3.2.7 - productTable

productTable contains products in production.

Usage

This table stores the products to be produced at the assets.

Structure

key | data type | description | example
product_id | serial | Auto incrementing id | 0
product_name | text | Name of the product | Painting-DaVinci-1
asset_id | serial | Asset producing this product (see assetTable) | 1
time_per_unit_in_seconds | real | Time in seconds to produce this product | 600

Relations

productTable

DDL

CREATE TABLE IF NOT EXISTS productTable
(
    product_id                  SERIAL PRIMARY KEY,
    product_name                TEXT NOT NULL,
    asset_id                    SERIAL REFERENCES assetTable (id),
    time_per_unit_in_seconds    REAL NOT NULL,
    UNIQUE(product_name, asset_id),
    CHECK (time_per_unit_in_seconds > 0)
);

3.2.8 - recommendationTable

recommendationTable contains the recommendations given for the shop floor assets.

Usage

This table stores recommendations.

Structure

key | data type | description | example
uid | text | Id of the recommendation | refill_toner
timestamp | timestamptz | Timestamp of recommendation insertion | 1
recommendationType | integer | Used to subscribe people to specific types only | 3
enabled | bool | Whether the recommendation may be output | true
recommendationValues | text | Values to change to resolve the recommendation | { “toner-level”: 100 }
diagnoseTextDE | text | Diagnosis text in German | “Der Toner ist leer”
diagnoseTextEN | text | Diagnosis text in English | “The toner is empty”
recommendationTextDE | text | Recommendation text in German | “Bitte den Toner auffüllen”
recommendationTextEN | text | Recommendation text in English | “Please refill the toner”

Relations

recommendationTable

DDL

CREATE TABLE IF NOT EXISTS recommendationTable
(
    uid                     TEXT                                PRIMARY KEY,
    timestamp               TIMESTAMPTZ                         NOT NULL,
    recommendationType      INTEGER                             NOT NULL,
    enabled                 BOOLEAN                             NOT NULL,
    recommendationValues    TEXT,
    diagnoseTextDE          TEXT,
    diagnoseTextEN          TEXT,
    recommendationTextDE    TEXT,
    recommendationTextEN    TEXT
);

3.2.9 - shiftTable

shiftTable contains shifts with their asset, start, and end timestamps.

Usage

This table stores shifts.

Structure

key | data type | description | example
id | serial | Auto incrementing id | 0
type | integer | Shift type (1 for shift, 0 for no shift) | 1
begin_timestamp | timestamptz | Begin of the shift | 3
end_timestamp | timestamptz | End of the shift | 10
asset_id | serial | Asset the shift is performed on (see assetTable) | 1

Relations

shiftTable

DDL

-- Using btree_gist to avoid overlapping shifts
-- Source: https://gist.github.com/fphilipe/0a2a3d50a9f3834683bf
CREATE EXTENSION btree_gist;
CREATE TABLE IF NOT EXISTS shiftTable
(
    id              SERIAL      PRIMARY KEY,
    type            INTEGER,
    begin_timestamp TIMESTAMPTZ NOT NULL,
    end_timestamp   TIMESTAMPTZ,
    asset_id        SERIAL      REFERENCES assetTable (id),
    unique (begin_timestamp, asset_id),
    CHECK (begin_timestamp < end_timestamp),
    EXCLUDE USING gist (asset_id WITH =, tstzrange(begin_timestamp, end_timestamp) WITH &&)
);

3.2.10 - stateTable

stateTable contains the states of all assets.

Usage

This table contains all state changes of the assets.

Structure

key | data type | description | example
timestamp | timestamptz | Entry timestamp | 0
asset_id | serial | Asset ID (see assetTable) | 1
state | integer | State ID (see states) | 40000

Relations

stateTable

DDL

CREATE TABLE IF NOT EXISTS stateTable
(
    timestamp   TIMESTAMPTZ NOT NULL,
    asset_id    SERIAL      REFERENCES assetTable (id),
    state       INTEGER     CHECK (state >= 0),
    UNIQUE(timestamp, asset_id)
);
-- creating hypertable
SELECT create_hypertable('stateTable', 'timestamp');

-- creating an index to increase performance
CREATE INDEX ON stateTable (asset_id, timestamp DESC);

3.2.11 - uniqueProductTable

uniqueProductTable contains unique products and their IDs.

Usage

This table stores unique products.

Structure

key | data type | description | example
uid | text | ID of a unique product | 0
asset_id | serial | Asset id (see assetTable) | 1
begin_timestamp_ms | timestamptz | Time when the product entered the asset | 0
end_timestamp_ms | timestamptz | Time when the product left the asset | 100
product_id | text | ID of the product (see productTable) | 1
is_scrap | boolean | True if the product is scrap | true
quality_class | text | Quality class of the product | A
station_id | text | ID of the station where the product was processed | Soldering Iron-1

Relations

uniqueProductTable

DDL

CREATE TABLE IF NOT EXISTS uniqueProductTable
(
    uid                 TEXT        NOT NULL,
    asset_id            SERIAL      REFERENCES assetTable (id),
    begin_timestamp_ms  TIMESTAMPTZ NOT NULL,
    end_timestamp_ms    TIMESTAMPTZ NOT NULL,
    product_id          TEXT        NOT NULL,
    is_scrap            BOOLEAN     NOT NULL,
    quality_class       TEXT        NOT NULL,
    station_id          TEXT        NOT NULL,
    UNIQUE(uid, asset_id, station_id),
    CHECK (begin_timestamp_ms < end_timestamp_ms)
);

-- creating an index to increase performance
CREATE INDEX ON uniqueProductTable (asset_id, uid, station_id);

3.3 - States

States are the core of the database model. They represent the state of the machine at a given point in time.

States Documentation Index

Introduction

This documentation outlines the various states used in the United Manufacturing Hub software stack to calculate OEE/KPI and other production metrics.

State Categories

Glossary

  • OEE: Overall Equipment Effectiveness
  • KPI: Key Performance Indicator

Conclusion

This documentation provides a comprehensive overview of the states used in the United Manufacturing Hub software stack and their respective categories. For more information on each state category and its individual states, please refer to the corresponding subpages.
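The numeric ranges in the subpage titles below map each state to exactly one category, so a category can be derived from the state value alone; a sketch (names are illustrative, ranges taken from the subsections below):

```python
# (low, high, category) ranges as documented in the subsections below.
STATE_CATEGORIES = [
    (10000, 29999, "Active"),
    (30000, 59999, "Unknown"),
    (60000, 99999, "Material"),
    (100000, 139999, "Process"),
    (140000, 159999, "Operator"),
    (160000, 179999, "Planning"),
    (180000, 229999, "Technical"),
]

def state_category(state: int) -> str:
    """Map a numeric state to its category by range lookup."""
    for low, high, name in STATE_CATEGORIES:
        if low <= state <= high:
            return name
    raise ValueError(f"unknown state {state}")

print(state_category(10000))   # Active (ProducingAtFullSpeedState)
print(state_category(40000))   # Unknown (UnspecifiedStopState)
print(state_category(220000))  # Technical (TechnicalOtherStop)
```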

3.3.1 - Active (10000-29999)

These states represent that the asset is actively producing

10000: ProducingAtFullSpeedState

This asset is running at full speed.

Examples for ProducingAtFullSpeedState

  • WS_Cur_State: Operating
  • PackML/Tobacco: Execute

20000: ProducingAtLowerThanFullSpeedState

Asset is producing, but not at full speed.

Examples for ProducingAtLowerThanFullSpeedState

  • WS_Cur_Prog: StartUp
  • WS_Cur_Prog: RunDown
  • WS_Cur_State: Stopping
  • PackML/Tobacco: Stopping
  • WS_Cur_State: Aborting
  • PackML/Tobacco: Aborting
  • WS_Cur_State: Holding
  • WS_Cur_State: Unholding
  • PackML/Tobacco: Unholding
  • WS_Cur_State: Suspending
  • PackML/Tobacco: Suspending
  • WS_Cur_State: Unsuspending
  • PackML/Tobacco: Unsuspending
  • PackML/Tobacco: Completing
  • WS_Cur_Prog: Production
  • EUROMAP: MANUAL_RUN
  • EUROMAP: CONTROLLED_RUN

Currently not included:

  • WS_Prog_Step: all

3.3.2 - Unknown (30000-59999)

These states represent that the asset is in an unspecified state

30000: UnknownState

Data for that particular asset is not available (e.g. the connection to the PLC is disrupted).

Examples for UnknownState

  • WS_Cur_Prog: Undefined
  • EUROMAP: Offline

40000: UnspecifiedStopState

The asset is not producing, but the reason is unknown at the time.

Examples for UnspecifiedStopState

  • WS_Cur_State: Clearing
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Emergency Stop
  • WS_Cur_State: Resetting
  • PackML/Tobacco: Clearing
  • WS_Cur_State: Held
  • EUROMAP: Idle
  • Tobacco: Other
  • WS_Cur_State: Stopped
  • PackML/Tobacco: Stopped
  • WS_Cur_State: Starting
  • PackML/Tobacco: Starting
  • WS_Cur_State: Prepared
  • WS_Cur_State: Idle
  • PackML/Tobacco: Idle
  • PackML/Tobacco: Complete
  • EUROMAP: READY_TO_RUN

50000: MicrostopState

The asset is not producing for a short period (typically around five minutes), but the reason is unknown at the time.

3.3.3 - Material (60000-99999)

These states represent that the asset has issues regarding materials.

60000: InletJamState

This machine does not perform its intended function due to a lack of material flow in the infeed of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several inlets, the lack in the inlet refers to the main flow, i.e. to the material (crate, bottle) that is fed in the direction of the filling machine (central machine). The defect in the infeed is an extraneous defect, but because of its importance for visualization and technical reporting, it is recorded separately.

Examples for InletJamState

  • WS_Cur_State: Lack

70000: OutletJamState

The machine does not perform its intended function as a result of a jam in the good-flow discharge of the machine, detected by the sensor system of the control system (machine stop). In the case of machines that have several discharges, the jam in the discharge refers to the main flow, i.e. to the good (crate, bottle) that is fed in the direction of the filling machine (central machine) or away from it. The jam in the outfeed is an external fault, but it is recorded separately because of its importance for visualization and technical reporting.

Examples for OutletJamState

  • WS_Cur_State: Tailback

80000: CongestionBypassState

The machine does not perform its intended function due to a shortage in the bypass supply or a jam in the bypass discharge of the machine, detected by the sensor system of the control system (machine stop). This condition can only occur in machines with two outlets or inlets, in which the bypass is in turn the inlet or outlet of an upstream or downstream machine of the filling line (packaging and palletizing machines). The jam/shortage in the auxiliary flow is an external fault, but it is recorded separately due to its importance for visualization and technical reporting.

Examples for the CongestionBypassState

  • WS_Cur_State: Lack/Tailback Branch Line

90000: MaterialIssueOtherState

The asset has a material issue, but it is not further specified.

Examples for MaterialIssueOtherState

  • WS_Mat_Ready (Information of which material is lacking)
  • PackML/Tobacco: Suspended

3.3.4 - Process (100000-139999)

These states represent that the asset is in a stop, which belongs to the process and cannot be avoided.

100000: ChangeoverState

The asset is in a changeover process between products.

Examples for ChangeoverState

  • WS_Cur_Prog: Program-Changeover
  • Tobacco: CHANGE OVER

110000: CleaningState

The asset is currently in a cleaning process.

Examples for CleaningState

  • WS_Cur_Prog: Program-Cleaning
  • Tobacco: CLEAN

120000: EmptyingState

The asset is currently being emptied, e.g. to prevent mold in food products over long breaks such as the weekend.

Examples for EmptyingState

  • Tobacco: EMPTY OUT

130000: SettingUpState

This machine is currently preparing itself for production, e.g. heating up.

Examples for SettingUpState

  • EUROMAP: PREPARING

3.3.5 - Operator (140000-159999)

These states represent that the asset is stopped because of operator related issues.

140000: OperatorNotAtMachineState

The operator is not at the machine.

150000: OperatorBreakState

The operator is taking a break.

This is different from a planned shift as it could contribute to performance losses.

Examples for OperatorBreakState

  • WS_Cur_Prog: Program-Break

3.3.6 - Planning (160000-179999)

These states represent that the asset is stopped because it is planned to be stopped (planned idle time).

160000: NoShiftState

There is no shift planned at that asset.

170000: NoOrderState

There is no order planned at that asset.

3.3.7 - Technical (180000-229999)

These states represent that the asset has a technical issue.

180000: EquipmentFailureState

The asset itself is defect, e.g. a broken engine.

Examples for EquipmentFailureState

  • WS_Cur_State: Equipment Failure

190000: ExternalFailureState

There is an external failure, e.g. missing compressed air.

Examples for ExternalFailureState

  • WS_Cur_State: External Failure

200000: ExternalInterferenceState

There is an external interference, e.g. the crane to move the material is currently unavailable.

210000: PreventiveMaintenanceStop

A planned maintenance action.

Examples for PreventiveMaintenanceStop

  • WS_Cur_Prog: Program-Maintenance
  • PackML: Maintenance
  • EUROMAP: MAINTENANCE
  • Tobacco: MAINTENANCE

220000: TechnicalOtherStop

The asset has a technical issue, but it is not specified further.

Examples for TechnicalOtherStop

  • WS_Not_Of_Fail_Code
  • PackML: Held
  • EUROMAP: MALFUNCTION
  • Tobacco: MANUAL
  • Tobacco: SET UP
  • Tobacco: REMOTE SERVICE